PR | shawnhanx: Migrate kubelet to use v1 Event API |
Result | FAILURE |
Tests | 9 failed / 950 succeeded |
Started | |
Elapsed | 55m7s |
Revision | |
Builder | b4a9a73a-c56a-11ed-8b47-528b871c14eb |
Refs | master:fe91bc25 100600:2328465e |
control_plane_node_os_image | ubuntu-2204-jammy-v20220712a |
infra-commit | ade17619a |
job-version | v1.27.0-beta.0.24+d1921ebdb322e0 |
kubetest-version | v20230222-b5208facd4 |
repo | k8s.io/kubernetes |
repo-commit | d1921ebdb322e07afd66c403b98ad4913e417527 |
repos | {'k8s.io/kubernetes': 'master:fe91bc257b505eb6057eb50b9c550a7c63e9fb91,100600:2328465ed9dbfd3ec68fffb03cdc4443951a9695', 'k8s.io/release': 'master'} |
revision | v1.27.0-beta.0.24+d1921ebdb322e0 |
worker_node_os_image | ubuntu-2204-jammy-v20220712a |
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sEvents\sshould\sbe\ssent\sby\skubelets\sand\sthe\sscheduler\sabout\spods\sscheduling\sand\srunning$'
[FAILED] timed out waiting for the condition In [It] at: test/e2e/node/events.go:115 @ 03/18/23 09:32:55.662 (from junit_01.xml)
> Enter [BeforeEach] [sig-node] Events - set up framework | framework.go:191 @ 03/18/23 09:27:50.971 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:211 @ 03/18/23 09:27:50.971 Mar 18 09:27:50.971: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename events - test/e2e/framework/framework.go:250 @ 03/18/23 09:27:50.973 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 03/18/23 09:27:51.104 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:262 @ 03/18/23 09:27:51.191 < Exit [BeforeEach] [sig-node] Events - set up framework | framework.go:191 @ 03/18/23 09:27:51.275 (304ms) > Enter [BeforeEach] [sig-node] Events - test/e2e/framework/metrics/init/init.go:33 @ 03/18/23 09:27:51.275 < Exit [BeforeEach] [sig-node] Events - test/e2e/framework/metrics/init/init.go:33 @ 03/18/23 09:27:51.275 (0s) > Enter [It] should be sent by kubelets and the scheduler about pods scheduling and running - test/e2e/node/events.go:41 @ 03/18/23 09:27:51.275 STEP: creating the pod - test/e2e/node/events.go:45 @ 03/18/23 09:27:51.275 STEP: submitting the pod to kubernetes - test/e2e/node/events.go:68 @ 03/18/23 09:27:51.275 STEP: verifying the pod is in kubernetes - test/e2e/node/events.go:79 @ 03/18/23 09:27:53.42 STEP: retrieving the pod - test/e2e/node/events.go:86 @ 03/18/23 09:27:53.464 Mar 18 09:27:53.536: INFO: &Pod{ObjectMeta:{send-events-ea072554-2752-48da-811e-c2c81f3284b2 events-6719 d01e8481-cacc-434a-bf5e-d3179c4a4473 14078 0 2023-03-18 09:27:51 +0000 UTC <nil> <nil> map[name:foo time:275116722] map[] [] [] [{e2e.test Update v1 2023-03-18 09:27:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-03-18 09:27:52 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.3.227\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2pb99,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2pb99,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFr
omSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:e2e-9e86028ad1-674b9-minion-group-l6p2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-18 09:27:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-18 09:27:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-18 
09:27:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-18 09:27:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.40.0.5,PodIP:10.64.3.227,StartTime:2023-03-18 09:27:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-18 09:27:51 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,ImageID:registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:containerd://ce8a0834cfa667c9d3dc056096b0163016c47fc3374f01364b87e6cc261afec3,Started:*true,AllocatedResources:ResourceList{},Resources:nil,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.3.227,},},EphemeralContainerStatuses:[]ContainerStatus{},Resize:,},} STEP: checking for scheduler event about the pod - test/e2e/node/events.go:94 @ 03/18/23 09:27:53.536 Mar 18 09:27:55.579: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod - test/e2e/node/events.go:114 @ 03/18/23 09:27:55.579 Automatically polling progress: [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running (Spec Runtime: 5m0.304s) test/e2e/node/events.go:41 In [It] (Node Runtime: 5m0s) test/e2e/node/events.go:41 At [By Step] checking for kubelet event about the pod (Step Runtime: 4m55.696s) test/e2e/node/events.go:114 Spec Goroutine goroutine 2592 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x7275998, 0xc000132000}, 0xc0015c90b0, 0x2bc6eca?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:205 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7275998, 0xc000132000}, 0xc0?, 0x2bc7c05?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/poll.go:260 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7275998, 0xc000132000}, 0x0?, 0xc005a15c10?, 0x21c74a7?) vendor/k8s.io/apimachinery/pkg/util/wait/poll.go:85 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x6b641cc?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/poll.go:66 > k8s.io/kubernetes/test/e2e/node.glob..func3.1({0x7f262c2653a0?, 0xc005b22870}) test/e2e/node/events.go:115 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x727dd60?, 0xc005b22870}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:456 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:863 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:850 Mar 18 09:32:55.662: INFO: Unexpected error: <wait.errInterrupted>: timed out waiting for the condition { cause: <*errors.errorString | 0xc0001c9bd0>{ s: "timed out waiting for the condition", }, } [FAILED] timed out waiting for the condition In [It] at: test/e2e/node/events.go:115 @ 03/18/23 09:32:55.662 < Exit [It] should be sent by kubelets and the scheduler about pods scheduling and running - test/e2e/node/events.go:41 @ 03/18/23 09:32:55.662 (5m4.388s) > Enter [AfterEach] [sig-node] Events - test/e2e/framework/node/init/init.go:33 @ 03/18/23 09:32:55.662 Mar 18 09:32:55.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-node] Events - test/e2e/framework/node/init/init.go:33 @ 03/18/23 09:32:55.783 (121ms) > Enter [DeferCleanup (Each)] [sig-node] Events - test/e2e/node/events.go:69 @ 03/18/23 09:32:55.783 STEP: deleting the pod - 
test/e2e/node/events.go:70 @ 03/18/23 09:32:55.783 < Exit [DeferCleanup (Each)] [sig-node] Events - test/e2e/node/events.go:69 @ 03/18/23 09:32:55.836 (53ms) > Enter [DeferCleanup (Each)] [sig-node] Events - test/e2e/framework/metrics/init/init.go:35 @ 03/18/23 09:32:55.836 < Exit [DeferCleanup (Each)] [sig-node] Events - test/e2e/framework/metrics/init/init.go:35 @ 03/18/23 09:32:55.836 (0s) > Enter [DeferCleanup (Each)] [sig-node] Events - dump namespaces | framework.go:209 @ 03/18/23 09:32:55.836 STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/18/23 09:32:55.836 STEP: Collecting events from namespace "events-6719". - test/e2e/framework/debug/dump.go:42 @ 03/18/23 09:32:55.836 STEP: Found 1 events. - test/e2e/framework/debug/dump.go:46 @ 03/18/23 09:32:55.877 Mar 18 09:32:55.877: INFO: At 2023-03-18 09:27:51 +0000 UTC - event for send-events-ea072554-2752-48da-811e-c2c81f3284b2: {default-scheduler } Scheduled: Successfully assigned events-6719/send-events-ea072554-2752-48da-811e-c2c81f3284b2 to e2e-9e86028ad1-674b9-minion-group-l6p2 Mar 18 09:32:55.920: INFO: POD NODE PHASE GRACE CONDITIONS Mar 18 09:32:55.920: INFO: send-events-ea072554-2752-48da-811e-c2c81f3284b2 e2e-9e86028ad1-674b9-minion-group-l6p2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-18 09:27:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-18 09:27:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-18 09:27:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-18 09:27:51 +0000 UTC }] Mar 18 09:32:55.920: INFO: Mar 18 09:32:56.062: INFO: Logging node info for node e2e-9e86028ad1-674b9-master Mar 18 09:32:56.155: INFO: Node Info: &Node{ObjectMeta:{e2e-9e86028ad1-674b9-master 267e5023-e569-49ff-9163-80ff52b2e553 14305 0 2023-03-18 09:22:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux 
cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-18 09:22:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-03-18 09:27:58 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20617822208 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3848937472 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{18556039957 0} {<nil>} 18556039957 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3586793472 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:27:58 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 
09:27:58 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:27:58 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:27:58 +0000 UTC,LastTransitionTime:2023-03-18 09:22:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.2,},NodeAddress{Type:ExternalIP,Address:34.83.71.20,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-master.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-master.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:42d0f710c860b2bd41ccdaf5ca173acd,SystemUUID:42d0f710-c860-b2bd-41cc-daf5ca173acd,BootID:16a230c9-f4cf-4c24-8530-18ce9126f638,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:121906531,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:113849341,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 
registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:59679728,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 18 09:32:56.155: INFO: Logging kubelet events for node e2e-9e86028ad1-674b9-master Mar 18 09:32:56.250: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-master Mar 18 09:32:56.325: INFO: l7-lb-controller-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:22:07 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:56.325: INFO: Container l7-lb-controller ready: true, restart count 2 Mar 18 09:32:56.325: INFO: metadata-proxy-v0.1-nnl66 started at 2023-03-18 09:22:38 +0000 UTC (0+2 container statuses recorded) Mar 18 09:32:56.325: 
INFO: Container metadata-proxy ready: true, restart count 0 Mar 18 09:32:56.325: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Mar 18 09:32:56.325: INFO: etcd-server-events-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:56.325: INFO: Container etcd-container ready: true, restart count 0 Mar 18 09:32:56.325: INFO: kube-apiserver-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:56.325: INFO: Container kube-apiserver ready: true, restart count 0 Mar 18 09:32:56.325: INFO: kube-addon-manager-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:22:06 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:56.325: INFO: Container kube-addon-manager ready: true, restart count 0 Mar 18 09:32:56.325: INFO: kube-scheduler-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:56.325: INFO: Container kube-scheduler ready: true, restart count 0 Mar 18 09:32:56.325: INFO: etcd-server-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:56.325: INFO: Container etcd-container ready: true, restart count 0 Mar 18 09:32:56.325: INFO: konnectivity-server-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:56.325: INFO: Container konnectivity-server-container ready: true, restart count 0 Mar 18 09:32:56.325: INFO: kube-controller-manager-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:56.325: INFO: Container kube-controller-manager ready: true, restart count 1 Mar 18 09:32:56.540: INFO: Latency metrics for node e2e-9e86028ad1-674b9-master Mar 18 09:32:56.540: INFO: Logging node info for node e2e-9e86028ad1-674b9-minion-group-6qbb Mar 18 09:32:56.583: INFO: Node Info: 
&Node{ObjectMeta:{e2e-9e86028ad1-674b9-minion-group-6qbb 8c1a6f16-41d1-4196-bf43-efe27f8d8a66 30537 0 2023-03-18 09:22:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-minion-group-6qbb kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:e2e-9e86028ad1-674b9-minion-group-6qbb topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-expansion-6146":"e2e-9e86028ad1-674b9-minion-group-6qbb"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-03-18 09:22:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-03-18 09:22:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2023-03-18 09:31:17 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-03-18 09:31:53 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-03-18 09:32:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-minion-group-6qbb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103865303040 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7806021632 0} {<nil>} 7623068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93478772582 0} {<nil>} 93478772582 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7543877632 0} {<nil>} 7367068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-03-18 09:31:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-03-18 09:31:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-03-18 09:31:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-03-18 09:31:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:True,LastHeartbeatTime:2023-03-18 09:31:16 
+0000 UTC,LastTransitionTime:2023-03-18 09:31:15 +0000 UTC,Reason:DockerHung,Message:kernel: INFO: task docker:12345 blocked for more than 120 seconds.,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-03-18 09:31:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:32:03 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 09:32:03 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:32:03 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:32:03 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.3,},NodeAddress{Type:ExternalIP,Address:35.197.71.72,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-minion-group-6qbb.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-minion-group-6qbb.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8effcb9d241b10eacb89037ecd76b44c,SystemUUID:8effcb9d-241b-10ea-cb89-037ecd76b44c,BootID:8ef64925-c34b-4653-8945-c2170edc6327,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:72680802,},ContainerImage{Names:[registry.k8s.io/node-problem-detector/node-problem-detector@sha256:c4a75e50c3ee30daa78b7149de781f66236885850b1ea7b0c1a062af5019e019 registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.9],SizeBytes:56740357,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e 
registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:e468dddcd275163a042ab297b2d8c2aca50d5e148d2d22f3b6ba119e2f31fa79 registry.k8s.io/sig-storage/csi-provisioner:v3.4.0],SizeBytes:27427836,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 
registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[gcr.io/k8s-authenticated-test/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/k8s-authenticated-test/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:9b2d9abda017c28b12f84a344f57cd73fbdb6c2bd7dd5bdd5018246ad1093ba6 registry.k8s.io/sig-storage/hostpathplugin:v1.11.0],SizeBytes:18233005,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns:v1.10.1],SizeBytes:16190758,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/build-image/distroless-iptables@sha256:13ac9b3c476d7290a4451d65d09d6016a2cb89836ffbfa4eae55b72731a22080 registry.k8s.io/build-image/distroless-iptables:v0.2.2],SizeBytes:7729580,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 
registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 18 09:32:56.584: INFO: Logging kubelet events for node e2e-9e86028ad1-674b9-minion-group-6qbb Mar 18 09:32:56.626: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-minion-group-6qbb Mar 18 09:32:56.719: INFO: konnectivity-agent-zpvjh started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:56.719: INFO: Container konnectivity-agent ready: true, restart count 0 Mar 18 09:32:56.719: INFO: webserver-pod started at 2023-03-18 09:28:16 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:56.719: INFO: Container agnhost ready: false, restart count 0 Mar 18 09:32:56.719: INFO: foo-5r4mf started at 2023-03-18 09:32:30 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:56.719: INFO: Container c ready: true, restart count 0 Mar 18 09:32:56.719: INFO: netserver-0 started at 2023-03-18 09:32:44 +0000 UTC (0+1 container statuses recorded) Mar 18 
09:32:56.719: INFO: Container webserver ready: false, restart count 0 Mar 18 09:32:56.719: INFO: hostexec-e2e-9e86028ad1-674b9-minion-group-6qbb-5mnjh started at 2023-03-18 09:32:47 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:56.719: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:32:56.719: INFO: csi-mockplugin-0 started at 2023-03-18 09:32:47 +0000 UTC (0+3 container statuses recorded) Mar 18 09:32:56.719: INFO: Container csi-provisioner ready: true, restart count 0 Mar 18 09:32:56.719: INFO: Container driver-registrar ready: true, restart count 0 Mar 18 09:32:56.719: INFO: Container mock ready: true, restart count 0 Mar 18 09:32:56.719: INFO: pvc-volume-tester-gg4w7 started at 2023-03-18 09:32:54 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:56.719: INFO: Container volume-tester ready: true, restart count 0 Mar 18 09:32:56.719: INFO: netserver-0 started at 2023-03-18 09:32:21 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:56.719: INFO: Container webserver ready: true, restart count 0 Mar 18 09:32:56.719: INFO: pod-subpath-test-preprovisionedpv-xt6l started at 2023-03-18 09:32:55 +0000 UTC (1+1 container statuses recorded) Mar 18 09:32:56.719: INFO: Init container init-volume-preprovisionedpv-xt6l ready: false, restart count 0 Mar 18 09:32:56.719: INFO: Container test-container-subpath-preprovisionedpv-xt6l ready: false, restart count 0 Mar 18 09:32:56.719: INFO: pod1 started at 2023-03-18 09:32:29 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:56.719: INFO: Container container1 ready: true, restart count 0 Mar 18 09:32:56.719: INFO: npd-v0.8.9-fhnmg started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:56.719: INFO: Container node-problem-detector ready: true, restart count 0 Mar 18 09:32:56.719: INFO: busybox-ae5c04ae-70fe-416a-8e6f-23726e0785d1 started at 2023-03-18 09:32:14 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:56.719: INFO: Container 
busybox ready: true, restart count 0 Mar 18 09:32:56.719: INFO: netserver-0 started at 2023-03-18 09:32:45 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:56.719: INFO: Container webserver ready: true, restart count 0 Mar 18 09:32:56.719: INFO: kube-proxy-e2e-9e86028ad1-674b9-minion-group-6qbb started at 2023-03-18 09:22:32 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:56.719: INFO: Container kube-proxy ready: true, restart count 0 Mar 18 09:32:56.719: INFO: busybox-49dd89e9-7b8f-42d9-badd-8ce6b328f9db started at 2023-03-18 09:32:31 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:56.719: INFO: Container busybox ready: true, restart count 0 Mar 18 09:32:56.719: INFO: csi-mockplugin-resizer-0 started at 2023-03-18 09:32:47 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:56.719: INFO: Container csi-resizer ready: true, restart count 0 Mar 18 09:32:56.719: INFO: pod2 started at 2023-03-18 09:32:29 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:56.719: INFO: Container container1 ready: true, restart count 0 Mar 18 09:32:56.719: INFO: metadata-proxy-v0.1-tpg7b started at 2023-03-18 09:22:32 +0000 UTC (0+2 container statuses recorded) Mar 18 09:32:56.719: INFO: Container metadata-proxy ready: true, restart count 0 Mar 18 09:32:56.719: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Mar 18 09:32:56.719: INFO: startup-cbe322cc-9c47-4991-912d-ad7a0e09a2cd started at 2023-03-18 09:32:11 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:56.719: INFO: Container busybox ready: true, restart count 0 Mar 18 09:32:56.719: INFO: liveness-6e42f777-5053-4740-856d-77370ed5796a started at 2023-03-18 09:31:03 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:56.719: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:32:56.719: INFO: foo-z7gdq started at 2023-03-18 09:32:30 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:56.719: INFO: Container c ready: true, restart count 0 
Mar 18 09:32:56.719: INFO: ss2-0 started at 2023-03-18 09:32:37 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:56.719: INFO: Container webserver ready: true, restart count 0 Mar 18 09:32:57.042: INFO: Latency metrics for node e2e-9e86028ad1-674b9-minion-group-6qbb Mar 18 09:32:57.042: INFO: Logging node info for node e2e-9e86028ad1-674b9-minion-group-l6p2 Mar 18 09:32:57.096: INFO: Node Info: &Node{ObjectMeta:{e2e-9e86028ad1-674b9-minion-group-l6p2 598ace61-5854-4ce4-a4e9-6965d58f0e1a 28919 0 2023-03-18 09:22:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-minion-group-l6p2 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:e2e-9e86028ad1-674b9-minion-group-l6p2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-18 09:22:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2023-03-18 09:31:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-03-18 09:31:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-03-18 09:32:30 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-minion-group-l6p2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103865303040 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7806021632 0} {<nil>} 7623068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93478772582 0} {<nil>} 93478772582 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7543877632 0} {<nil>} 7367068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-03-18 09:31:18 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-03-18 09:31:18 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-03-18 09:31:18 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-03-18 09:31:18 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-03-18 09:31:18 +0000 
UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:True,LastHeartbeatTime:2023-03-18 09:31:18 +0000 UTC,LastTransitionTime:2023-03-18 09:31:17 +0000 UTC,Reason:DockerHung,Message:kernel: INFO: task docker:12345 blocked for more than 120 seconds.,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:32:10 +0000 UTC,LastTransitionTime:2023-03-18 09:22:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 09:32:10 +0000 UTC,LastTransitionTime:2023-03-18 09:22:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:32:10 +0000 UTC,LastTransitionTime:2023-03-18 09:22:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:32:10 +0000 UTC,LastTransitionTime:2023-03-18 09:22:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.5,},NodeAddress{Type:ExternalIP,Address:35.230.36.22,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-minion-group-l6p2.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-minion-group-l6p2.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c65da20697a4bfad03dfea306c4caca3,SystemUUID:c65da206-97a4-bfad-03df-ea306c4caca3,BootID:bab97861-0225-4291-912a-eb1db18f8ad7,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:72680802,},ContainerImage{Names:[registry.k8s.io/node-problem-detector/node-problem-detector@sha256:c4a75e50c3ee30daa78b7149de781f66236885850b1ea7b0c1a062af5019e019 registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.9],SizeBytes:56740357,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e 
registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:e468dddcd275163a042ab297b2d8c2aca50d5e148d2d22f3b6ba119e2f31fa79 registry.k8s.io/sig-storage/csi-provisioner:v3.4.0],SizeBytes:27427836,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 
registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/debian-base@sha256:ebda8587ec0f49eb88ee3a608ef018484908cbc5aa32556a0d78356088c185d4 registry.k8s.io/debian-base:v2.0.0],SizeBytes:21093264,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:9b2d9abda017c28b12f84a344f57cd73fbdb6c2bd7dd5bdd5018246ad1093ba6 registry.k8s.io/sig-storage/hostpathplugin:v1.11.0],SizeBytes:18233005,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns:v1.10.1],SizeBytes:16190758,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/apparmor-loader@sha256:c2ba46d9cf4549528f80d4850630b712372715e0c556d35d5c3016144365d882 registry.k8s.io/e2e-test-images/apparmor-loader:1.4],SizeBytes:8695007,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 18 09:32:57.096: INFO: Logging kubelet events for node e2e-9e86028ad1-674b9-minion-group-l6p2 Mar 18 09:32:57.142: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-minion-group-l6p2 Mar 18 09:32:57.215: INFO: startup-script started at 2023-03-18 09:31:21 +0000 UTC (0+1 
container statuses recorded) Mar 18 09:32:57.215: INFO: Container startup-script ready: true, restart count 0 Mar 18 09:32:57.215: INFO: netserver-1 started at 2023-03-18 09:32:45 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.215: INFO: Container webserver ready: false, restart count 0 Mar 18 09:32:57.215: INFO: konnectivity-agent-879m7 started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.215: INFO: Container konnectivity-agent ready: true, restart count 0 Mar 18 09:32:57.215: INFO: ss2-1 started at 2023-03-18 09:32:38 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.215: INFO: Container webserver ready: true, restart count 0 Mar 18 09:32:57.215: INFO: webserver-pod started at 2023-03-18 09:30:10 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.215: INFO: Container agnhost ready: false, restart count 0 Mar 18 09:32:57.215: INFO: coredns-8f5847b64-8mvxr started at 2023-03-18 09:27:04 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.215: INFO: Container coredns ready: true, restart count 0 Mar 18 09:32:57.215: INFO: metadata-proxy-v0.1-424sh started at 2023-03-18 09:22:36 +0000 UTC (0+2 container statuses recorded) Mar 18 09:32:57.215: INFO: Container metadata-proxy ready: true, restart count 0 Mar 18 09:32:57.215: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Mar 18 09:32:57.215: INFO: kube-proxy-e2e-9e86028ad1-674b9-minion-group-l6p2 started at 2023-03-18 09:22:35 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.215: INFO: Container kube-proxy ready: true, restart count 0 Mar 18 09:32:57.215: INFO: pod-csi-inline-volumes started at 2023-03-18 09:32:51 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.215: INFO: Container pod-csi-inline-volumes ready: false, restart count 0 Mar 18 09:32:57.215: INFO: send-events-ea072554-2752-48da-811e-c2c81f3284b2 started at 2023-03-18 09:27:51 +0000 UTC (0+1 container statuses recorded) Mar 18 
09:32:57.215: INFO: Container p ready: true, restart count 0 Mar 18 09:32:57.215: INFO: npd-v0.8.9-zdpdp started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.215: INFO: Container node-problem-detector ready: true, restart count 0 Mar 18 09:32:57.215: INFO: netserver-1 started at 2023-03-18 09:32:21 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.215: INFO: Container webserver ready: true, restart count 0 Mar 18 09:32:57.215: INFO: exec-volume-test-preprovisionedpv-n288 started at 2023-03-18 09:32:55 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.215: INFO: Container exec-container-preprovisionedpv-n288 ready: false, restart count 0 Mar 18 09:32:57.215: INFO: sysctl-8cef6c45-e7a0-4261-bdbf-02719d33b51f started at 2023-03-18 09:29:45 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.215: INFO: Container test-container ready: false, restart count 0 Mar 18 09:32:57.215: INFO: test-container-pod started at 2023-03-18 09:32:44 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.215: INFO: Container webserver ready: true, restart count 0 Mar 18 09:32:57.215: INFO: hostexec-e2e-9e86028ad1-674b9-minion-group-l6p2-lgmtn started at 2023-03-18 09:32:44 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.215: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:32:57.215: INFO: my-hostname-basic-5254a01a-0d75-42e8-89ff-31318c4c0e1c-57fwb started at 2023-03-18 09:32:53 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.215: INFO: Container my-hostname-basic-5254a01a-0d75-42e8-89ff-31318c4c0e1c ready: true, restart count 0 Mar 18 09:32:57.215: INFO: startup-f5b19b0b-a43f-4992-9848-32551cfe5c34 started at 2023-03-18 09:30:09 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.215: INFO: Container busybox ready: false, restart count 0 Mar 18 09:32:57.215: INFO: netserver-1 started at 2023-03-18 09:32:44 +0000 UTC (0+1 container statuses recorded) Mar 18 
09:32:57.215: INFO: Container webserver ready: false, restart count 0 Mar 18 09:32:57.215: INFO: metrics-server-v0.5.2-57d47cbf5-gtdjb started at 2023-03-18 09:22:55 +0000 UTC (0+2 container statuses recorded) Mar 18 09:32:57.215: INFO: Container metrics-server ready: true, restart count 0 Mar 18 09:32:57.215: INFO: Container metrics-server-nanny ready: true, restart count 0 Mar 18 09:32:57.215: INFO: liveness-ed4151a6-b83b-4783-9e16-a1d6e037c6a1 started at 2023-03-18 09:30:43 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.215: INFO: Container agnhost-container ready: false, restart count 4 Mar 18 09:32:57.215: INFO: inline-volume-tester-rh5mw started at 2023-03-18 09:32:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.215: INFO: Container csi-volume-tester ready: true, restart count 0 Mar 18 09:32:57.215: INFO: rs-9k2tj started at 2023-03-18 09:32:16 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.215: INFO: Container donothing ready: false, restart count 0 Mar 18 09:32:57.215: INFO: external-provisioner-hsd6d started at 2023-03-18 09:32:31 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.215: INFO: Container nfs-provisioner ready: true, restart count 0 Mar 18 09:32:57.646: INFO: Latency metrics for node e2e-9e86028ad1-674b9-minion-group-l6p2 Mar 18 09:32:57.646: INFO: Logging node info for node e2e-9e86028ad1-674b9-minion-group-s3x0 Mar 18 09:32:57.699: INFO: Node Info: &Node{ObjectMeta:{e2e-9e86028ad1-674b9-minion-group-s3x0 4bd190bd-b287-42b8-bf6e-86b0dfbbe357 30758 0 2023-03-18 09:22:29 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-minion-group-s3x0 kubernetes.io/os:linux 
node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:e2e-9e86028ad1-674b9-minion-group-s3x0 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-read-write-once-pod-7422":"e2e-9e86028ad1-674b9-minion-group-s3x0","csi-mock-csi-mock-volumes-expansion-2073":"csi-mock-csi-mock-volumes-expansion-2073","csi-mock-csi-mock-volumes-workload-7352":"e2e-9e86028ad1-674b9-minion-group-s3x0"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-18 09:22:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {node-problem-detector Update v1 2023-03-18 09:31:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-03-18 09:32:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2023-03-18 09:32:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-minion-group-s3x0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103865303040 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7806021632 0} {<nil>} 7623068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93478772582 0} {<nil>} 93478772582 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7543877632 0} {<nil>} 7367068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-03-18 09:31:20 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-03-18 09:31:20 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-03-18 09:31:20 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:True,LastHeartbeatTime:2023-03-18 09:31:20 +0000 UTC,LastTransitionTime:2023-03-18 09:31:19 +0000 UTC,Reason:DockerHung,Message:kernel: INFO: task docker:12345 blocked for more than 120 seconds.,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-03-18 
09:31:20 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-03-18 09:31:20 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:39 +0000 UTC,LastTransitionTime:2023-03-18 09:22:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:31:31 +0000 UTC,LastTransitionTime:2023-03-18 09:22:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 09:31:31 +0000 UTC,LastTransitionTime:2023-03-18 09:22:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:31:31 +0000 UTC,LastTransitionTime:2023-03-18 09:22:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:31:31 +0000 UTC,LastTransitionTime:2023-03-18 09:22:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.4,},NodeAddress{Type:ExternalIP,Address:35.197.75.196,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-minion-group-s3x0.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-minion-group-s3x0.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f48035c3fd56450624fe69b9577c2359,SystemUUID:f48035c3-fd56-4506-24fe-69b9577c2359,BootID:f7e24489-9538-4f10-8ca6-393d7dcc2190,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:72680802,},ContainerImage{Names:[registry.k8s.io/node-problem-detector/node-problem-detector@sha256:c4a75e50c3ee30daa78b7149de781f66236885850b1ea7b0c1a062af5019e019 registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.9],SizeBytes:56740357,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e 
registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:e468dddcd275163a042ab297b2d8c2aca50d5e148d2d22f3b6ba119e2f31fa79 registry.k8s.io/sig-storage/csi-provisioner:v3.4.0],SizeBytes:27427836,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 
registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[gcr.io/k8s-authenticated-test/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/k8s-authenticated-test/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:9b2d9abda017c28b12f84a344f57cd73fbdb6c2bd7dd5bdd5018246ad1093ba6 registry.k8s.io/sig-storage/hostpathplugin:v1.11.0],SizeBytes:18233005,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns:v1.10.1],SizeBytes:16190758,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/apparmor-loader@sha256:c2ba46d9cf4549528f80d4850630b712372715e0c556d35d5c3016144365d882 
registry.k8s.io/e2e-test-images/apparmor-loader:1.4],SizeBytes:8695007,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 
registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-read-write-once-pod-7422^9e275a46-c56f-11ed-8153-eefb10531377],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-read-write-once-pod-7422^9e275a46-c56f-11ed-8153-eefb10531377,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-expansion-2073^4,DevicePath:,},},Config:nil,},} Mar 18 09:32:57.699: INFO: Logging kubelet events for node e2e-9e86028ad1-674b9-minion-group-s3x0 Mar 18 09:32:57.744: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-minion-group-s3x0 Mar 18 09:32:57.821: INFO: netserver-2 started at 2023-03-18 09:32:45 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.821: INFO: Container webserver ready: false, restart count 0 Mar 18 09:32:57.821: INFO: pod-ephm-test-projected-czmf started at 2023-03-18 09:31:49 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.821: INFO: Container test-container-subpath-projected-czmf ready: false, restart count 0 Mar 18 09:32:57.821: INFO: csi-mockplugin-attacher-0 started at 2023-03-18 09:32:52 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.821: INFO: Container csi-attacher ready: true, restart count 0 Mar 18 09:32:57.821: INFO: netserver-2 started at 2023-03-18 09:32:21 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.821: INFO: Container webserver ready: true, restart count 0 Mar 18 09:32:57.821: INFO: volume-snapshot-controller-0 started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.821: INFO: Container volume-snapshot-controller ready: true, restart count 0 Mar 18 09:32:57.821: INFO: pod-subpath-test-preprovisionedpv-bk5f started at 2023-03-18 09:32:54 +0000 UTC (2+2 container statuses recorded) Mar 18 09:32:57.821: INFO: Init container init-volume-preprovisionedpv-bk5f ready: true, restart count 0 Mar 18 09:32:57.821: INFO: Init container 
test-init-subpath-preprovisionedpv-bk5f ready: true, restart count 0 Mar 18 09:32:57.821: INFO: Container test-container-subpath-preprovisionedpv-bk5f ready: false, restart count 0 Mar 18 09:32:57.821: INFO: Container test-container-volume-preprovisionedpv-bk5f ready: false, restart count 0 Mar 18 09:32:57.821: INFO: kube-dns-autoscaler-7b444c59c9-bfphp started at 2023-03-18 09:27:27 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.821: INFO: Container autoscaler ready: true, restart count 0 Mar 18 09:32:57.821: INFO: csi-mockplugin-0 started at 2023-03-18 09:32:52 +0000 UTC (0+3 container statuses recorded) Mar 18 09:32:57.821: INFO: Container csi-provisioner ready: true, restart count 0 Mar 18 09:32:57.821: INFO: Container driver-registrar ready: true, restart count 0 Mar 18 09:32:57.821: INFO: Container mock ready: true, restart count 0 Mar 18 09:32:57.821: INFO: pod-eba73012-ac52-4fd8-9433-5e3755c6e150 started at 2023-03-18 09:31:24 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.821: INFO: Container write-pod ready: false, restart count 0 Mar 18 09:32:57.821: INFO: csi-hostpathplugin-0 started at 2023-03-18 09:31:07 +0000 UTC (0+7 container statuses recorded) Mar 18 09:32:57.821: INFO: Container csi-attacher ready: true, restart count 0 Mar 18 09:32:57.821: INFO: Container csi-provisioner ready: true, restart count 0 Mar 18 09:32:57.821: INFO: Container csi-resizer ready: true, restart count 0 Mar 18 09:32:57.821: INFO: Container csi-snapshotter ready: true, restart count 0 Mar 18 09:32:57.821: INFO: Container hostpath ready: true, restart count 0 Mar 18 09:32:57.821: INFO: Container liveness-probe ready: true, restart count 0 Mar 18 09:32:57.821: INFO: Container node-driver-registrar ready: true, restart count 0 Mar 18 09:32:57.821: INFO: hostexec-e2e-9e86028ad1-674b9-minion-group-s3x0-5k992 started at 2023-03-18 09:32:43 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.821: INFO: Container agnhost-container ready: true, 
restart count 0 Mar 18 09:32:57.821: INFO: concurrent-27985532-7xn9g started at 2023-03-18 09:32:00 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.821: INFO: Container c ready: true, restart count 0 Mar 18 09:32:57.821: INFO: metadata-proxy-v0.1-5k8j4 started at 2023-03-18 09:22:30 +0000 UTC (0+2 container statuses recorded) Mar 18 09:32:57.821: INFO: Container metadata-proxy ready: true, restart count 0 Mar 18 09:32:57.821: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Mar 18 09:32:57.821: INFO: csi-mockplugin-attacher-0 started at 2023-03-18 09:32:41 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.821: INFO: Container csi-attacher ready: true, restart count 0 Mar 18 09:32:57.821: INFO: kube-proxy-e2e-9e86028ad1-674b9-minion-group-s3x0 started at 2023-03-18 09:22:30 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.821: INFO: Container kube-proxy ready: true, restart count 0 Mar 18 09:32:57.821: INFO: netserver-2 started at 2023-03-18 09:32:57 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.821: INFO: Container webserver ready: false, restart count 0 Mar 18 09:32:57.821: INFO: konnectivity-agent-hv8gl started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.821: INFO: Container konnectivity-agent ready: true, restart count 0 Mar 18 09:32:57.821: INFO: pod-d2412de0-b366-4e9a-9eea-62c0f556acaa started at 2023-03-18 09:32:54 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.821: INFO: Container write-pod ready: true, restart count 0 Mar 18 09:32:57.821: INFO: pvc-volume-tester-vshq5 started at 2023-03-18 09:32:57 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.821: INFO: Container volume-tester ready: false, restart count 0 Mar 18 09:32:57.821: INFO: pod-9c52b100-d6e7-444a-b48f-2643e3956bf6 started at 2023-03-18 09:31:12 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.821: INFO: Container write-pod ready: true, restart count 0 
Mar 18 09:32:57.821: INFO: pvc-volume-tester-xnjxt started at 2023-03-18 09:32:53 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.821: INFO: Container volume-tester ready: false, restart count 0 Mar 18 09:32:57.821: INFO: csi-mockplugin-0 started at 2023-03-18 09:32:41 +0000 UTC (0+4 container statuses recorded) Mar 18 09:32:57.821: INFO: Container busybox ready: true, restart count 0 Mar 18 09:32:57.821: INFO: Container csi-provisioner ready: true, restart count 0 Mar 18 09:32:57.821: INFO: Container driver-registrar ready: true, restart count 0 Mar 18 09:32:57.821: INFO: Container mock ready: true, restart count 0 Mar 18 09:32:57.821: INFO: ss2-2 started at 2023-03-18 09:32:49 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.821: INFO: Container webserver ready: true, restart count 0 Mar 18 09:32:57.821: INFO: npd-v0.8.9-xsl94 started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.821: INFO: Container node-problem-detector ready: true, restart count 0 Mar 18 09:32:57.821: INFO: hostexec-e2e-9e86028ad1-674b9-minion-group-s3x0-gkj54 started at 2023-03-18 09:32:57 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.821: INFO: Container agnhost-container ready: false, restart count 0 Mar 18 09:32:57.821: INFO: netserver-2 started at 2023-03-18 09:32:44 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.821: INFO: Container webserver ready: false, restart count 0 Mar 18 09:32:57.821: INFO: hostexec-e2e-9e86028ad1-674b9-minion-group-s3x0-5p8vg started at 2023-03-18 09:32:43 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.821: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:32:57.821: INFO: csi-mockplugin-0 started at 2023-03-18 09:32:18 +0000 UTC (0+4 container statuses recorded) Mar 18 09:32:57.821: INFO: Container busybox ready: true, restart count 0 Mar 18 09:32:57.821: INFO: Container csi-provisioner ready: true, restart count 0 Mar 18 09:32:57.821: 
INFO: Container driver-registrar ready: true, restart count 0 Mar 18 09:32:57.821: INFO: Container mock ready: true, restart count 0 Mar 18 09:32:57.821: INFO: coredns-8f5847b64-6lvkh started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.821: INFO: Container coredns ready: true, restart count 0 Mar 18 09:32:57.821: INFO: l7-default-backend-856d874f49-wj97r started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.821: INFO: Container default-http-backend ready: true, restart count 0 Mar 18 09:32:57.821: INFO: csi-mockplugin-resizer-0 started at 2023-03-18 09:32:41 +0000 UTC (0+1 container statuses recorded) Mar 18 09:32:57.821: INFO: Container csi-resizer ready: true, restart count 0 Mar 18 09:32:58.258: INFO: Latency metrics for node e2e-9e86028ad1-674b9-minion-group-s3x0 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/18/23 09:32:58.258 (2.422s) < Exit [DeferCleanup (Each)] [sig-node] Events - dump namespaces | framework.go:209 @ 03/18/23 09:32:58.258 (2.422s) > Enter [DeferCleanup (Each)] [sig-node] Events - tear down framework | framework.go:206 @ 03/18/23 09:32:58.258 STEP: Destroying namespace "events-6719" for this suite. - test/e2e/framework/framework.go:351 @ 03/18/23 09:32:58.258 < Exit [DeferCleanup (Each)] [sig-node] Events - tear down framework | framework.go:206 @ 03/18/23 09:32:58.306 (48ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/18/23 09:32:58.306 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/18/23 09:32:58.306 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sPods\sExtended\sPod\sContainer\slifecycle\sshould\snot\screate\sextra\ssandbox\sif\sall\scontainers\sare\sdone$'
[FAILED] timed out waiting for the condition In [It] at: test/e2e/node/pods.go:278 @ 03/18/23 09:36:05.308
from junit_01.xml
> Enter [BeforeEach] [sig-node] Pods Extended - set up framework | framework.go:191 @ 03/18/23 09:34:58.619 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:211 @ 03/18/23 09:34:58.619 Mar 18 09:34:58.619: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename pods - test/e2e/framework/framework.go:250 @ 03/18/23 09:34:58.62 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 03/18/23 09:34:58.796 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:262 @ 03/18/23 09:34:58.889 < Exit [BeforeEach] [sig-node] Pods Extended - set up framework | framework.go:191 @ 03/18/23 09:34:58.972 (353ms) > Enter [BeforeEach] [sig-node] Pods Extended - test/e2e/framework/metrics/init/init.go:33 @ 03/18/23 09:34:58.972 < Exit [BeforeEach] [sig-node] Pods Extended - test/e2e/framework/metrics/init/init.go:33 @ 03/18/23 09:34:58.972 (0s) > Enter [BeforeEach] Pod Container lifecycle - test/e2e/node/pods.go:230 @ 03/18/23 09:34:58.972 < Exit [BeforeEach] Pod Container lifecycle - test/e2e/node/pods.go:230 @ 03/18/23 09:34:58.972 (0s) > Enter [It] should not create extra sandbox if all containers are done - test/e2e/node/pods.go:234 @ 03/18/23 09:34:58.972 STEP: creating the pod that should always exit 0 - test/e2e/node/pods.go:235 @ 03/18/23 09:34:58.972 STEP: submitting the pod to kubernetes - test/e2e/node/pods.go:266 @ 03/18/23 09:34:58.972 STEP: Saw pod success - test/e2e/framework/pod/wait.go:409 @ 03/18/23 09:35:05.213 STEP: Getting events about the pod - test/e2e/node/pods.go:277 @ 03/18/23 09:35:05.213 Mar 18 09:36:05.308: INFO: Unexpected error: <wait.errInterrupted>: timed out waiting for the condition { cause: <*errors.errorString | 0xc0001c9bb0>{ s: "timed out waiting for the condition", }, } [FAILED] timed out waiting for the condition In [It] at: test/e2e/node/pods.go:278 @ 03/18/23 09:36:05.308 < Exit 
[It] should not create extra sandbox if all containers are done - test/e2e/node/pods.go:234 @ 03/18/23 09:36:05.308 (1m6.336s) > Enter [AfterEach] [sig-node] Pods Extended - test/e2e/framework/node/init/init.go:33 @ 03/18/23 09:36:05.308 Mar 18 09:36:05.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-node] Pods Extended - test/e2e/framework/node/init/init.go:33 @ 03/18/23 09:36:05.428 (120ms) > Enter [DeferCleanup (Each)] Pod Container lifecycle - test/e2e/node/pods.go:268 @ 03/18/23 09:36:05.428 STEP: deleting the pod - test/e2e/node/pods.go:269 @ 03/18/23 09:36:05.428 < Exit [DeferCleanup (Each)] Pod Container lifecycle - test/e2e/node/pods.go:268 @ 03/18/23 09:36:05.496 (68ms) > Enter [DeferCleanup (Each)] [sig-node] Pods Extended - test/e2e/framework/metrics/init/init.go:35 @ 03/18/23 09:36:05.496 < Exit [DeferCleanup (Each)] [sig-node] Pods Extended - test/e2e/framework/metrics/init/init.go:35 @ 03/18/23 09:36:05.496 (0s) > Enter [DeferCleanup (Each)] [sig-node] Pods Extended - dump namespaces | framework.go:209 @ 03/18/23 09:36:05.496 STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/18/23 09:36:05.496 STEP: Collecting events from namespace "pods-2891". - test/e2e/framework/debug/dump.go:42 @ 03/18/23 09:36:05.496 STEP: Found 1 events. 
- test/e2e/framework/debug/dump.go:46 @ 03/18/23 09:36:05.537 Mar 18 09:36:05.537: INFO: At 2023-03-18 09:34:59 +0000 UTC - event for pod-always-succeede72e92f6-f55c-4754-b554-caa7f16f7caa: {default-scheduler } Scheduled: Successfully assigned pods-2891/pod-always-succeede72e92f6-f55c-4754-b554-caa7f16f7caa to e2e-9e86028ad1-674b9-minion-group-l6p2 Mar 18 09:36:05.577: INFO: POD NODE PHASE GRACE CONDITIONS Mar 18 09:36:05.577: INFO: Mar 18 09:36:05.627: INFO: Logging node info for node e2e-9e86028ad1-674b9-master Mar 18 09:36:05.672: INFO: Node Info: &Node{ObjectMeta:{e2e-9e86028ad1-674b9-master 267e5023-e569-49ff-9163-80ff52b2e553 31205 0 2023-03-18 09:22:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-18 09:22:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:49 +0000 UTC 
FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-03-18 09:33:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20617822208 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3848937472 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{18556039957 0} {<nil>} 18556039957 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3586793472 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:33:04 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 09:33:04 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:33:04 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:33:04 +0000 UTC,LastTransitionTime:2023-03-18 09:22:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.2,},NodeAddress{Type:ExternalIP,Address:34.83.71.20,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-master.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-master.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:42d0f710c860b2bd41ccdaf5ca173acd,SystemUUID:42d0f710-c860-b2bd-41cc-daf5ca173acd,BootID:16a230c9-f4cf-4c24-8530-18ce9126f638,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:121906531,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:113849341,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:59679728,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 
registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 18 09:36:05.672: INFO: Logging kubelet events for node e2e-9e86028ad1-674b9-master Mar 18 09:36:05.713: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-master Mar 18 09:36:05.765: INFO: kube-scheduler-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:05.765: INFO: Container kube-scheduler ready: true, restart count 0 Mar 18 09:36:05.765: INFO: etcd-server-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:05.765: INFO: Container etcd-container ready: true, restart count 0 Mar 18 09:36:05.765: INFO: konnectivity-server-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:05.765: INFO: Container konnectivity-server-container ready: true, restart count 0 Mar 18 09:36:05.765: INFO: kube-controller-manager-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:05.765: INFO: Container kube-controller-manager ready: true, restart count 1 Mar 18 09:36:05.765: INFO: l7-lb-controller-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:22:07 +0000 UTC (0+1 container statuses recorded) Mar 18 
09:36:05.765: INFO: Container l7-lb-controller ready: true, restart count 2 Mar 18 09:36:05.765: INFO: metadata-proxy-v0.1-nnl66 started at 2023-03-18 09:22:38 +0000 UTC (0+2 container statuses recorded) Mar 18 09:36:05.765: INFO: Container metadata-proxy ready: true, restart count 0 Mar 18 09:36:05.765: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Mar 18 09:36:05.765: INFO: etcd-server-events-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:05.765: INFO: Container etcd-container ready: true, restart count 0 Mar 18 09:36:05.765: INFO: kube-apiserver-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:05.765: INFO: Container kube-apiserver ready: true, restart count 0 Mar 18 09:36:05.765: INFO: kube-addon-manager-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:22:06 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:05.765: INFO: Container kube-addon-manager ready: true, restart count 0 Mar 18 09:36:05.960: INFO: Latency metrics for node e2e-9e86028ad1-674b9-master Mar 18 09:36:05.960: INFO: Logging node info for node e2e-9e86028ad1-674b9-minion-group-6qbb Mar 18 09:36:06.007: INFO: Node Info: &Node{ObjectMeta:{e2e-9e86028ad1-674b9-minion-group-6qbb 8c1a6f16-41d1-4196-bf43-efe27f8d8a66 40672 0 2023-03-18 09:22:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-minion-group-6qbb kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:e2e-9e86028ad1-674b9-minion-group-6qbb topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-8337":"e2e-9e86028ad1-674b9-minion-group-6qbb","csi-mock-csi-mock-volumes-capacity-1909":"e2e-9e86028ad1-674b9-minion-group-6qbb","csi-mock-csi-mock-volumes-expansion-4044":"e2e-9e86028ad1-674b9-minion-group-6qbb"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-03-18 09:22:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-03-18 09:22:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2023-03-18 09:31:17 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-03-18 09:35:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2023-03-18 09:35:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-minion-group-6qbb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103865303040 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7806021632 0} {<nil>} 7623068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93478772582 0} {<nil>} 93478772582 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7543877632 0} {<nil>} 7367068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-03-18 09:31:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-03-18 09:31:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-03-18 09:31:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-03-18 09:31:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:True,LastHeartbeatTime:2023-03-18 09:31:16 
+0000 UTC,LastTransitionTime:2023-03-18 09:31:15 +0000 UTC,Reason:DockerHung,Message:kernel: INFO: task docker:12345 blocked for more than 120 seconds.,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-03-18 09:31:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:35:48 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 09:35:48 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:35:48 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:35:48 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.3,},NodeAddress{Type:ExternalIP,Address:35.197.71.72,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-minion-group-6qbb.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-minion-group-6qbb.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8effcb9d241b10eacb89037ecd76b44c,SystemUUID:8effcb9d-241b-10ea-cb89-037ecd76b44c,BootID:8ef64925-c34b-4653-8945-c2170edc6327,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:72680802,},ContainerImage{Names:[registry.k8s.io/node-problem-detector/node-problem-detector@sha256:c4a75e50c3ee30daa78b7149de781f66236885850b1ea7b0c1a062af5019e019 
registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.9],SizeBytes:56740357,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:e468dddcd275163a042ab297b2d8c2aca50d5e148d2d22f3b6ba119e2f31fa79 registry.k8s.io/sig-storage/csi-provisioner:v3.4.0],SizeBytes:27427836,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 
registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[gcr.io/k8s-authenticated-test/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/k8s-authenticated-test/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:9b2d9abda017c28b12f84a344f57cd73fbdb6c2bd7dd5bdd5018246ad1093ba6 registry.k8s.io/sig-storage/hostpathplugin:v1.11.0],SizeBytes:18233005,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns:v1.10.1],SizeBytes:16190758,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef 
registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/build-image/distroless-iptables@sha256:13ac9b3c476d7290a4451d65d09d6016a2cb89836ffbfa4eae55b72731a22080 registry.k8s.io/build-image/distroless-iptables:v0.2.2],SizeBytes:7729580,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-8337^1fb5b3da-c570-11ed-8f4c-ae0ca589b39f 
kubernetes.io/csi/csi-mock-csi-mock-volumes-expansion-4044^33a907de-c570-11ed-ab04-f6602f593d98],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-expansion-4044^33a907de-c570-11ed-ab04-f6602f593d98,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-8337^1fb5b3da-c570-11ed-8f4c-ae0ca589b39f,DevicePath:,},},Config:nil,},} Mar 18 09:36:06.007: INFO: Logging kubelet events for node e2e-9e86028ad1-674b9-minion-group-6qbb Mar 18 09:36:06.060: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-minion-group-6qbb Mar 18 09:36:06.315: INFO: simpletest.rc-rmvs8 started at 2023-03-18 09:35:37 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:06.315: INFO: pvc-volume-tester-ggk7n started at 2023-03-18 09:35:59 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container volume-tester ready: false, restart count 0 Mar 18 09:36:06.315: INFO: netserver-0 started at 2023-03-18 09:34:34 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container webserver ready: true, restart count 0 Mar 18 09:36:06.315: INFO: service-proxy-disabled-7d6t7 started at 2023-03-18 09:35:11 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container service-proxy-disabled ready: true, restart count 0 Mar 18 09:36:06.315: INFO: service-proxy-toggled-j6l84 started at 2023-03-18 09:35:14 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container service-proxy-toggled ready: true, restart count 0 Mar 18 09:36:06.315: INFO: simpletest.rc-bf8nb started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:06.315: INFO: simpletest.rc-tws7c started at 2023-03-18 09:35:35 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container nginx ready: 
true, restart count 0 Mar 18 09:36:06.315: INFO: simpletest.rc-xnvhm started at 2023-03-18 09:35:36 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:06.315: INFO: inline-volume-tester-d2s7v started at 2023-03-18 09:35:57 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container csi-volume-tester ready: true, restart count 0 Mar 18 09:36:06.315: INFO: security-context-18b7be67-4539-498e-ba22-16735d27b9c6 started at 2023-03-18 09:36:05 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container test-container ready: false, restart count 0 Mar 18 09:36:06.315: INFO: service-headless-f7r95 started at 2023-03-18 09:35:37 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container service-headless ready: true, restart count 0 Mar 18 09:36:06.315: INFO: pod-ephm-test-secret-tl6g started at 2023-03-18 09:35:59 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container test-container-subpath-secret-tl6g ready: false, restart count 0 Mar 18 09:36:06.315: INFO: verify-service-down-host-exec-pod started at 2023-03-18 09:36:04 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container agnhost-container ready: false, restart count 0 Mar 18 09:36:06.315: INFO: simpletest.rc-hkqz4 started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:06.315: INFO: simpletest.rc-sppps started at 2023-03-18 09:35:34 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:06.315: INFO: simpletest.rc-42hn5 started at 2023-03-18 09:35:34 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:06.315: INFO: simpletest.rc-9kpx6 started at 2023-03-18 09:35:37 +0000 UTC 
(0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:06.315: INFO: external-provisioner-27w7b started at 2023-03-18 09:35:49 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container nfs-provisioner ready: true, restart count 0 Mar 18 09:36:06.315: INFO: csi-mockplugin-resizer-0 started at 2023-03-18 09:35:15 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container csi-resizer ready: true, restart count 0 Mar 18 09:36:06.315: INFO: simpletest.rc-cr7n6 started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:06.315: INFO: pod-terminate-status-0-6 started at 2023-03-18 09:36:04 +0000 UTC (1+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Init container fail ready: false, restart count 0 Mar 18 09:36:06.315: INFO: Container blocked ready: false, restart count 0 Mar 18 09:36:06.315: INFO: simpletest.rc-rmlcw started at 2023-03-18 09:35:36 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:06.315: INFO: external-provisioner-4ft2z started at 2023-03-18 09:35:45 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container nfs-provisioner ready: true, restart count 0 Mar 18 09:36:06.315: INFO: simpletest.rc-pv62n started at 2023-03-18 09:35:36 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:06.315: INFO: inline-volume-tester-d6p6f started at 2023-03-18 09:34:48 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container csi-volume-tester ready: true, restart count 0 Mar 18 09:36:06.315: INFO: simpletest.rc-jfwf8 started at 2023-03-18 09:35:37 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container nginx ready: true, restart count 
0 Mar 18 09:36:06.315: INFO: annotationupdatebc4a1953-d7d8-4de5-aa79-78e766ff90bd started at 2023-03-18 09:36:02 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container client-container ready: true, restart count 0 Mar 18 09:36:06.315: INFO: simpletest.rc-cp56k started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:06.315: INFO: simpletest.rc-jzrf6 started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:06.315: INFO: simpletest.rc-qdw2z started at 2023-03-18 09:35:36 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:06.315: INFO: verify-service-down-host-exec-pod started at 2023-03-18 09:36:00 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:36:06.315: INFO: simpletest.rc-7pbrk started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:06.315: INFO: simpletest.rc-89429 started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:06.315: INFO: simpletest.rc-tblqs started at 2023-03-18 09:35:34 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:06.315: INFO: pod-projected-secrets-302e8300-9588-45fe-be70-8fb2b1c90201 started at 2023-03-18 09:35:53 +0000 UTC (0+3 container statuses recorded) Mar 18 09:36:06.315: INFO: Container creates-volume-test ready: true, restart count 0 Mar 18 09:36:06.315: INFO: Container dels-volume-test ready: true, restart count 0 Mar 18 09:36:06.315: INFO: Container 
upds-volume-test ready: true, restart count 0 Mar 18 09:36:06.315: INFO: npd-v0.8.9-fhnmg started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container node-problem-detector ready: true, restart count 0 Mar 18 09:36:06.315: INFO: simpletest.rc-w6qvh started at 2023-03-18 09:35:34 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:06.315: INFO: simpletest.rc-j5fs6 started at 2023-03-18 09:35:35 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:06.315: INFO: simpletest.rc-7cvkv started at 2023-03-18 09:35:36 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:06.315: INFO: csi-mockplugin-0 started at 2023-03-18 09:35:51 +0000 UTC (0+3 container statuses recorded) Mar 18 09:36:06.315: INFO: Container csi-provisioner ready: true, restart count 0 Mar 18 09:36:06.315: INFO: Container driver-registrar ready: true, restart count 0 Mar 18 09:36:06.315: INFO: Container mock ready: true, restart count 0 Mar 18 09:36:06.315: INFO: simpletest.rc-cl4qz started at 2023-03-18 09:35:35 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:06.315: INFO: kube-proxy-e2e-9e86028ad1-674b9-minion-group-6qbb started at 2023-03-18 09:22:32 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container kube-proxy ready: true, restart count 0 Mar 18 09:36:06.315: INFO: simpletest.rc-khlql started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:06.315: INFO: simpletest.rc-t69c8 started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container nginx ready: true, restart count 
0 Mar 18 09:36:06.315: INFO: startup-0b4d982f-143b-4436-b1c0-cd1d82a15c37 started at 2023-03-18 09:35:21 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container busybox ready: false, restart count 0 Mar 18 09:36:06.315: INFO: test-rs-v2ksg started at 2023-03-18 09:35:59 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container httpd ready: true, restart count 0 Mar 18 09:36:06.315: INFO: pvc-volume-tester-bd9hx started at 2023-03-18 09:35:23 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container volume-tester ready: true, restart count 0 Mar 18 09:36:06.315: INFO: simpletest.rc-6lkg8 started at 2023-03-18 09:35:35 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:06.315: INFO: metadata-proxy-v0.1-tpg7b started at 2023-03-18 09:22:32 +0000 UTC (0+2 container statuses recorded) Mar 18 09:36:06.315: INFO: Container metadata-proxy ready: true, restart count 0 Mar 18 09:36:06.315: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Mar 18 09:36:06.315: INFO: simpletest.rc-q27pc started at 2023-03-18 09:35:34 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:06.315: INFO: csi-mockplugin-0 started at 2023-03-18 09:35:15 +0000 UTC (0+3 container statuses recorded) Mar 18 09:36:06.315: INFO: Container csi-provisioner ready: true, restart count 0 Mar 18 09:36:06.315: INFO: Container driver-registrar ready: true, restart count 0 Mar 18 09:36:06.315: INFO: Container mock ready: true, restart count 0 Mar 18 09:36:06.315: INFO: csi-mockplugin-attacher-0 started at 2023-03-18 09:35:15 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:06.315: INFO: Container csi-attacher ready: true, restart count 0 Mar 18 09:36:06.315: INFO: simpletest.rc-g8j4b started at 2023-03-18 09:35:34 +0000 UTC (0+1 container statuses recorded) Mar 18 
09:36:06.315: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:06.315: INFO: pod-terminate-status-2-6 started at 2023-03-18 09:36:04 +0000 UTC (1+1 container statuses recorded)
Mar 18 09:36:06.315: INFO: Init container fail ready: false, restart count 0
Mar 18 09:36:06.315: INFO: Container blocked ready: false, restart count 0
Mar 18 09:36:06.315: INFO: liveness-6e42f777-5053-4740-856d-77370ed5796a started at 2023-03-18 09:31:03 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:06.315: INFO: Container agnhost-container ready: true, restart count 0
Mar 18 09:36:06.315: INFO: simpletest.rc-97zj9 started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:06.315: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:06.315: INFO: simpletest.rc-2jw7n started at 2023-03-18 09:35:35 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:06.315: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:06.315: INFO: simpletest.rc-zwj5v started at 2023-03-18 09:35:36 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:06.315: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:06.315: INFO: bin-false4265dcac-e701-419e-b3a6-237b38a55202 started at 2023-03-18 09:36:06 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:06.315: INFO: Container bin-false4265dcac-e701-419e-b3a6-237b38a55202 ready: false, restart count 0
Mar 18 09:36:06.315: INFO: pod-terminate-status-1-6 started at 2023-03-18 09:36:04 +0000 UTC (1+1 container statuses recorded)
Mar 18 09:36:06.315: INFO: Init container fail ready: false, restart count 0
Mar 18 09:36:06.315: INFO: Container blocked ready: false, restart count 0
Mar 18 09:36:06.315: INFO: konnectivity-agent-zpvjh started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:06.315: INFO: Container konnectivity-agent ready: true, restart count 0
Mar 18 09:36:06.315: INFO: service-headless-toggled-666m5 started at 2023-03-18 09:35:52 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:06.315: INFO: Container service-headless-toggled ready: true, restart count 0
Mar 18 09:36:06.315: INFO: csi-hostpathplugin-0 started at 2023-03-18 09:34:45 +0000 UTC (0+7 container statuses recorded)
Mar 18 09:36:06.315: INFO: Container csi-attacher ready: true, restart count 0
Mar 18 09:36:06.315: INFO: Container csi-provisioner ready: true, restart count 0
Mar 18 09:36:06.315: INFO: Container csi-resizer ready: true, restart count 0
Mar 18 09:36:06.315: INFO: Container csi-snapshotter ready: true, restart count 0
Mar 18 09:36:06.315: INFO: Container hostpath ready: true, restart count 0
Mar 18 09:36:06.315: INFO: Container liveness-probe ready: true, restart count 0
Mar 18 09:36:06.315: INFO: Container node-driver-registrar ready: true, restart count 0
Mar 18 09:36:06.315: INFO: simpletest.rc-4hkx8 started at 2023-03-18 09:35:35 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:06.315: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:06.315: INFO: exec-volume-test-dynamicpv-bsjt started at 2023-03-18 09:35:57 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:06.315: INFO: Container exec-container-dynamicpv-bsjt ready: false, restart count 0
Mar 18 09:36:06.315: INFO: failed-jobs-history-limit-27985536-frz7v started at 2023-03-18 09:36:00 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:06.315: INFO: Container c ready: false, restart count 1
Mar 18 09:36:06.315: INFO: busybox-07255980-96b4-4e2e-af19-fb342b60f84d started at 2023-03-18 09:33:47 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:06.315: INFO: Container busybox ready: true, restart count 0
Mar 18 09:36:06.315: INFO: simpletest.rc-rrs7l started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:06.315: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:06.315: INFO: simpletest.rc-nd6st started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:06.315: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:06.315: INFO: simpletest.rc-dl72m started at 2023-03-18 09:35:35 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:06.315: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.098: INFO: Latency metrics for node e2e-9e86028ad1-674b9-minion-group-6qbb
Mar 18 09:36:07.098: INFO: Logging node info for node e2e-9e86028ad1-674b9-minion-group-l6p2
Mar 18 09:36:07.160: INFO: Node Info: &Node{ObjectMeta:{e2e-9e86028ad1-674b9-minion-group-l6p2 598ace61-5854-4ce4-a4e9-6965d58f0e1a 41206 0 2023-03-18 09:22:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-minion-group-l6p2 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:e2e-9e86028ad1-674b9-minion-group-l6p2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-691":"e2e-9e86028ad1-674b9-minion-group-l6p2","csi-mock-csi-mock-volumes-capacity-8775":"e2e-9e86028ad1-674b9-minion-group-l6p2"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-18 09:22:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2023-03-18 09:31:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-03-18 09:35:56 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-03-18 09:36:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-minion-group-l6p2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103865303040 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7806021632 0} {<nil>} 7623068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93478772582 0} {<nil>} 93478772582 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7543877632 0} {<nil>} 7367068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-03-18 09:31:18 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-03-18 
09:31:18 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-03-18 09:31:18 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-03-18 09:31:18 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-03-18 09:31:18 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:True,LastHeartbeatTime:2023-03-18 09:31:18 +0000 UTC,LastTransitionTime:2023-03-18 09:31:17 +0000 UTC,Reason:DockerHung,Message:kernel: INFO: task docker:12345 blocked for more than 120 seconds.,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:06 +0000 UTC,LastTransitionTime:2023-03-18 09:22:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:06 +0000 UTC,LastTransitionTime:2023-03-18 09:22:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:06 +0000 UTC,LastTransitionTime:2023-03-18 09:22:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:36:06 +0000 UTC,LastTransitionTime:2023-03-18 09:22:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.5,},NodeAddress{Type:ExternalIP,Address:35.230.36.22,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-minion-group-l6p2.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-minion-group-l6p2.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c65da20697a4bfad03dfea306c4caca3,SystemUUID:c65da206-97a4-bfad-03df-ea306c4caca3,BootID:bab97861-0225-4291-912a-eb1db18f8ad7,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 
registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:72680802,},ContainerImage{Names:[registry.k8s.io/node-problem-detector/node-problem-detector@sha256:c4a75e50c3ee30daa78b7149de781f66236885850b1ea7b0c1a062af5019e019 registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.9],SizeBytes:56740357,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:e468dddcd275163a042ab297b2d8c2aca50d5e148d2d22f3b6ba119e2f31fa79 registry.k8s.io/sig-storage/csi-provisioner:v3.4.0],SizeBytes:27427836,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 
registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/debian-base@sha256:ebda8587ec0f49eb88ee3a608ef018484908cbc5aa32556a0d78356088c185d4 registry.k8s.io/debian-base:v2.0.0],SizeBytes:21093264,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:9b2d9abda017c28b12f84a344f57cd73fbdb6c2bd7dd5bdd5018246ad1093ba6 registry.k8s.io/sig-storage/hostpathplugin:v1.11.0],SizeBytes:18233005,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns:v1.10.1],SizeBytes:16190758,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/apparmor-loader@sha256:c2ba46d9cf4549528f80d4850630b712372715e0c556d35d5c3016144365d882 registry.k8s.io/e2e-test-images/apparmor-loader:1.4],SizeBytes:8695007,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 18 09:36:07.161: INFO: Logging kubelet events for node e2e-9e86028ad1-674b9-minion-group-l6p2
Mar 18 09:36:07.224: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-minion-group-l6p2
Mar 18 09:36:07.375: INFO: simpletest.rc-dfp2s started at 2023-03-18 09:35:36 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.375: INFO: simpletest.rc-clsb9 started at 2023-03-18 09:35:36 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.375: INFO: simpletest.rc-vt46x started at 2023-03-18 09:35:36 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.375: INFO: simpletest.rc-q64df started at 2023-03-18 09:35:37 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.375: INFO: service-headless-d5crn started at 2023-03-18 09:35:37 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container service-headless ready: true, restart count 0
Mar 18 09:36:07.375: INFO: simpletest.rc-7tj5b started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.375: INFO: simpletest.rc-5njhs started at 2023-03-18 09:35:34 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.375: INFO: simpletest.rc-lm9xw started at 2023-03-18 09:35:36 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.375: INFO: service-proxy-toggled-fzkpp started at 2023-03-18 09:35:14 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container service-proxy-toggled ready: true, restart count 0
Mar 18 09:36:07.375: INFO: csi-mockplugin-attacher-0 started at 2023-03-18 09:35:15 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container csi-attacher ready: true, restart count 0
Mar 18 09:36:07.375: INFO: simpletest.rc-s4qxn started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.375: INFO: simpletest.rc-qklvp started at 2023-03-18 09:35:35 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.375: INFO: rs-6b6m8 started at 2023-03-18 09:34:25 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container donothing ready: false, restart count 0
Mar 18 09:36:07.375: INFO: simpletest.rc-srplx started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.375: INFO: konnectivity-agent-879m7 started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container konnectivity-agent ready: true, restart count 0
Mar 18 09:36:07.375: INFO: host-test-container-pod started at 2023-03-18 09:34:45 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container agnhost-container ready: true, restart count 0
Mar 18 09:36:07.375: INFO: coredns-8f5847b64-8mvxr started at 2023-03-18 09:27:04 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container coredns ready: true, restart count 0
Mar 18 09:36:07.375: INFO: simpletest.rc-jhf8x started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.375: INFO: simpletest.rc-c4ddd started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.375: INFO: simpletest.rc-hxj2m started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.375: INFO: simpletest.rc-kz7sl started at 2023-03-18 09:35:34 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.375: INFO: metadata-proxy-v0.1-424sh started at 2023-03-18 09:22:36 +0000 UTC (0+2 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container metadata-proxy ready: true, restart count 0
Mar 18 09:36:07.375: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0
Mar 18 09:36:07.375: INFO: csi-hostpathplugin-0 started at 2023-03-18 09:35:13 +0000 UTC (0+7 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container csi-attacher ready: true, restart count 0
Mar 18 09:36:07.375: INFO: Container csi-provisioner ready: true, restart count 0
Mar 18 09:36:07.375: INFO: Container csi-resizer ready: true, restart count 0
Mar 18 09:36:07.375: INFO: Container csi-snapshotter ready: true, restart count 0
Mar 18 09:36:07.375: INFO: Container hostpath ready: true, restart count 0
Mar 18 09:36:07.375: INFO: Container liveness-probe ready: true, restart count 0
Mar 18 09:36:07.375: INFO: Container node-driver-registrar ready: true, restart count 0
Mar 18 09:36:07.375: INFO: simpletest.rc-wdwnx started at 2023-03-18 09:35:36 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.375: INFO: simpletest.rc-xg65q started at 2023-03-18 09:35:36 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.375: INFO: kube-proxy-e2e-9e86028ad1-674b9-minion-group-l6p2 started at 2023-03-18 09:22:35 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container kube-proxy ready: true, restart count 0
Mar 18 09:36:07.375: INFO: simpletest.rc-ddv8p started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.375: INFO: simpletest.rc-zq9gt started at 2023-03-18 09:35:36 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.375: INFO: npd-v0.8.9-zdpdp started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container node-problem-detector ready: true, restart count 0
Mar 18 09:36:07.375: INFO: simpletest.rc-xpdjh started at 2023-03-18 09:35:36 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.375: INFO: csi-hostpathplugin-0 started at 2023-03-18 09:36:06 +0000 UTC (0+7 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container csi-attacher ready: false, restart count 0
Mar 18 09:36:07.375: INFO: Container csi-provisioner ready: false, restart count 0
Mar 18 09:36:07.375: INFO: Container csi-resizer ready: false, restart count 0
Mar 18 09:36:07.375: INFO: Container csi-snapshotter ready: false, restart count 0
Mar 18 09:36:07.375: INFO: Container hostpath ready: false, restart count 0
Mar 18 09:36:07.375: INFO: Container liveness-probe ready: false, restart count 0
Mar 18 09:36:07.375: INFO: Container node-driver-registrar ready: false, restart count 0
Mar 18 09:36:07.375: INFO: simpletest.rc-26czd started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.375: INFO: test-rs-nz5q8 started at 2023-03-18 09:36:03 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container httpd ready: true, restart count 0
Mar 18 09:36:07.375: INFO: test-container-pod started at 2023-03-18 09:34:45 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container webserver ready: true, restart count 0
Mar 18 09:36:07.375: INFO: simpletest.rc-mrmdm started at 2023-03-18 09:35:35 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.375: INFO: simpletest.rc-gkb5p started at 2023-03-18 09:35:34 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.375: INFO: simpletest.rc-dx7zv started at 2023-03-18 09:35:34 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.375: INFO: simpletest.rc-hk5fw started at 2023-03-18 09:35:34 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.375: INFO: simpletest.rc-gzm7f started at 2023-03-18 09:35:35 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.375: INFO: webserver-pod started at 2023-03-18 09:34:23 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container agnhost-container ready: true, restart count 0
Mar 18 09:36:07.375: INFO: simpletest.rc-jz8sw started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.375: INFO: simpletest.rc-lr85v started at 2023-03-18 09:35:36 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.375: INFO: simpletest.rc-gm66v started at 2023-03-18 09:35:37 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.375: INFO: netserver-1 started at 2023-03-18 09:34:35 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container webserver ready: true, restart count 0
Mar 18 09:36:07.375: INFO: simpletest.rc-qsm52 started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.375: INFO: simpletest.rc-x9rkj started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.375: INFO: simpletest.rc-jxzvb started at 2023-03-18 09:35:34 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.375: INFO: service-headless-toggled-2l42t started at 2023-03-18 09:35:52 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container service-headless-toggled ready: true, restart count 0
Mar 18 09:36:07.375: INFO: metrics-server-v0.5.2-57d47cbf5-gtdjb started at 2023-03-18 09:22:55 +0000 UTC (0+2 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container metrics-server ready: true, restart count 0
Mar 18 09:36:07.375: INFO: Container metrics-server-nanny ready: true, restart count 0
Mar 18 09:36:07.375: INFO: csi-mockplugin-0 started at 2023-03-18 09:35:15 +0000 UTC (0+3 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container csi-provisioner ready: true, restart count 0
Mar 18 09:36:07.375: INFO: Container driver-registrar ready: true, restart count 0
Mar 18 09:36:07.375: INFO: Container mock ready: true, restart count 0
Mar 18 09:36:07.375: INFO: simpletest.rc-2p98n started at 2023-03-18 09:35:36 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.375: INFO: service-proxy-disabled-zw2hw started at 2023-03-18 09:35:11 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container service-proxy-disabled ready: true, restart count 0
Mar 18 09:36:07.375: INFO: simpletest.rc-ng7nm started at 2023-03-18 09:35:35 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:07.375: INFO: Container nginx ready: true, restart count 0
Mar 18 09:36:07.902: INFO: Latency metrics for node e2e-9e86028ad1-674b9-minion-group-l6p2
Mar 18 09:36:07.902: INFO: Logging node info for node e2e-9e86028ad1-674b9-minion-group-s3x0
Mar 18 09:36:07.947: INFO: Node Info: &Node{ObjectMeta:{e2e-9e86028ad1-674b9-minion-group-s3x0 4bd190bd-b287-42b8-bf6e-86b0dfbbe357 41268 0 2023-03-18 09:22:29 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-minion-group-s3x0 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:e2e-9e86028ad1-674b9-minion-group-s3x0 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-read-write-once-pod-7422":"e2e-9e86028ad1-674b9-minion-group-s3x0","csi-mock-csi-mock-volumes-fsgroup-mount-1546":"csi-mock-csi-mock-volumes-fsgroup-mount-1546","csi-mock-csi-mock-volumes-workload-1687":"e2e-9e86028ad1-674b9-minion-group-s3x0"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-18 09:22:29 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {node-problem-detector Update v1 2023-03-18 09:31:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-03-18 09:36:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2023-03-18 09:36:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-minion-group-s3x0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103865303040 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7806021632 0} {<nil>} 7623068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93478772582 0} {<nil>} 93478772582 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7543877632 0} {<nil>} 7367068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-03-18 09:31:20 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning 
properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-03-18 09:31:20 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-03-18 09:31:20 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:True,LastHeartbeatTime:2023-03-18 09:31:20 +0000 UTC,LastTransitionTime:2023-03-18 09:31:19 +0000 UTC,Reason:DockerHung,Message:kernel: INFO: task docker:12345 blocked for more than 120 seconds.,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-03-18 09:31:20 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-03-18 09:31:20 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:39 +0000 UTC,LastTransitionTime:2023-03-18 09:22:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:07 +0000 UTC,LastTransitionTime:2023-03-18 09:22:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:07 +0000 UTC,LastTransitionTime:2023-03-18 09:22:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:07 +0000 UTC,LastTransitionTime:2023-03-18 09:22:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has 
sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:36:07 +0000 UTC,LastTransitionTime:2023-03-18 09:22:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.4,},NodeAddress{Type:ExternalIP,Address:35.197.75.196,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-minion-group-s3x0.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-minion-group-s3x0.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f48035c3fd56450624fe69b9577c2359,SystemUUID:f48035c3-fd56-4506-24fe-69b9577c2359,BootID:f7e24489-9538-4f10-8ca6-393d7dcc2190,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:72680802,},ContainerImage{Names:[registry.k8s.io/node-problem-detector/node-problem-detector@sha256:c4a75e50c3ee30daa78b7149de781f66236885850b1ea7b0c1a062af5019e019 
registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.9],SizeBytes:56740357,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:e468dddcd275163a042ab297b2d8c2aca50d5e148d2d22f3b6ba119e2f31fa79 registry.k8s.io/sig-storage/csi-provisioner:v3.4.0],SizeBytes:27427836,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 
registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[gcr.io/k8s-authenticated-test/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/k8s-authenticated-test/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:9b2d9abda017c28b12f84a344f57cd73fbdb6c2bd7dd5bdd5018246ad1093ba6 registry.k8s.io/sig-storage/hostpathplugin:v1.11.0],SizeBytes:18233005,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns:v1.10.1],SizeBytes:16190758,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/apparmor-loader@sha256:c2ba46d9cf4549528f80d4850630b712372715e0c556d35d5c3016144365d882 registry.k8s.io/e2e-test-images/apparmor-loader:1.4],SizeBytes:8695007,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-read-write-once-pod-7422^9e275a46-c56f-11ed-8153-eefb10531377],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-read-write-once-pod-7422^9e275a46-c56f-11ed-8153-eefb10531377,DevicePath:,},},Config:nil,},} Mar 18 09:36:07.948: INFO: Logging kubelet events for node e2e-9e86028ad1-674b9-minion-group-s3x0 Mar 18 09:36:07.989: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-minion-group-s3x0 Mar 18 09:36:08.241: INFO: csi-mockplugin-attacher-0 started at 2023-03-18 09:35:46 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container csi-attacher ready: true, restart count 0 Mar 18 09:36:08.241: INFO: coredns-8f5847b64-6lvkh started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container coredns ready: true, restart count 0 Mar 18 09:36:08.241: INFO: l7-default-backend-856d874f49-wj97r started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container default-http-backend ready: true, restart count 0 Mar 18 09:36:08.241: INFO: test-rs-t2gf9 started at 2023-03-18 09:36:03 +0000 UTC (0+2 container statuses recorded) Mar 18 09:36:08.241: INFO: Container httpd ready: true, restart count 0 Mar 18 09:36:08.241: INFO: Container test-rs ready: true, restart count 0 Mar 18 09:36:08.241: INFO: volume-snapshot-controller-0 started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container volume-snapshot-controller ready: true, restart count 0 Mar 18 09:36:08.241: INFO: simpletest.rc-8l6kp started at 2023-03-18 09:35:34 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container 
nginx ready: true, restart count 0 Mar 18 09:36:08.241: INFO: simpletest.rc-pjfp8 started at 2023-03-18 09:35:35 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:08.241: INFO: simpletest.rc-hqbml started at 2023-03-18 09:35:35 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:08.241: INFO: simpletest.rc-z5swr started at 2023-03-18 09:35:37 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:08.241: INFO: kube-dns-autoscaler-7b444c59c9-bfphp started at 2023-03-18 09:27:27 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container autoscaler ready: true, restart count 0 Mar 18 09:36:08.241: INFO: simpletest.rc-t28c7 started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:08.241: INFO: simpletest.rc-k5b8p started at 2023-03-18 09:35:36 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:08.241: INFO: simpletest.rc-zmv5z started at 2023-03-18 09:35:37 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:08.241: INFO: simpletest.rc-hcvnf started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:08.241: INFO: simpletest.rc-s76ln started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:08.241: INFO: service-headless-6zb9p started at 2023-03-18 09:35:37 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container service-headless ready: true, 
restart count 0 Mar 18 09:36:08.241: INFO: pod-eba73012-ac52-4fd8-9433-5e3755c6e150 started at 2023-03-18 09:31:24 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container write-pod ready: false, restart count 0 Mar 18 09:36:08.241: INFO: service-proxy-toggled-lb5p6 started at 2023-03-18 09:35:14 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container service-proxy-toggled ready: true, restart count 0 Mar 18 09:36:08.241: INFO: simpletest.rc-wg9dz started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:08.241: INFO: csi-hostpathplugin-0 started at 2023-03-18 09:31:07 +0000 UTC (0+7 container statuses recorded) Mar 18 09:36:08.241: INFO: Container csi-attacher ready: true, restart count 0 Mar 18 09:36:08.241: INFO: Container csi-provisioner ready: true, restart count 0 Mar 18 09:36:08.241: INFO: Container csi-resizer ready: true, restart count 0 Mar 18 09:36:08.241: INFO: Container csi-snapshotter ready: true, restart count 0 Mar 18 09:36:08.241: INFO: Container hostpath ready: true, restart count 0 Mar 18 09:36:08.241: INFO: Container liveness-probe ready: true, restart count 0 Mar 18 09:36:08.241: INFO: Container node-driver-registrar ready: true, restart count 0 Mar 18 09:36:08.241: INFO: simpletest.rc-fl7hb started at 2023-03-18 09:35:34 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:08.241: INFO: metadata-proxy-v0.1-5k8j4 started at 2023-03-18 09:22:30 +0000 UTC (0+2 container statuses recorded) Mar 18 09:36:08.241: INFO: Container metadata-proxy ready: true, restart count 0 Mar 18 09:36:08.241: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Mar 18 09:36:08.241: INFO: simpletest.rc-jfdr9 started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container nginx 
ready: true, restart count 0 Mar 18 09:36:08.241: INFO: simpletest.rc-d4g8n started at 2023-03-18 09:35:34 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:08.241: INFO: simpletest.rc-tdh2d started at 2023-03-18 09:35:35 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:08.241: INFO: simpletest.rc-f7b5m started at 2023-03-18 09:35:36 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:08.241: INFO: simpletest.rc-4kv24 started at 2023-03-18 09:35:36 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:08.241: INFO: kube-proxy-e2e-9e86028ad1-674b9-minion-group-s3x0 started at 2023-03-18 09:22:30 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container kube-proxy ready: true, restart count 0 Mar 18 09:36:08.241: INFO: simpletest.rc-7v49p started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:08.241: INFO: simpletest.rc-x5mb4 started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:08.241: INFO: konnectivity-agent-hv8gl started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container konnectivity-agent ready: true, restart count 0 Mar 18 09:36:08.241: INFO: simpletest.rc-h6klq started at 2023-03-18 09:35:36 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:08.241: INFO: pvc-volume-tester-2gmjs started at 2023-03-18 09:35:38 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container 
volume-tester ready: true, restart count 0 Mar 18 09:36:08.241: INFO: pod-9c52b100-d6e7-444a-b48f-2643e3956bf6 started at 2023-03-18 09:31:12 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container write-pod ready: true, restart count 0 Mar 18 09:36:08.241: INFO: simpletest.rc-cc9vj started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:08.241: INFO: simpletest.rc-mnvdx started at 2023-03-18 09:35:34 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:08.241: INFO: simpletest.rc-l45lg started at 2023-03-18 09:35:36 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:08.241: INFO: pod-8f54d174-21fd-4a9c-ad85-47afd10762ab started at 2023-03-18 09:36:03 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container write-pod ready: true, restart count 0 Mar 18 09:36:08.241: INFO: simpletest.rc-xjwnj started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:08.241: INFO: simpletest.rc-f2bwt started at 2023-03-18 09:35:35 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:08.241: INFO: simpletest.rc-8k4tk started at 2023-03-18 09:35:35 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:08.241: INFO: netserver-2 started at 2023-03-18 09:34:35 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container webserver ready: true, restart count 0 Mar 18 09:36:08.241: INFO: simpletest.rc-t8hs2 started at 2023-03-18 09:35:35 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container nginx 
ready: true, restart count 0 Mar 18 09:36:08.241: INFO: service-proxy-disabled-gjvkb started at 2023-03-18 09:35:11 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container service-proxy-disabled ready: true, restart count 0 Mar 18 09:36:08.241: INFO: simpletest.rc-rrc8f started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:08.241: INFO: simpletest.rc-hbtws started at 2023-03-18 09:35:33 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:08.241: INFO: simpletest.rc-bstrc started at 2023-03-18 09:35:34 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:08.241: INFO: explicit-root-uid started at 2023-03-18 09:36:07 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container explicit-root-uid ready: false, restart count 0 Mar 18 09:36:08.241: INFO: simpletest.rc-r47n9 started at 2023-03-18 09:35:34 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:08.241: INFO: csi-mockplugin-0 started at 2023-03-18 09:35:08 +0000 UTC (0+4 container statuses recorded) Mar 18 09:36:08.241: INFO: Container busybox ready: true, restart count 0 Mar 18 09:36:08.241: INFO: Container csi-provisioner ready: true, restart count 0 Mar 18 09:36:08.241: INFO: Container driver-registrar ready: true, restart count 0 Mar 18 09:36:08.241: INFO: Container mock ready: true, restart count 0 Mar 18 09:36:08.241: INFO: hostexec-e2e-9e86028ad1-674b9-minion-group-s3x0-hxznr started at 2023-03-18 09:36:00 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:36:08.241: INFO: service-headless-toggled-4hm5z started at 2023-03-18 09:35:52 +0000 UTC 
(0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container service-headless-toggled ready: true, restart count 0 Mar 18 09:36:08.241: INFO: npd-v0.8.9-xsl94 started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container node-problem-detector ready: true, restart count 0 Mar 18 09:36:08.241: INFO: simpletest.rc-4tmzl started at 2023-03-18 09:35:35 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:08.241: INFO: simpletest.rc-pk4v2 started at 2023-03-18 09:35:36 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:08.241: INFO: csi-mockplugin-0 started at 2023-03-18 09:35:46 +0000 UTC (0+3 container statuses recorded) Mar 18 09:36:08.241: INFO: Container csi-provisioner ready: true, restart count 0 Mar 18 09:36:08.241: INFO: Container driver-registrar ready: true, restart count 0 Mar 18 09:36:08.241: INFO: Container mock ready: true, restart count 0 Mar 18 09:36:08.241: INFO: csi-mockplugin-0 started at 2023-03-18 09:35:25 +0000 UTC (0+4 container statuses recorded) Mar 18 09:36:08.241: INFO: Container busybox ready: true, restart count 0 Mar 18 09:36:08.241: INFO: Container csi-provisioner ready: true, restart count 0 Mar 18 09:36:08.241: INFO: Container driver-registrar ready: true, restart count 0 Mar 18 09:36:08.241: INFO: Container mock ready: true, restart count 0 Mar 18 09:36:08.241: INFO: simpletest.rc-gzsgs started at 2023-03-18 09:35:36 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:08.241: INFO: Container nginx ready: true, restart count 0 Mar 18 09:36:08.700: INFO: Latency metrics for node e2e-9e86028ad1-674b9-minion-group-s3x0 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/18/23 09:36:08.7 (3.204s) < Exit [DeferCleanup (Each)] [sig-node] Pods Extended - dump namespaces | framework.go:209 
@ 03/18/23 09:36:08.7 (3.204s) > Enter [DeferCleanup (Each)] [sig-node] Pods Extended - tear down framework | framework.go:206 @ 03/18/23 09:36:08.7 STEP: Destroying namespace "pods-2891" for this suite. - test/e2e/framework/framework.go:351 @ 03/18/23 09:36:08.7 < Exit [DeferCleanup (Each)] [sig-node] Pods Extended - tear down framework | framework.go:206 @ 03/18/23 09:36:08.75 (49ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/18/23 09:36:08.75 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/18/23 09:36:08.75 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sProbing\scontainer\sshould\s\*not\*\sbe\srestarted\swith\sa\snon\-local\sredirect\shttp\sliveness\sprobe$'
[FAILED] timed out waiting for the condition In [It] at: test/e2e/common/node/container_probe.go:311 @ 03/18/23 09:37:08.9 from junit_01.xml
> Enter [BeforeEach] [sig-node] Probing container - set up framework | framework.go:191 @ 03/18/23 09:31:03.096 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:211 @ 03/18/23 09:31:03.096 Mar 18 09:31:03.096: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename container-probe - test/e2e/framework/framework.go:250 @ 03/18/23 09:31:03.097 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 03/18/23 09:31:03.224 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:262 @ 03/18/23 09:31:03.308 < Exit [BeforeEach] [sig-node] Probing container - set up framework | framework.go:191 @ 03/18/23 09:31:03.396 (300ms) > Enter [BeforeEach] [sig-node] Probing container - test/e2e/framework/metrics/init/init.go:33 @ 03/18/23 09:31:03.396 < Exit [BeforeEach] [sig-node] Probing container - test/e2e/framework/metrics/init/init.go:33 @ 03/18/23 09:31:03.396 (0s) > Enter [BeforeEach] [sig-node] Probing container - test/e2e/common/node/container_probe.go:62 @ 03/18/23 09:31:03.396 < Exit [BeforeEach] [sig-node] Probing container - test/e2e/common/node/container_probe.go:62 @ 03/18/23 09:31:03.396 (0s) > Enter [It] should *not* be restarted with a non-local redirect http liveness probe - test/e2e/common/node/container_probe.go:296 @ 03/18/23 09:31:03.396 STEP: Creating pod liveness-6e42f777-5053-4740-856d-77370ed5796a in namespace container-probe-5215 - test/e2e/common/node/container_probe.go:955 @ 03/18/23 09:31:03.396 Mar 18 09:31:07.569: INFO: Started pod liveness-6e42f777-5053-4740-856d-77370ed5796a in namespace container-probe-5215 STEP: checking the pod's current state and verifying that restartCount is present - test/e2e/common/node/container_probe.go:966 @ 03/18/23 09:31:07.569 Mar 18 09:31:07.616: INFO: Initial restart count of pod liveness-6e42f777-5053-4740-856d-77370ed5796a is 0 Automatically polling 
progress: [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe (Spec Runtime: 5m0.301s) test/e2e/common/node/container_probe.go:296 In [It] (Node Runtime: 5m0s) test/e2e/common/node/container_probe.go:296 At [By Step] checking the pod's current state and verifying that restartCount is present (Step Runtime: 4m55.827s) test/e2e/common/node/container_probe.go:966 Spec Goroutine goroutine 3226 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x7f80001695b8, 0xc0050b9e60}, 0xc004630588, 0x2bc6eca?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:205 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7f80001695b8, 0xc0050b9e60}, 0x18?, 0x18?, 0x72b2630?) vendor/k8s.io/apimachinery/pkg/util/wait/poll.go:260 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7f80001695b8, 0xc0050b9e60}, 0xc002ca6360?, 0x14?, 0xc0014940a0?) vendor/k8s.io/apimachinery/pkg/util/wait/poll.go:175 k8s.io/kubernetes/test/e2e/framework/events.WaitTimeoutForEvent({0x7f80001695b8, 0xc0050b9e60}, {0x72b2630?, 0xc001042b60?}, {0xc002ca6360?, 0xc001f20000?}, {0xc0014940a0?, 0x0?}, {0x6c429ef, 0x4f}, ...) 
test/e2e/framework/events/events.go:37 > k8s.io/kubernetes/test/e2e/common/node.glob..func2.14({0x7f80001695b8, 0xc0050b9e60}) test/e2e/common/node/container_probe.go:311 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x727dd60?, 0xc0050b9e60}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:456 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:863 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:850 Automatically polling progress: [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe (Spec Runtime: 5m20.302s) test/e2e/common/node/container_probe.go:296 In [It] (Node Runtime: 5m20.001s) test/e2e/common/node/container_probe.go:296 At [By Step] checking the pod's current state and verifying that restartCount is present (Step Runtime: 5m15.828s) test/e2e/common/node/container_probe.go:966 Spec Goroutine goroutine 3226 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x7f80001695b8, 0xc0050b9e60}, 0xc004630588, 0x2bc6eca?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:205 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7f80001695b8, 0xc0050b9e60}, 0x18?, 0x18?, 0x72b2630?) vendor/k8s.io/apimachinery/pkg/util/wait/poll.go:260 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7f80001695b8, 0xc0050b9e60}, 0xc002ca6360?, 0x14?, 0xc0014940a0?) vendor/k8s.io/apimachinery/pkg/util/wait/poll.go:175 k8s.io/kubernetes/test/e2e/framework/events.WaitTimeoutForEvent({0x7f80001695b8, 0xc0050b9e60}, {0x72b2630?, 0xc001042b60?}, {0xc002ca6360?, 0xc001f20000?}, {0xc0014940a0?, 0x0?}, {0x6c429ef, 0x4f}, ...) 
test/e2e/framework/events/events.go:37
> k8s.io/kubernetes/test/e2e/common/node.glob..func2.14({0x7f80001695b8, 0xc0050b9e60})
    test/e2e/common/node/container_probe.go:311
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x727dd60?, 0xc0050b9e60})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:456
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:863
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:850
Mar 18 09:37:08.900: INFO: Unexpected error: <wait.errInterrupted>: timed out waiting for the condition { cause: <*errors.errorString | 0xc0001fdbe0>{ s: "timed out waiting for the condition", }, }
[FAILED] timed out waiting for the condition
In [It] at: test/e2e/common/node/container_probe.go:311 @ 03/18/23 09:37:08.9
< Exit [It] should *not* be restarted with a non-local redirect http liveness probe - test/e2e/common/node/container_probe.go:296 @ 03/18/23 09:37:08.9 (6m5.504s)
> Enter [AfterEach] [sig-node] Probing container - test/e2e/framework/node/init/init.go:33 @ 03/18/23 09:37:08.9
Mar 18 09:37:08.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
< Exit [AfterEach] [sig-node] Probing container - test/e2e/framework/node/init/init.go:33 @ 03/18/23 09:37:09.02 (119ms)
> Enter [DeferCleanup (Each)] [sig-node] Probing container - test/e2e/common/node/container_probe.go:951 @ 03/18/23 09:37:09.02
STEP: deleting the pod - test/e2e/common/node/container_probe.go:952 @ 03/18/23 09:37:09.02
< Exit [DeferCleanup (Each)] [sig-node] Probing container - test/e2e/common/node/container_probe.go:951 @ 03/18/23 09:37:09.07 (50ms)
> Enter [DeferCleanup (Each)] [sig-node] Probing container - test/e2e/framework/metrics/init/init.go:35 @ 03/18/23 09:37:09.07
< Exit [DeferCleanup (Each)] [sig-node] Probing container - test/e2e/framework/metrics/init/init.go:35 @ 03/18/23 09:37:09.07 (0s)
> Enter [DeferCleanup (Each)] [sig-node] Probing container - dump namespaces | framework.go:209 @ 03/18/23 09:37:09.07
STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/18/23 09:37:09.07
STEP: Collecting events from namespace "container-probe-5215". - test/e2e/framework/debug/dump.go:42 @ 03/18/23 09:37:09.07
STEP: Found 1 events. - test/e2e/framework/debug/dump.go:46 @ 03/18/23 09:37:09.111
Mar 18 09:37:09.111: INFO: At 2023-03-18 09:31:03 +0000 UTC - event for liveness-6e42f777-5053-4740-856d-77370ed5796a: {default-scheduler } Scheduled: Successfully assigned container-probe-5215/liveness-6e42f777-5053-4740-856d-77370ed5796a to e2e-9e86028ad1-674b9-minion-group-6qbb
Mar 18 09:37:09.153: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 18 09:37:09.153: INFO:
Mar 18 09:37:09.202: INFO: Logging node info for node e2e-9e86028ad1-674b9-master
Mar 18 09:37:09.245: INFO: Node Info: &Node{ObjectMeta:{e2e-9e86028ad1-674b9-master 267e5023-e569-49ff-9163-80ff52b2e553 31205 0 2023-03-18 09:22:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-18 09:22:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-03-18 09:33:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} 
{<nil>} 1 DecimalSI},ephemeral-storage: {{20617822208 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3848937472 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{18556039957 0} {<nil>} 18556039957 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3586793472 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:33:04 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 09:33:04 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:33:04 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:33:04 +0000 UTC,LastTransitionTime:2023-03-18 09:22:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.2,},NodeAddress{Type:ExternalIP,Address:34.83.71.20,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-master.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-master.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:42d0f710c860b2bd41ccdaf5ca173acd,SystemUUID:42d0f710-c860-b2bd-41cc-daf5ca173acd,BootID:16a230c9-f4cf-4c24-8530-18ce9126f638,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:121906531,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:113849341,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:59679728,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 
registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 18 09:37:09.246: INFO: Logging kubelet events for node e2e-9e86028ad1-674b9-master
Mar 18 09:37:09.294: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-master
Mar 18 09:37:09.360: INFO: l7-lb-controller-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:22:07 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:37:09.360: INFO: Container l7-lb-controller ready: true, restart count 2
Mar 18 09:37:09.360: INFO: metadata-proxy-v0.1-nnl66 started at 2023-03-18 09:22:38 +0000 UTC (0+2 container statuses recorded)
Mar 18 09:37:09.360: INFO: Container metadata-proxy ready: true, restart count 0
Mar 18 09:37:09.360: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0
Mar 18 09:37:09.360: INFO: etcd-server-events-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:37:09.360: INFO: Container etcd-container ready: true, restart count 0
Mar 18 09:37:09.360: INFO: kube-apiserver-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:37:09.360: INFO: Container kube-apiserver ready: true, restart count 0
Mar 18 09:37:09.360: INFO: kube-addon-manager-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:22:06 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:37:09.360: INFO: Container kube-addon-manager ready: true, restart count 0
Mar 18 09:37:09.360: INFO: kube-scheduler-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:37:09.360: INFO: Container kube-scheduler ready: true, restart count 0
Mar 18 09:37:09.360: INFO: etcd-server-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:37:09.360: INFO: Container etcd-container ready: true, restart count 0
Mar 18 09:37:09.360: INFO: konnectivity-server-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:37:09.360: INFO: Container konnectivity-server-container ready: true, restart count 0
Mar 18 09:37:09.360: INFO: kube-controller-manager-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:37:09.360: INFO: Container kube-controller-manager ready: true, restart count 1
Mar 18 09:37:09.561: INFO: Latency metrics for node e2e-9e86028ad1-674b9-master
Mar 18 09:37:09.561: INFO: Logging node info for node e2e-9e86028ad1-674b9-minion-group-6qbb
Mar 18 09:37:09.604: INFO: Node Info: &Node{ObjectMeta:{e2e-9e86028ad1-674b9-minion-group-6qbb 8c1a6f16-41d1-4196-bf43-efe27f8d8a66 44200 0 2023-03-18 09:22:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-minion-group-6qbb kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:e2e-9e86028ad1-674b9-minion-group-6qbb topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-9073":"e2e-9e86028ad1-674b9-minion-group-6qbb"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-03-18 09:22:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-03-18 09:22:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2023-03-18 09:36:17 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-03-18 09:36:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2023-03-18 09:36:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-minion-group-6qbb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103865303040 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7806021632 0} {<nil>} 7623068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93478772582 0} {<nil>} 93478772582 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7543877632 0} {<nil>} 7367068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-03-18 09:36:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:True,LastHeartbeatTime:2023-03-18 09:36:16 
+0000 UTC,LastTransitionTime:2023-03-18 09:31:15 +0000 UTC,Reason:DockerHung,Message:kernel: INFO: task docker:12345 blocked for more than 120 seconds.,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-03-18 09:36:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:59 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:59 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:59 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:36:59 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.3,},NodeAddress{Type:ExternalIP,Address:35.197.71.72,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-minion-group-6qbb.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-minion-group-6qbb.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8effcb9d241b10eacb89037ecd76b44c,SystemUUID:8effcb9d-241b-10ea-cb89-037ecd76b44c,BootID:8ef64925-c34b-4653-8945-c2170edc6327,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:72680802,},ContainerImage{Names:[registry.k8s.io/node-problem-detector/node-problem-detector@sha256:c4a75e50c3ee30daa78b7149de781f66236885850b1ea7b0c1a062af5019e019 
registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.9],SizeBytes:56740357,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:e468dddcd275163a042ab297b2d8c2aca50d5e148d2d22f3b6ba119e2f31fa79 registry.k8s.io/sig-storage/csi-provisioner:v3.4.0],SizeBytes:27427836,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 
registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[gcr.io/k8s-authenticated-test/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/k8s-authenticated-test/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:9b2d9abda017c28b12f84a344f57cd73fbdb6c2bd7dd5bdd5018246ad1093ba6 registry.k8s.io/sig-storage/hostpathplugin:v1.11.0],SizeBytes:18233005,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns:v1.10.1],SizeBytes:16190758,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef 
registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/build-image/distroless-iptables@sha256:13ac9b3c476d7290a4451d65d09d6016a2cb89836ffbfa4eae55b72731a22080 registry.k8s.io/build-image/distroless-iptables:v0.2.2],SizeBytes:7729580,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-9073^64d0bc27-c570-11ed-9b77-8aea266b63a5 
kubernetes.io/csi/csi-hostpath-ephemeral-9073^6ac6be21-c570-11ed-9b77-8aea266b63a5],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-9073^64d0bc27-c570-11ed-9b77-8aea266b63a5,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-9073^6ac6be21-c570-11ed-9b77-8aea266b63a5,DevicePath:,},},Config:nil,},}
Mar 18 09:37:09.604: INFO: Logging kubelet events for node e2e-9e86028ad1-674b9-minion-group-6qbb
Mar 18 09:37:09.645: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-minion-group-6qbb
Mar 18 09:37:09.698: INFO: metadata-proxy-v0.1-tpg7b started at 2023-03-18 09:22:32 +0000 UTC (0+2 container statuses recorded)
Mar 18 09:37:09.698: INFO: Container metadata-proxy ready: true, restart count 0
Mar 18 09:37:09.698: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0
Mar 18 09:37:09.698: INFO: pod1 started at 2023-03-18 09:36:31 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:37:09.698: INFO: Container agnhost-container ready: true, restart count 0
Mar 18 09:37:09.698: INFO: execpodlnjr5 started at 2023-03-18 09:36:33 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:37:09.698: INFO: Container agnhost-container ready: true, restart count 0
Mar 18 09:37:09.698: INFO: konnectivity-agent-zpvjh started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:37:09.698: INFO: Container konnectivity-agent ready: true, restart count 0
Mar 18 09:37:09.698: INFO: busybox-07255980-96b4-4e2e-af19-fb342b60f84d started at 2023-03-18 09:33:47 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:37:09.698: INFO: Container busybox ready: true, restart count 0
Mar 18 09:37:09.698: INFO: netserver-0 started at 2023-03-18 09:34:34 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:37:09.698: INFO: Container webserver ready: true, restart count 0
Mar 18 09:37:09.698: INFO: test-container-pod started at 2023-03-18 09:36:52 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:37:09.698: INFO: Container webserver ready: true, restart count 0
Mar 18 09:37:09.698: INFO: pod-ephm-test-secret-tl6g started at 2023-03-18 09:35:59 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:37:09.698: INFO: Container test-container-subpath-secret-tl6g ready: false, restart count 0
Mar 18 09:37:09.698: INFO: inline-volume-tester2-42qsl started at 2023-03-18 09:36:54 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:37:09.698: INFO: Container csi-volume-tester ready: true, restart count 0
Mar 18 09:37:09.698: INFO: netserver-0 started at 2023-03-18 09:36:28 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:37:09.698: INFO: Container webserver ready: true, restart count 0
Mar 18 09:37:09.698: INFO: csi-hostpathplugin-0 started at 2023-03-18 09:36:41 +0000 UTC (0+7 container statuses recorded)
Mar 18 09:37:09.698: INFO: Container csi-attacher ready: true, restart count 0
Mar 18 09:37:09.698: INFO: Container csi-provisioner ready: true, restart count 0
Mar 18 09:37:09.698: INFO: Container csi-resizer ready: true, restart count 0
Mar 18 09:37:09.698: INFO: Container csi-snapshotter ready: true, restart count 0
Mar 18 09:37:09.698: INFO: Container hostpath ready: true, restart count 0
Mar 18 09:37:09.698: INFO: Container liveness-probe ready: true, restart count 0
Mar 18 09:37:09.698: INFO: Container node-driver-registrar ready: true, restart count 0
Mar 18 09:37:09.698: INFO: npd-v0.8.9-fhnmg started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:37:09.698: INFO: Container node-problem-detector ready: true, restart count 0
Mar 18 09:37:09.698: INFO: kube-proxy-e2e-9e86028ad1-674b9-minion-group-6qbb started at 2023-03-18 09:22:32 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:37:09.698: INFO: Container kube-proxy ready: true, restart count 0
Mar 18 09:37:09.698: INFO: inline-volume-tester-2xdh6 started at 2023-03-18 09:36:44 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:37:09.698: INFO: 
Container csi-volume-tester ready: true, restart count 0 Mar 18 09:37:09.965: INFO: Latency metrics for node e2e-9e86028ad1-674b9-minion-group-6qbb Mar 18 09:37:09.965: INFO: Logging node info for node e2e-9e86028ad1-674b9-minion-group-l6p2 Mar 18 09:37:10.007: INFO: Node Info: &Node{ObjectMeta:{e2e-9e86028ad1-674b9-minion-group-l6p2 598ace61-5854-4ce4-a4e9-6965d58f0e1a 44307 0 2023-03-18 09:22:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-minion-group-l6p2 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:e2e-9e86028ad1-674b9-minion-group-l6p2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-9328":"e2e-9e86028ad1-674b9-minion-group-l6p2"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-18 09:22:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2023-03-18 09:36:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-03-18 09:36:57 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2023-03-18 09:37:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-minion-group-l6p2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103865303040 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7806021632 0} {<nil>} 7623068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93478772582 0} {<nil>} 93478772582 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7543877632 0} {<nil>} 7367068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:19 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:19 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning 
properly,},NodeCondition{Type:KernelDeadlock,Status:True,LastHeartbeatTime:2023-03-18 09:36:19 +0000 UTC,LastTransitionTime:2023-03-18 09:31:17 +0000 UTC,Reason:DockerHung,Message:kernel: INFO: task docker:12345 blocked for more than 120 seconds.,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-03-18 09:36:19 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-03-18 09:36:19 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:19 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:57 +0000 UTC,LastTransitionTime:2023-03-18 09:22:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:57 +0000 UTC,LastTransitionTime:2023-03-18 09:22:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:57 +0000 UTC,LastTransitionTime:2023-03-18 09:22:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:36:57 +0000 UTC,LastTransitionTime:2023-03-18 09:22:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.5,},NodeAddress{Type:ExternalIP,Address:35.230.36.22,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-minion-group-l6p2.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-minion-group-l6p2.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c65da20697a4bfad03dfea306c4caca3,SystemUUID:c65da206-97a4-bfad-03df-ea306c4caca3,BootID:bab97861-0225-4291-912a-eb1db18f8ad7,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:72680802,},ContainerImage{Names:[registry.k8s.io/node-problem-detector/node-problem-detector@sha256:c4a75e50c3ee30daa78b7149de781f66236885850b1ea7b0c1a062af5019e019 
registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.9],SizeBytes:56740357,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:e468dddcd275163a042ab297b2d8c2aca50d5e148d2d22f3b6ba119e2f31fa79 registry.k8s.io/sig-storage/csi-provisioner:v3.4.0],SizeBytes:27427836,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 
registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/debian-base@sha256:ebda8587ec0f49eb88ee3a608ef018484908cbc5aa32556a0d78356088c185d4 registry.k8s.io/debian-base:v2.0.0],SizeBytes:21093264,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:9b2d9abda017c28b12f84a344f57cd73fbdb6c2bd7dd5bdd5018246ad1093ba6 registry.k8s.io/sig-storage/hostpathplugin:v1.11.0],SizeBytes:18233005,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns:v1.10.1],SizeBytes:16190758,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/apparmor-loader@sha256:c2ba46d9cf4549528f80d4850630b712372715e0c556d35d5c3016144365d882 
registry.k8s.io/e2e-test-images/apparmor-loader:1.4],SizeBytes:8695007,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 
registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-9328^5033a820-c570-11ed-9318-269e8ba8d779],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-9328^5033a820-c570-11ed-9318-269e8ba8d779,DevicePath:,},},Config:nil,},} Mar 18 09:37:10.007: INFO: Logging kubelet events for node e2e-9e86028ad1-674b9-minion-group-l6p2 Mar 18 09:37:10.052: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-minion-group-l6p2 Mar 18 09:37:10.106: INFO: rs-spjn5 started at 2023-03-18 09:36:18 +0000 UTC (0+1 container statuses recorded) Mar 18 09:37:10.106: INFO: Container donothing ready: false, restart count 0 Mar 18 09:37:10.106: INFO: pause-pod-1 started at 2023-03-18 09:36:20 +0000 UTC (0+1 container statuses recorded) Mar 18 09:37:10.106: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:37:10.106: INFO: konnectivity-agent-879m7 started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded) Mar 18 09:37:10.106: INFO: Container konnectivity-agent ready: true, restart count 0 Mar 18 09:37:10.106: INFO: host-test-container-pod started at 2023-03-18 09:34:45 +0000 UTC (0+1 container statuses recorded) Mar 18 09:37:10.106: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:37:10.106: INFO: coredns-8f5847b64-8mvxr started at 2023-03-18 09:27:04 +0000 UTC (0+1 container statuses recorded) Mar 18 09:37:10.106: INFO: Container coredns ready: true, restart count 0 Mar 18 09:37:10.106: INFO: metadata-proxy-v0.1-424sh started at 2023-03-18 09:22:36 +0000 UTC (0+2 container statuses recorded) Mar 18 09:37:10.106: INFO: Container metadata-proxy ready: true, restart count 0 Mar 18 09:37:10.106: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Mar 18 09:37:10.106: INFO: liveness-5bb71f0a-1103-443e-978a-d66becc64152 started at 2023-03-18 09:36:22 +0000 UTC (0+1 container statuses recorded) Mar 18 09:37:10.106: 
INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:37:10.106: INFO: kube-proxy-e2e-9e86028ad1-674b9-minion-group-l6p2 started at 2023-03-18 09:22:35 +0000 UTC (0+1 container statuses recorded) Mar 18 09:37:10.106: INFO: Container kube-proxy ready: true, restart count 0 Mar 18 09:37:10.106: INFO: npd-v0.8.9-zdpdp started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded) Mar 18 09:37:10.106: INFO: Container node-problem-detector ready: true, restart count 0 Mar 18 09:37:10.106: INFO: csi-hostpathplugin-0 started at 2023-03-18 09:36:06 +0000 UTC (0+7 container statuses recorded) Mar 18 09:37:10.106: INFO: Container csi-attacher ready: true, restart count 0 Mar 18 09:37:10.106: INFO: Container csi-provisioner ready: true, restart count 0 Mar 18 09:37:10.106: INFO: Container csi-resizer ready: true, restart count 0 Mar 18 09:37:10.106: INFO: Container csi-snapshotter ready: true, restart count 0 Mar 18 09:37:10.106: INFO: Container hostpath ready: true, restart count 0 Mar 18 09:37:10.106: INFO: Container liveness-probe ready: true, restart count 0 Mar 18 09:37:10.106: INFO: Container node-driver-registrar ready: true, restart count 0 Mar 18 09:37:10.106: INFO: csi-hostpathplugin-0 started at 2023-03-18 09:36:22 +0000 UTC (0+7 container statuses recorded) Mar 18 09:37:10.106: INFO: Container csi-attacher ready: false, restart count 0 Mar 18 09:37:10.106: INFO: Container csi-provisioner ready: false, restart count 0 Mar 18 09:37:10.106: INFO: Container csi-resizer ready: false, restart count 0 Mar 18 09:37:10.106: INFO: Container csi-snapshotter ready: false, restart count 0 Mar 18 09:37:10.106: INFO: Container hostpath ready: false, restart count 0 Mar 18 09:37:10.106: INFO: Container liveness-probe ready: false, restart count 0 Mar 18 09:37:10.106: INFO: Container node-driver-registrar ready: false, restart count 0 Mar 18 09:37:10.106: INFO: test-container-pod started at 2023-03-18 09:34:45 +0000 UTC (0+1 container statuses 
recorded) Mar 18 09:37:10.106: INFO: Container webserver ready: true, restart count 0 Mar 18 09:37:10.106: INFO: netserver-1 started at 2023-03-18 09:36:28 +0000 UTC (0+1 container statuses recorded) Mar 18 09:37:10.106: INFO: Container webserver ready: true, restart count 0 Mar 18 09:37:10.106: INFO: netserver-1 started at 2023-03-18 09:34:35 +0000 UTC (0+1 container statuses recorded) Mar 18 09:37:10.106: INFO: Container webserver ready: true, restart count 0 Mar 18 09:37:10.106: INFO: inline-volume-tester-cpczh started at 2023-03-18 09:36:09 +0000 UTC (0+1 container statuses recorded) Mar 18 09:37:10.106: INFO: Container csi-volume-tester ready: true, restart count 0 Mar 18 09:37:10.106: INFO: metrics-server-v0.5.2-57d47cbf5-gtdjb started at 2023-03-18 09:22:55 +0000 UTC (0+2 container statuses recorded) Mar 18 09:37:10.106: INFO: Container metrics-server ready: true, restart count 0 Mar 18 09:37:10.106: INFO: Container metrics-server-nanny ready: true, restart count 0 Mar 18 09:37:10.370: INFO: Latency metrics for node e2e-9e86028ad1-674b9-minion-group-l6p2 Mar 18 09:37:10.370: INFO: Logging node info for node e2e-9e86028ad1-674b9-minion-group-s3x0 Mar 18 09:37:10.417: INFO: Node Info: &Node{ObjectMeta:{e2e-9e86028ad1-674b9-minion-group-s3x0 4bd190bd-b287-42b8-bf6e-86b0dfbbe357 44155 0 2023-03-18 09:22:29 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-minion-group-s3x0 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:e2e-9e86028ad1-674b9-minion-group-s3x0 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-18 09:22:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-03-18 09:36:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-03-18 09:36:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-03-18 09:36:38 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-minion-group-s3x0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103865303040 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{7806021632 0} {<nil>} 7623068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93478772582 0} {<nil>} 93478772582 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7543877632 0} {<nil>} 7367068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:21 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:21 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:21 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:True,LastHeartbeatTime:2023-03-18 09:36:21 +0000 UTC,LastTransitionTime:2023-03-18 09:31:19 +0000 UTC,Reason:DockerHung,Message:kernel: INFO: task docker:12345 blocked for more than 120 seconds.,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-03-18 09:36:21 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-03-18 09:36:21 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:39 +0000 UTC,LastTransitionTime:2023-03-18 09:22:39 +0000 
UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:38 +0000 UTC,LastTransitionTime:2023-03-18 09:22:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:38 +0000 UTC,LastTransitionTime:2023-03-18 09:22:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:38 +0000 UTC,LastTransitionTime:2023-03-18 09:22:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:36:38 +0000 UTC,LastTransitionTime:2023-03-18 09:22:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.4,},NodeAddress{Type:ExternalIP,Address:35.197.75.196,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-minion-group-s3x0.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-minion-group-s3x0.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f48035c3fd56450624fe69b9577c2359,SystemUUID:f48035c3-fd56-4506-24fe-69b9577c2359,BootID:f7e24489-9538-4f10-8ca6-393d7dcc2190,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e 
registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:72680802,},ContainerImage{Names:[registry.k8s.io/node-problem-detector/node-problem-detector@sha256:c4a75e50c3ee30daa78b7149de781f66236885850b1ea7b0c1a062af5019e019 registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.9],SizeBytes:56740357,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:e468dddcd275163a042ab297b2d8c2aca50d5e148d2d22f3b6ba119e2f31fa79 registry.k8s.io/sig-storage/csi-provisioner:v3.4.0],SizeBytes:27427836,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 
registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[gcr.io/k8s-authenticated-test/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/k8s-authenticated-test/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:9b2d9abda017c28b12f84a344f57cd73fbdb6c2bd7dd5bdd5018246ad1093ba6 registry.k8s.io/sig-storage/hostpathplugin:v1.11.0],SizeBytes:18233005,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 
registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns:v1.10.1],SizeBytes:16190758,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/apparmor-loader@sha256:c2ba46d9cf4549528f80d4850630b712372715e0c556d35d5c3016144365d882 registry.k8s.io/e2e-test-images/apparmor-loader:1.4],SizeBytes:8695007,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 
registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 18 09:37:10.417: INFO: Logging kubelet events for node e2e-9e86028ad1-674b9-minion-group-s3x0 Mar 18 09:37:10.459: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-minion-group-s3x0 Mar 18 09:37:10.511: INFO: dns-test-b2664382-6712-4e98-88a1-df79367c4597 started at 2023-03-18 09:36:41 +0000 UTC (0+3 container statuses recorded) Mar 18 09:37:10.511: INFO: Container jessie-querier ready: true, restart count 0 Mar 18 09:37:10.511: INFO: Container querier ready: true, restart count 0 Mar 18 09:37:10.511: INFO: Container webserver ready: true, restart count 0 Mar 18 09:37:10.511: INFO: kube-proxy-e2e-9e86028ad1-674b9-minion-group-s3x0 started at 2023-03-18 09:22:30 +0000 UTC (0+1 container statuses recorded) Mar 18 09:37:10.511: INFO: Container kube-proxy ready: true, restart count 0 Mar 18 09:37:10.511: INFO: 
konnectivity-agent-hv8gl started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:37:10.511: INFO: Container konnectivity-agent ready: true, restart count 0 Mar 18 09:37:10.511: INFO: netserver-2 started at 2023-03-18 09:34:35 +0000 UTC (0+1 container statuses recorded) Mar 18 09:37:10.511: INFO: Container webserver ready: true, restart count 0 Mar 18 09:37:10.511: INFO: explicit-root-uid started at 2023-03-18 09:36:07 +0000 UTC (0+1 container statuses recorded) Mar 18 09:37:10.511: INFO: Container explicit-root-uid ready: false, restart count 0 Mar 18 09:37:10.511: INFO: npd-v0.8.9-xsl94 started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:37:10.511: INFO: Container node-problem-detector ready: true, restart count 0 Mar 18 09:37:10.511: INFO: l7-default-backend-856d874f49-wj97r started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:37:10.511: INFO: Container default-http-backend ready: true, restart count 0 Mar 18 09:37:10.511: INFO: netserver-2 started at 2023-03-18 09:36:28 +0000 UTC (0+1 container statuses recorded) Mar 18 09:37:10.511: INFO: Container webserver ready: true, restart count 0 Mar 18 09:37:10.511: INFO: coredns-8f5847b64-6lvkh started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:37:10.511: INFO: Container coredns ready: true, restart count 0 Mar 18 09:37:10.511: INFO: volume-snapshot-controller-0 started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:37:10.511: INFO: Container volume-snapshot-controller ready: true, restart count 0 Mar 18 09:37:10.511: INFO: kube-dns-autoscaler-7b444c59c9-bfphp started at 2023-03-18 09:27:27 +0000 UTC (0+1 container statuses recorded) Mar 18 09:37:10.511: INFO: Container autoscaler ready: true, restart count 0 Mar 18 09:37:10.511: INFO: metadata-proxy-v0.1-5k8j4 started at 2023-03-18 09:22:30 +0000 UTC (0+2 container statuses recorded) Mar 18 09:37:10.511: INFO: 
Container metadata-proxy ready: true, restart count 0 Mar 18 09:37:10.511: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Mar 18 09:37:10.760: INFO: Latency metrics for node e2e-9e86028ad1-674b9-minion-group-s3x0 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/18/23 09:37:10.76 (1.69s) < Exit [DeferCleanup (Each)] [sig-node] Probing container - dump namespaces | framework.go:209 @ 03/18/23 09:37:10.76 (1.69s) > Enter [DeferCleanup (Each)] [sig-node] Probing container - tear down framework | framework.go:206 @ 03/18/23 09:37:10.76 STEP: Destroying namespace "container-probe-5215" for this suite. - test/e2e/framework/framework.go:351 @ 03/18/23 09:37:10.76 < Exit [DeferCleanup (Each)] [sig-node] Probing container - tear down framework | framework.go:206 @ 03/18/23 09:37:10.804 (44ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/18/23 09:37:10.804 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/18/23 09:37:10.804 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sSecurity\sContext\sWhen\screating\sa\scontainer\swith\srunAsNonRoot\sshould\snot\srun\swith\san\sexplicit\sroot\suser\sID\s\[LinuxOnly\]$'
[FAILED] timed out waiting for the condition In [It] at: test/e2e/common/node/security_context.go:406 @ 03/18/23 09:41:07.845
from junit_01.xml
> Enter [BeforeEach] [sig-node] Security Context - set up framework | framework.go:191 @ 03/18/23 09:36:07.322 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:211 @ 03/18/23 09:36:07.322 Mar 18 09:36:07.322: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename security-context-test - test/e2e/framework/framework.go:250 @ 03/18/23 09:36:07.323 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 03/18/23 09:36:07.513 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:262 @ 03/18/23 09:36:07.607 < Exit [BeforeEach] [sig-node] Security Context - set up framework | framework.go:191 @ 03/18/23 09:36:07.698 (376ms) > Enter [BeforeEach] [sig-node] Security Context - test/e2e/framework/metrics/init/init.go:33 @ 03/18/23 09:36:07.698 < Exit [BeforeEach] [sig-node] Security Context - test/e2e/framework/metrics/init/init.go:33 @ 03/18/23 09:36:07.698 (0s) > Enter [BeforeEach] [sig-node] Security Context - test/e2e/common/node/security_context.go:50 @ 03/18/23 09:36:07.698 < Exit [BeforeEach] [sig-node] Security Context - test/e2e/common/node/security_context.go:50 @ 03/18/23 09:36:07.698 (0s) > Enter [It] should not run with an explicit root user ID [LinuxOnly] - test/e2e/common/node/security_context.go:398 @ 03/18/23 09:36:07.698 Automatically polling progress: [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly] (Spec Runtime: 5m0.377s) test/e2e/common/node/security_context.go:398 In [It] (Node Runtime: 5m0s) test/e2e/common/node/security_context.go:398 Spec Goroutine goroutine 4976 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x7f0dbc1df0b0, 0xc003aeccf0}, 0xc0027824b0, 0x2bc6eca?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:205 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7f0dbc1df0b0, 0xc003aeccf0}, 0x70?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/poll.go:260 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7f0dbc1df0b0, 0xc003aeccf0}, 0xc003994500?, 0x7f0dbc1df0b0?, 0xc003aeccf0?) vendor/k8s.io/apimachinery/pkg/util/wait/poll.go:85 k8s.io/kubernetes/test/e2e/framework/pod.(*PodClient).WaitForErrorEventOrSuccess(0xc004a598d8?, {0x7f0dbc1df0b0?, 0xc003aeccf0?}, 0x2b?) test/e2e/framework/pod/pod_client.go:261 > k8s.io/kubernetes/test/e2e/common/node.glob..func21.4.3({0x7f0dbc1df0b0, 0xc003aeccf0}) test/e2e/common/node/security_context.go:405 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x727dd60?, 0xc003aeccf0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:456 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:863 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:850 Mar 18 09:41:07.844: INFO: Unexpected error: <wait.errInterrupted>: timed out waiting for the condition { cause: <*errors.errorString | 0xc0001fdbe0>{ s: "timed out waiting for the condition", }, } [FAILED] timed out waiting for the condition In [It] at: test/e2e/common/node/security_context.go:406 @ 03/18/23 09:41:07.845 < Exit [It] should not run with an explicit root user ID [LinuxOnly] - test/e2e/common/node/security_context.go:398 @ 03/18/23 09:41:07.845 (5m0.147s) > Enter [AfterEach] [sig-node] Security Context - test/e2e/framework/node/init/init.go:33 @ 03/18/23 09:41:07.845 Mar 18 09:41:07.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-node] Security Context - test/e2e/framework/node/init/init.go:33 @ 03/18/23 09:41:07.964 (119ms) > Enter [DeferCleanup (Each)] 
[sig-node] Security Context - test/e2e/framework/metrics/init/init.go:35 @ 03/18/23 09:41:07.964 < Exit [DeferCleanup (Each)] [sig-node] Security Context - test/e2e/framework/metrics/init/init.go:35 @ 03/18/23 09:41:07.964 (0s) > Enter [DeferCleanup (Each)] [sig-node] Security Context - dump namespaces | framework.go:209 @ 03/18/23 09:41:07.964 STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/18/23 09:41:07.964 STEP: Collecting events from namespace "security-context-test-3115". - test/e2e/framework/debug/dump.go:42 @ 03/18/23 09:41:07.964 STEP: Found 1 events. - test/e2e/framework/debug/dump.go:46 @ 03/18/23 09:41:08.006 Mar 18 09:41:08.006: INFO: At 2023-03-18 09:36:07 +0000 UTC - event for explicit-root-uid: {default-scheduler } Scheduled: Successfully assigned security-context-test-3115/explicit-root-uid to e2e-9e86028ad1-674b9-minion-group-s3x0 Mar 18 09:41:08.047: INFO: POD NODE PHASE GRACE CONDITIONS Mar 18 09:41:08.047: INFO: explicit-root-uid e2e-9e86028ad1-674b9-minion-group-s3x0 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-18 09:36:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-18 09:36:07 +0000 UTC ContainersNotReady containers with unready status: [explicit-root-uid]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-18 09:36:07 +0000 UTC ContainersNotReady containers with unready status: [explicit-root-uid]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-18 09:36:07 +0000 UTC }] Mar 18 09:41:08.047: INFO: Mar 18 09:41:08.107: INFO: Unable to fetch security-context-test-3115/explicit-root-uid/explicit-root-uid logs: the server rejected our request for an unknown reason (get pods explicit-root-uid) Mar 18 09:41:08.156: INFO: Logging node info for node e2e-9e86028ad1-674b9-master Mar 18 09:41:08.199: INFO: Node Info: &Node{ObjectMeta:{e2e-9e86028ad1-674b9-master 267e5023-e569-49ff-9163-80ff52b2e553 44648 0 2023-03-18 09:22:32 +0000 UTC <nil> <nil> 
map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-18 09:22:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-03-18 09:38:10 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20617822208 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3848937472 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{18556039957 0} {<nil>} 18556039957 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3586793472 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:38:10 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 
09:38:10 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:38:10 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:38:10 +0000 UTC,LastTransitionTime:2023-03-18 09:22:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.2,},NodeAddress{Type:ExternalIP,Address:34.83.71.20,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-master.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-master.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:42d0f710c860b2bd41ccdaf5ca173acd,SystemUUID:42d0f710-c860-b2bd-41cc-daf5ca173acd,BootID:16a230c9-f4cf-4c24-8530-18ce9126f638,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:121906531,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:113849341,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 
registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:59679728,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 18 09:41:08.199: INFO: Logging kubelet events for node e2e-9e86028ad1-674b9-master Mar 18 09:41:08.240: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-master Mar 18 09:41:08.299: INFO: etcd-server-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:41:08.299: INFO: Container etcd-container ready: true, restart count 0 Mar 18 09:41:08.299: INFO: konnectivity-server-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 
09:41:08.299: INFO: Container konnectivity-server-container ready: true, restart count 0 Mar 18 09:41:08.299: INFO: kube-controller-manager-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:41:08.299: INFO: Container kube-controller-manager ready: true, restart count 1 Mar 18 09:41:08.299: INFO: kube-scheduler-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:41:08.299: INFO: Container kube-scheduler ready: true, restart count 0 Mar 18 09:41:08.299: INFO: etcd-server-events-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:41:08.299: INFO: Container etcd-container ready: true, restart count 0 Mar 18 09:41:08.299: INFO: kube-apiserver-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:41:08.299: INFO: Container kube-apiserver ready: true, restart count 0 Mar 18 09:41:08.299: INFO: kube-addon-manager-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:22:06 +0000 UTC (0+1 container statuses recorded) Mar 18 09:41:08.299: INFO: Container kube-addon-manager ready: true, restart count 0 Mar 18 09:41:08.299: INFO: l7-lb-controller-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:22:07 +0000 UTC (0+1 container statuses recorded) Mar 18 09:41:08.299: INFO: Container l7-lb-controller ready: true, restart count 2 Mar 18 09:41:08.299: INFO: metadata-proxy-v0.1-nnl66 started at 2023-03-18 09:22:38 +0000 UTC (0+2 container statuses recorded) Mar 18 09:41:08.299: INFO: Container metadata-proxy ready: true, restart count 0 Mar 18 09:41:08.299: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Mar 18 09:41:08.492: INFO: Latency metrics for node e2e-9e86028ad1-674b9-master Mar 18 09:41:08.492: INFO: Logging node info for node e2e-9e86028ad1-674b9-minion-group-6qbb Mar 18 09:41:08.535: INFO: Node Info: 
&Node{ObjectMeta:{e2e-9e86028ad1-674b9-minion-group-6qbb 8c1a6f16-41d1-4196-bf43-efe27f8d8a66 44725 0 2023-03-18 09:22:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-minion-group-6qbb kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:e2e-9e86028ad1-674b9-minion-group-6qbb topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-03-18 09:22:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-03-18 09:22:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2023-03-18 09:36:17 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-03-18 09:37:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-03-18 09:38:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-minion-group-6qbb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 
DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103865303040 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7806021632 0} {<nil>} 7623068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93478772582 0} {<nil>} 93478772582 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7543877632 0} {<nil>} 7367068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-03-18 09:36:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:True,LastHeartbeatTime:2023-03-18 09:36:16 +0000 UTC,LastTransitionTime:2023-03-18 09:31:15 +0000 UTC,Reason:DockerHung,Message:kernel: INFO: task docker:12345 blocked for more than 120 seconds.,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-03-18 09:36:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 
UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:38:11 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 09:38:11 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:38:11 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:38:11 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.3,},NodeAddress{Type:ExternalIP,Address:35.197.71.72,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-minion-group-6qbb.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-minion-group-6qbb.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8effcb9d241b10eacb89037ecd76b44c,SystemUUID:8effcb9d-241b-10ea-cb89-037ecd76b44c,BootID:8ef64925-c34b-4653-8945-c2170edc6327,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:72680802,},ContainerImage{Names:[registry.k8s.io/node-problem-detector/node-problem-detector@sha256:c4a75e50c3ee30daa78b7149de781f66236885850b1ea7b0c1a062af5019e019 
registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.9],SizeBytes:56740357,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:e468dddcd275163a042ab297b2d8c2aca50d5e148d2d22f3b6ba119e2f31fa79 registry.k8s.io/sig-storage/csi-provisioner:v3.4.0],SizeBytes:27427836,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 
registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[gcr.io/k8s-authenticated-test/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/k8s-authenticated-test/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:9b2d9abda017c28b12f84a344f57cd73fbdb6c2bd7dd5bdd5018246ad1093ba6 registry.k8s.io/sig-storage/hostpathplugin:v1.11.0],SizeBytes:18233005,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns:v1.10.1],SizeBytes:16190758,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef 
registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/build-image/distroless-iptables@sha256:13ac9b3c476d7290a4451d65d09d6016a2cb89836ffbfa4eae55b72731a22080 registry.k8s.io/build-image/distroless-iptables:v0.2.2],SizeBytes:7729580,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 18 09:41:08.535: INFO: Logging kubelet events for node 
e2e-9e86028ad1-674b9-minion-group-6qbb Mar 18 09:41:08.577: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-minion-group-6qbb Mar 18 09:41:08.641: INFO: konnectivity-agent-zpvjh started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded) Mar 18 09:41:08.641: INFO: Container konnectivity-agent ready: true, restart count 0 Mar 18 09:41:08.641: INFO: npd-v0.8.9-fhnmg started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded) Mar 18 09:41:08.641: INFO: Container node-problem-detector ready: true, restart count 0 Mar 18 09:41:08.641: INFO: kube-proxy-e2e-9e86028ad1-674b9-minion-group-6qbb started at 2023-03-18 09:22:32 +0000 UTC (0+1 container statuses recorded) Mar 18 09:41:08.641: INFO: Container kube-proxy ready: true, restart count 0 Mar 18 09:41:08.641: INFO: metadata-proxy-v0.1-tpg7b started at 2023-03-18 09:22:32 +0000 UTC (0+2 container statuses recorded) Mar 18 09:41:08.641: INFO: Container metadata-proxy ready: true, restart count 0 Mar 18 09:41:08.641: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Mar 18 09:41:08.853: INFO: Latency metrics for node e2e-9e86028ad1-674b9-minion-group-6qbb Mar 18 09:41:08.853: INFO: Logging node info for node e2e-9e86028ad1-674b9-minion-group-l6p2 Mar 18 09:41:08.896: INFO: Node Info: &Node{ObjectMeta:{e2e-9e86028ad1-674b9-minion-group-l6p2 598ace61-5854-4ce4-a4e9-6965d58f0e1a 44777 0 2023-03-18 09:22:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-minion-group-l6p2 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:e2e-9e86028ad1-674b9-minion-group-l6p2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-18 09:22:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2023-03-18 09:36:19 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-03-18 09:36:57 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-03-18 09:38:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-minion-group-l6p2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 
0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103865303040 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7806021632 0} {<nil>} 7623068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93478772582 0} {<nil>} 93478772582 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7543877632 0} {<nil>} 7367068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:19 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:19 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:True,LastHeartbeatTime:2023-03-18 09:36:19 +0000 UTC,LastTransitionTime:2023-03-18 09:31:17 +0000 UTC,Reason:DockerHung,Message:kernel: INFO: task docker:12345 blocked for more than 120 seconds.,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-03-18 09:36:19 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-03-18 09:36:19 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:19 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 
UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:38:19 +0000 UTC,LastTransitionTime:2023-03-18 09:22:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 09:38:19 +0000 UTC,LastTransitionTime:2023-03-18 09:22:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:38:19 +0000 UTC,LastTransitionTime:2023-03-18 09:22:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:38:19 +0000 UTC,LastTransitionTime:2023-03-18 09:22:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.5,},NodeAddress{Type:ExternalIP,Address:35.230.36.22,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-minion-group-l6p2.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-minion-group-l6p2.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c65da20697a4bfad03dfea306c4caca3,SystemUUID:c65da206-97a4-bfad-03df-ea306c4caca3,BootID:bab97861-0225-4291-912a-eb1db18f8ad7,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:72680802,},ContainerImage{Names:[registry.k8s.io/node-problem-detector/node-problem-detector@sha256:c4a75e50c3ee30daa78b7149de781f66236885850b1ea7b0c1a062af5019e019 
registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.9],SizeBytes:56740357,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:e468dddcd275163a042ab297b2d8c2aca50d5e148d2d22f3b6ba119e2f31fa79 registry.k8s.io/sig-storage/csi-provisioner:v3.4.0],SizeBytes:27427836,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 
registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/debian-base@sha256:ebda8587ec0f49eb88ee3a608ef018484908cbc5aa32556a0d78356088c185d4 registry.k8s.io/debian-base:v2.0.0],SizeBytes:21093264,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:9b2d9abda017c28b12f84a344f57cd73fbdb6c2bd7dd5bdd5018246ad1093ba6 registry.k8s.io/sig-storage/hostpathplugin:v1.11.0],SizeBytes:18233005,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns:v1.10.1],SizeBytes:16190758,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/apparmor-loader@sha256:c2ba46d9cf4549528f80d4850630b712372715e0c556d35d5c3016144365d882 
registry.k8s.io/e2e-test-images/apparmor-loader:1.4],SizeBytes:8695007,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 18 09:41:08.897: INFO: Logging kubelet events for node 
e2e-9e86028ad1-674b9-minion-group-l6p2 Mar 18 09:41:08.939: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-minion-group-l6p2 Mar 18 09:41:09.012: INFO: konnectivity-agent-879m7 started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded) Mar 18 09:41:09.012: INFO: Container konnectivity-agent ready: true, restart count 0 Mar 18 09:41:09.012: INFO: coredns-8f5847b64-8mvxr started at 2023-03-18 09:27:04 +0000 UTC (0+1 container statuses recorded) Mar 18 09:41:09.012: INFO: Container coredns ready: true, restart count 0 Mar 18 09:41:09.012: INFO: metadata-proxy-v0.1-424sh started at 2023-03-18 09:22:36 +0000 UTC (0+2 container statuses recorded) Mar 18 09:41:09.012: INFO: Container metadata-proxy ready: true, restart count 0 Mar 18 09:41:09.012: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Mar 18 09:41:09.012: INFO: kube-proxy-e2e-9e86028ad1-674b9-minion-group-l6p2 started at 2023-03-18 09:22:35 +0000 UTC (0+1 container statuses recorded) Mar 18 09:41:09.012: INFO: Container kube-proxy ready: true, restart count 0 Mar 18 09:41:09.012: INFO: npd-v0.8.9-zdpdp started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded) Mar 18 09:41:09.012: INFO: Container node-problem-detector ready: true, restart count 0 Mar 18 09:41:09.012: INFO: metrics-server-v0.5.2-57d47cbf5-gtdjb started at 2023-03-18 09:22:55 +0000 UTC (0+2 container statuses recorded) Mar 18 09:41:09.012: INFO: Container metrics-server ready: true, restart count 0 Mar 18 09:41:09.012: INFO: Container metrics-server-nanny ready: true, restart count 0 Mar 18 09:41:09.229: INFO: Latency metrics for node e2e-9e86028ad1-674b9-minion-group-l6p2 Mar 18 09:41:09.229: INFO: Logging node info for node e2e-9e86028ad1-674b9-minion-group-s3x0 Mar 18 09:41:09.272: INFO: Node Info: &Node{ObjectMeta:{e2e-9e86028ad1-674b9-minion-group-s3x0 4bd190bd-b287-42b8-bf6e-86b0dfbbe357 44155 0 2023-03-18 09:22:29 +0000 UTC <nil> <nil> 
map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-minion-group-s3x0 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:e2e-9e86028ad1-674b9-minion-group-s3x0 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-18 09:22:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-03-18 09:36:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-03-18 09:36:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-03-18 09:36:38 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-minion-group-s3x0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103865303040 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{7806021632 0} {<nil>} 7623068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93478772582 0} {<nil>} 93478772582 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7543877632 0} {<nil>} 7367068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:21 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:21 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:21 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:True,LastHeartbeatTime:2023-03-18 09:36:21 +0000 UTC,LastTransitionTime:2023-03-18 09:31:19 +0000 UTC,Reason:DockerHung,Message:kernel: INFO: task docker:12345 blocked for more than 120 seconds.,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-03-18 09:36:21 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-03-18 09:36:21 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:39 +0000 UTC,LastTransitionTime:2023-03-18 09:22:39 +0000 
UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:38 +0000 UTC,LastTransitionTime:2023-03-18 09:22:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:38 +0000 UTC,LastTransitionTime:2023-03-18 09:22:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:38 +0000 UTC,LastTransitionTime:2023-03-18 09:22:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:36:38 +0000 UTC,LastTransitionTime:2023-03-18 09:22:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.4,},NodeAddress{Type:ExternalIP,Address:35.197.75.196,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-minion-group-s3x0.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-minion-group-s3x0.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f48035c3fd56450624fe69b9577c2359,SystemUUID:f48035c3-fd56-4506-24fe-69b9577c2359,BootID:f7e24489-9538-4f10-8ca6-393d7dcc2190,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e 
registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:72680802,},ContainerImage{Names:[registry.k8s.io/node-problem-detector/node-problem-detector@sha256:c4a75e50c3ee30daa78b7149de781f66236885850b1ea7b0c1a062af5019e019 registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.9],SizeBytes:56740357,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:e468dddcd275163a042ab297b2d8c2aca50d5e148d2d22f3b6ba119e2f31fa79 registry.k8s.io/sig-storage/csi-provisioner:v3.4.0],SizeBytes:27427836,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 
registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[gcr.io/k8s-authenticated-test/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/k8s-authenticated-test/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:9b2d9abda017c28b12f84a344f57cd73fbdb6c2bd7dd5bdd5018246ad1093ba6 registry.k8s.io/sig-storage/hostpathplugin:v1.11.0],SizeBytes:18233005,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 
registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns:v1.10.1],SizeBytes:16190758,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/apparmor-loader@sha256:c2ba46d9cf4549528f80d4850630b712372715e0c556d35d5c3016144365d882 registry.k8s.io/e2e-test-images/apparmor-loader:1.4],SizeBytes:8695007,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 
registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 18 09:41:09.272: INFO: Logging kubelet events for node e2e-9e86028ad1-674b9-minion-group-s3x0 Mar 18 09:41:09.314: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-minion-group-s3x0 Mar 18 09:41:09.364: INFO: volume-snapshot-controller-0 started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:41:09.364: INFO: Container volume-snapshot-controller ready: true, restart count 0 Mar 18 09:41:09.364: INFO: kube-dns-autoscaler-7b444c59c9-bfphp started at 2023-03-18 09:27:27 +0000 UTC (0+1 container statuses recorded) Mar 18 09:41:09.364: INFO: Container autoscaler ready: true, restart count 0 Mar 18 09:41:09.364: INFO: metadata-proxy-v0.1-5k8j4 started at 2023-03-18 09:22:30 +0000 UTC (0+2 container statuses recorded) Mar 18 09:41:09.364: INFO: Container metadata-proxy ready: true, restart count 
0 Mar 18 09:41:09.364: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Mar 18 09:41:09.364: INFO: kube-proxy-e2e-9e86028ad1-674b9-minion-group-s3x0 started at 2023-03-18 09:22:30 +0000 UTC (0+1 container statuses recorded) Mar 18 09:41:09.364: INFO: Container kube-proxy ready: true, restart count 0 Mar 18 09:41:09.364: INFO: konnectivity-agent-hv8gl started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:41:09.364: INFO: Container konnectivity-agent ready: true, restart count 0 Mar 18 09:41:09.364: INFO: explicit-root-uid started at 2023-03-18 09:36:07 +0000 UTC (0+1 container statuses recorded) Mar 18 09:41:09.364: INFO: Container explicit-root-uid ready: false, restart count 0 Mar 18 09:41:09.364: INFO: npd-v0.8.9-xsl94 started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:41:09.364: INFO: Container node-problem-detector ready: true, restart count 0 Mar 18 09:41:09.364: INFO: coredns-8f5847b64-6lvkh started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:41:09.364: INFO: Container coredns ready: true, restart count 0 Mar 18 09:41:09.364: INFO: l7-default-backend-856d874f49-wj97r started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:41:09.364: INFO: Container default-http-backend ready: true, restart count 0 Mar 18 09:41:09.605: INFO: Latency metrics for node e2e-9e86028ad1-674b9-minion-group-s3x0 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/18/23 09:41:09.605 (1.641s) < Exit [DeferCleanup (Each)] [sig-node] Security Context - dump namespaces | framework.go:209 @ 03/18/23 09:41:09.605 (1.641s) > Enter [DeferCleanup (Each)] [sig-node] Security Context - tear down framework | framework.go:206 @ 03/18/23 09:41:09.605 STEP: Destroying namespace "security-context-test-3115" for this suite. 
- test/e2e/framework/framework.go:351 @ 03/18/23 09:41:09.605 < Exit [DeferCleanup (Each)] [sig-node] Security Context - tear down framework | framework.go:206 @ 03/18/23 09:41:09.649 (44ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/18/23 09:41:09.649 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/18/23 09:41:09.649 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sSecurity\sContext\sWhen\screating\sa\scontainer\swith\srunAsNonRoot\sshould\snot\srun\swithout\sa\sspecified\suser\sID$'
[FAILED] timed out waiting for the condition In [It] at: test/e2e/common/node/security_context.go:424 @ 03/18/23 09:28:45.291 (from junit_01.xml)
> Enter [BeforeEach] [sig-node] Security Context - set up framework | framework.go:191 @ 03/18/23 09:23:44.396 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:211 @ 03/18/23 09:23:44.396 Mar 18 09:23:44.396: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename security-context-test - test/e2e/framework/framework.go:250 @ 03/18/23 09:23:44.398 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 03/18/23 09:23:44.663 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:262 @ 03/18/23 09:23:44.754 < Exit [BeforeEach] [sig-node] Security Context - set up framework | framework.go:191 @ 03/18/23 09:23:44.916 (520ms) > Enter [BeforeEach] [sig-node] Security Context - test/e2e/framework/metrics/init/init.go:33 @ 03/18/23 09:23:44.916 < Exit [BeforeEach] [sig-node] Security Context - test/e2e/framework/metrics/init/init.go:33 @ 03/18/23 09:23:44.916 (0s) > Enter [BeforeEach] [sig-node] Security Context - test/e2e/common/node/security_context.go:50 @ 03/18/23 09:23:44.916 < Exit [BeforeEach] [sig-node] Security Context - test/e2e/common/node/security_context.go:50 @ 03/18/23 09:23:44.916 (0s) > Enter [It] should not run without a specified user ID - test/e2e/common/node/security_context.go:418 @ 03/18/23 09:23:44.916 Automatically polling progress: [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID (Spec Runtime: 5m0.52s) test/e2e/common/node/security_context.go:418 In [It] (Node Runtime: 5m0s) test/e2e/common/node/security_context.go:418 Spec Goroutine goroutine 237 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x7fa6f42b3030, 0xc002048570}, 0xc00250e138, 0x2bc6eca?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:205 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fa6f42b3030, 0xc002048570}, 0x50?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/poll.go:260 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fa6f42b3030, 0xc002048570}, 0xc000516c00?, 0x7fa6f42b3030?, 0xc002048570?) vendor/k8s.io/apimachinery/pkg/util/wait/poll.go:85 k8s.io/kubernetes/test/e2e/framework/pod.(*PodClient).WaitForErrorEventOrSuccess(0xc001452210?, {0x7fa6f42b3030?, 0xc002048570?}, 0x2e?) test/e2e/framework/pod/pod_client.go:261 > k8s.io/kubernetes/test/e2e/common/node.glob..func21.4.5({0x7fa6f42b3030, 0xc002048570}) test/e2e/common/node/security_context.go:423 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x727dd60?, 0xc002048570}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:456 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:863 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:850 Mar 18 09:28:45.291: INFO: Unexpected error: <wait.errInterrupted>: timed out waiting for the condition { cause: <*errors.errorString | 0xc00017dbe0>{ s: "timed out waiting for the condition", }, } [FAILED] timed out waiting for the condition In [It] at: test/e2e/common/node/security_context.go:424 @ 03/18/23 09:28:45.291 < Exit [It] should not run without a specified user ID - test/e2e/common/node/security_context.go:418 @ 03/18/23 09:28:45.292 (5m0.376s) > Enter [AfterEach] [sig-node] Security Context - test/e2e/framework/node/init/init.go:33 @ 03/18/23 09:28:45.292 Mar 18 09:28:45.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-node] Security Context - test/e2e/framework/node/init/init.go:33 @ 03/18/23 09:28:45.394 (102ms) > Enter [DeferCleanup (Each)] [sig-node] 
Security Context - test/e2e/framework/metrics/init/init.go:35 @ 03/18/23 09:28:45.394 < Exit [DeferCleanup (Each)] [sig-node] Security Context - test/e2e/framework/metrics/init/init.go:35 @ 03/18/23 09:28:45.394 (0s) > Enter [DeferCleanup (Each)] [sig-node] Security Context - dump namespaces | framework.go:209 @ 03/18/23 09:28:45.394 STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/18/23 09:28:45.394 STEP: Collecting events from namespace "security-context-test-8846". - test/e2e/framework/debug/dump.go:42 @ 03/18/23 09:28:45.394 STEP: Found 1 events. - test/e2e/framework/debug/dump.go:46 @ 03/18/23 09:28:45.448 Mar 18 09:28:45.448: INFO: At 2023-03-18 09:23:45 +0000 UTC - event for implicit-root-uid: {default-scheduler } Scheduled: Successfully assigned security-context-test-8846/implicit-root-uid to e2e-9e86028ad1-674b9-minion-group-6qbb Mar 18 09:28:45.493: INFO: POD NODE PHASE GRACE CONDITIONS Mar 18 09:28:45.493: INFO: implicit-root-uid e2e-9e86028ad1-674b9-minion-group-6qbb Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-18 09:23:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-18 09:23:45 +0000 UTC ContainersNotReady containers with unready status: [implicit-root-uid]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-18 09:23:45 +0000 UTC ContainersNotReady containers with unready status: [implicit-root-uid]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-18 09:23:45 +0000 UTC }] Mar 18 09:28:45.493: INFO: Mar 18 09:28:45.593: INFO: Unable to fetch security-context-test-8846/implicit-root-uid/implicit-root-uid logs: the server rejected our request for an unknown reason (get pods implicit-root-uid) Mar 18 09:28:45.646: INFO: Logging node info for node e2e-9e86028ad1-674b9-master Mar 18 09:28:45.691: INFO: Node Info: &Node{ObjectMeta:{e2e-9e86028ad1-674b9-master 267e5023-e569-49ff-9163-80ff52b2e553 14305 0 2023-03-18 09:22:32 +0000 UTC <nil> <nil> 
map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-18 09:22:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-03-18 09:27:58 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20617822208 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3848937472 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{18556039957 0} {<nil>} 18556039957 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3586793472 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:27:58 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 
09:27:58 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:27:58 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:27:58 +0000 UTC,LastTransitionTime:2023-03-18 09:22:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.2,},NodeAddress{Type:ExternalIP,Address:34.83.71.20,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-master.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-master.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:42d0f710c860b2bd41ccdaf5ca173acd,SystemUUID:42d0f710-c860-b2bd-41cc-daf5ca173acd,BootID:16a230c9-f4cf-4c24-8530-18ce9126f638,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:121906531,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:113849341,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 
registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:59679728,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 18 09:28:45.691: INFO: Logging kubelet events for node e2e-9e86028ad1-674b9-master Mar 18 09:28:45.736: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-master Mar 18 09:28:45.809: INFO: l7-lb-controller-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:22:07 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:45.809: INFO: Container l7-lb-controller ready: true, restart count 2 Mar 18 09:28:45.809: INFO: metadata-proxy-v0.1-nnl66 started at 2023-03-18 09:22:38 +0000 UTC (0+2 container statuses recorded) Mar 18 09:28:45.809: 
INFO: Container metadata-proxy ready: true, restart count 0 Mar 18 09:28:45.809: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Mar 18 09:28:45.809: INFO: etcd-server-events-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:45.809: INFO: Container etcd-container ready: true, restart count 0 Mar 18 09:28:45.809: INFO: kube-apiserver-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:45.809: INFO: Container kube-apiserver ready: true, restart count 0 Mar 18 09:28:45.809: INFO: kube-addon-manager-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:22:06 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:45.809: INFO: Container kube-addon-manager ready: true, restart count 0 Mar 18 09:28:45.809: INFO: kube-scheduler-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:45.809: INFO: Container kube-scheduler ready: true, restart count 0 Mar 18 09:28:45.809: INFO: etcd-server-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:45.809: INFO: Container etcd-container ready: true, restart count 0 Mar 18 09:28:45.809: INFO: konnectivity-server-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:45.809: INFO: Container konnectivity-server-container ready: true, restart count 0 Mar 18 09:28:45.809: INFO: kube-controller-manager-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:45.809: INFO: Container kube-controller-manager ready: true, restart count 1 Mar 18 09:28:46.031: INFO: Latency metrics for node e2e-9e86028ad1-674b9-master Mar 18 09:28:46.031: INFO: Logging node info for node e2e-9e86028ad1-674b9-minion-group-6qbb Mar 18 09:28:46.084: INFO: Node Info: 
&Node{ObjectMeta:{e2e-9e86028ad1-674b9-minion-group-6qbb 8c1a6f16-41d1-4196-bf43-efe27f8d8a66 15477 0 2023-03-18 09:22:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-minion-group-6qbb kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:e2e-9e86028ad1-674b9-minion-group-6qbb topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-1354":"e2e-9e86028ad1-674b9-minion-group-6qbb","csi-hostpath-ephemeral-8545":"e2e-9e86028ad1-674b9-minion-group-6qbb"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-03-18 09:22:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-03-18 09:22:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-18 09:27:47 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2023-03-18 09:27:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-03-18 09:28:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-minion-group-6qbb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103865303040 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7806021632 0} {<nil>} 7623068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93478772582 0} {<nil>} 93478772582 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7543877632 0} {<nil>} 7367068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-03-18 09:27:54 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-03-18 09:27:54 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-03-18 09:27:54 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-03-18 09:27:54 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-03-18 09:27:54 +0000 
UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-03-18 09:27:54 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:27:47 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 09:27:47 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:27:47 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:27:47 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.3,},NodeAddress{Type:ExternalIP,Address:35.197.71.72,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-minion-group-6qbb.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-minion-group-6qbb.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8effcb9d241b10eacb89037ecd76b44c,SystemUUID:8effcb9d-241b-10ea-cb89-037ecd76b44c,BootID:8ef64925-c34b-4653-8945-c2170edc6327,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:72680802,},ContainerImage{Names:[registry.k8s.io/node-problem-detector/node-problem-detector@sha256:c4a75e50c3ee30daa78b7149de781f66236885850b1ea7b0c1a062af5019e019 registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.9],SizeBytes:56740357,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e 
registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:e468dddcd275163a042ab297b2d8c2aca50d5e148d2d22f3b6ba119e2f31fa79 registry.k8s.io/sig-storage/csi-provisioner:v3.4.0],SizeBytes:27427836,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[gcr.io/k8s-authenticated-test/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 
gcr.io/k8s-authenticated-test/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:9b2d9abda017c28b12f84a344f57cd73fbdb6c2bd7dd5bdd5018246ad1093ba6 registry.k8s.io/sig-storage/hostpathplugin:v1.11.0],SizeBytes:18233005,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns:v1.10.1],SizeBytes:16190758,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/build-image/distroless-iptables@sha256:13ac9b3c476d7290a4451d65d09d6016a2cb89836ffbfa4eae55b72731a22080 registry.k8s.io/build-image/distroless-iptables:v0.2.2],SizeBytes:7729580,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-1354^0d38b337-c56f-11ed-9f24-060cb12868e0],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-1354^0d38b337-c56f-11ed-9f24-060cb12868e0,DevicePath:,},},Config:nil,},} Mar 18 09:28:46.084: INFO: Logging kubelet events for node e2e-9e86028ad1-674b9-minion-group-6qbb Mar 18 09:28:46.125: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-minion-group-6qbb Mar 18 09:28:46.233: INFO: netserver-0 started at 2023-03-18 09:26:56 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:46.233: INFO: Container webserver ready: true, restart count 0 Mar 18 09:28:46.233: INFO: inline-volume-tester-7n6ks started at 2023-03-18 09:28:19 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:46.233: INFO: Container csi-volume-tester ready: true, restart count 0 Mar 18 09:28:46.233: INFO: pod3 started at 2023-03-18 09:28:30 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:46.233: INFO: Container agnhost ready: true, restart count 0 Mar 18 09:28:46.233: INFO: kube-proxy-e2e-9e86028ad1-674b9-minion-group-6qbb started at 2023-03-18 09:22:32 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:46.233: INFO: Container kube-proxy ready: true, restart count 0 Mar 18 09:28:46.233: INFO: hostexec-e2e-9e86028ad1-674b9-minion-group-6qbb-gg4q5 started 
at 2023-03-18 09:28:41 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:46.233: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:28:46.233: INFO: pod-service-account-defaultsa-nomountspec started at 2023-03-18 09:28:45 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:46.233: INFO: Container token-test ready: false, restart count 0 Mar 18 09:28:46.233: INFO: metadata-proxy-v0.1-tpg7b started at 2023-03-18 09:22:32 +0000 UTC (0+2 container statuses recorded) Mar 18 09:28:46.233: INFO: Container metadata-proxy ready: true, restart count 0 Mar 18 09:28:46.233: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Mar 18 09:28:46.233: INFO: e2e-host-exec started at 2023-03-18 09:28:32 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:46.233: INFO: Container e2e-host-exec ready: true, restart count 0 Mar 18 09:28:46.233: INFO: konnectivity-agent-zpvjh started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:46.233: INFO: Container konnectivity-agent ready: true, restart count 0 Mar 18 09:28:46.233: INFO: webserver-pod started at 2023-03-18 09:28:16 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:46.233: INFO: Container agnhost ready: true, restart count 0 Mar 18 09:28:46.233: INFO: hostexec-e2e-9e86028ad1-674b9-minion-group-6qbb-rnkx5 started at 2023-03-18 09:28:44 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:46.233: INFO: Container agnhost-container ready: false, restart count 0 Mar 18 09:28:46.233: INFO: csi-hostpathplugin-0 started at 2023-03-18 09:27:04 +0000 UTC (0+7 container statuses recorded) Mar 18 09:28:46.233: INFO: Container csi-attacher ready: true, restart count 0 Mar 18 09:28:46.233: INFO: Container csi-provisioner ready: true, restart count 0 Mar 18 09:28:46.233: INFO: Container csi-resizer ready: true, restart count 0 Mar 18 09:28:46.233: INFO: Container csi-snapshotter ready: true, restart count 0 Mar 18 09:28:46.233: INFO: Container 
hostpath ready: true, restart count 0 Mar 18 09:28:46.233: INFO: Container liveness-probe ready: true, restart count 0 Mar 18 09:28:46.233: INFO: Container node-driver-registrar ready: true, restart count 0 Mar 18 09:28:46.233: INFO: csi-hostpathplugin-0 started at 2023-03-18 09:28:19 +0000 UTC (0+7 container statuses recorded) Mar 18 09:28:46.233: INFO: Container csi-attacher ready: true, restart count 0 Mar 18 09:28:46.233: INFO: Container csi-provisioner ready: true, restart count 0 Mar 18 09:28:46.233: INFO: Container csi-resizer ready: true, restart count 0 Mar 18 09:28:46.233: INFO: Container csi-snapshotter ready: true, restart count 0 Mar 18 09:28:46.233: INFO: Container hostpath ready: true, restart count 0 Mar 18 09:28:46.233: INFO: Container liveness-probe ready: true, restart count 0 Mar 18 09:28:46.233: INFO: Container node-driver-registrar ready: true, restart count 0 Mar 18 09:28:46.233: INFO: pod-service-account-nomountsa-nomountspec started at 2023-03-18 09:28:45 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:46.233: INFO: Container token-test ready: false, restart count 0 Mar 18 09:28:46.233: INFO: implicit-root-uid started at 2023-03-18 09:23:45 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:46.233: INFO: Container implicit-root-uid ready: false, restart count 0 Mar 18 09:28:46.233: INFO: inline-volume-tester-z6r4v started at 2023-03-18 09:27:08 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:46.233: INFO: Container csi-volume-tester ready: true, restart count 0 Mar 18 09:28:46.233: INFO: pod1 started at 2023-03-18 09:28:24 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:46.233: INFO: Container agnhost ready: true, restart count 0 Mar 18 09:28:46.233: INFO: npd-v0.8.9-fhnmg started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:46.233: INFO: Container node-problem-detector ready: true, restart count 0 Mar 18 09:28:46.233: INFO: pod2 started at 2023-03-18 09:28:26 +0000 UTC 
(0+1 container statuses recorded) Mar 18 09:28:46.233: INFO: Container agnhost ready: true, restart count 0 Mar 18 09:28:46.509: INFO: Latency metrics for node e2e-9e86028ad1-674b9-minion-group-6qbb Mar 18 09:28:46.509: INFO: Logging node info for node e2e-9e86028ad1-674b9-minion-group-l6p2 Mar 18 09:28:46.554: INFO: Node Info: &Node{ObjectMeta:{e2e-9e86028ad1-674b9-minion-group-l6p2 598ace61-5854-4ce4-a4e9-6965d58f0e1a 16454 0 2023-03-18 09:22:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-minion-group-l6p2 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:e2e-9e86028ad1-674b9-minion-group-l6p2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-2516":"e2e-9e86028ad1-674b9-minion-group-l6p2","csi-hostpath-volume-expand-3698":"e2e-9e86028ad1-674b9-minion-group-l6p2","csi-hostpath-volume-expand-3911":"e2e-9e86028ad1-674b9-minion-group-l6p2"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-18 09:22:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2023-03-18 09:27:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-03-18 09:28:45 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2023-03-18 09:28:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-minion-group-l6p2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103865303040 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7806021632 0} {<nil>} 7623068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93478772582 0} {<nil>} 93478772582 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7543877632 0} {<nil>} 7367068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-03-18 09:27:55 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not 
read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-03-18 09:27:55 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-03-18 09:27:55 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-03-18 09:27:55 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-03-18 09:27:55 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-03-18 09:27:55 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:28:45 +0000 UTC,LastTransitionTime:2023-03-18 09:22:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 09:28:45 +0000 UTC,LastTransitionTime:2023-03-18 09:22:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:28:45 +0000 UTC,LastTransitionTime:2023-03-18 09:22:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:28:45 +0000 UTC,LastTransitionTime:2023-03-18 09:22:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.5,},NodeAddress{Type:ExternalIP,Address:35.230.36.22,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-minion-group-l6p2.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-minion-group-l6p2.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c65da20697a4bfad03dfea306c4caca3,SystemUUID:c65da206-97a4-bfad-03df-ea306c4caca3,BootID:bab97861-0225-4291-912a-eb1db18f8ad7,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:72680802,},ContainerImage{Names:[registry.k8s.io/node-problem-detector/node-problem-detector@sha256:c4a75e50c3ee30daa78b7149de781f66236885850b1ea7b0c1a062af5019e019 registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.9],SizeBytes:56740357,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e 
registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:e468dddcd275163a042ab297b2d8c2aca50d5e148d2d22f3b6ba119e2f31fa79 registry.k8s.io/sig-storage/csi-provisioner:v3.4.0],SizeBytes:27427836,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 
registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:9b2d9abda017c28b12f84a344f57cd73fbdb6c2bd7dd5bdd5018246ad1093ba6 registry.k8s.io/sig-storage/hostpathplugin:v1.11.0],SizeBytes:18233005,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns:v1.10.1],SizeBytes:16190758,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/apparmor-loader@sha256:c2ba46d9cf4549528f80d4850630b712372715e0c556d35d5c3016144365d882 registry.k8s.io/e2e-test-images/apparmor-loader:1.4],SizeBytes:8695007,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 
registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-volume-expand-3698^463c1381-c56f-11ed-9483-4e3cf5283db7 kubernetes.io/csi/csi-hostpath-volume-expand-3911^3626015a-c56f-11ed-a885-7ecee8e0cded],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-volume-expand-3911^3626015a-c56f-11ed-a885-7ecee8e0cded,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-volume-expand-3698^463c1381-c56f-11ed-9483-4e3cf5283db7,DevicePath:,},},Config:nil,},} Mar 18 09:28:46.555: INFO: Logging kubelet events for node e2e-9e86028ad1-674b9-minion-group-l6p2 Mar 18 09:28:46.599: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-minion-group-l6p2 Mar 18 09:28:46.742: INFO: konnectivity-agent-879m7 started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:46.742: INFO: Container konnectivity-agent ready: true, restart count 0 Mar 18 09:28:46.742: INFO: coredns-8f5847b64-8mvxr started at 2023-03-18 09:27:04 
+0000 UTC (0+1 container statuses recorded) Mar 18 09:28:46.742: INFO: Container coredns ready: true, restart count 0 Mar 18 09:28:46.742: INFO: metadata-proxy-v0.1-424sh started at 2023-03-18 09:22:36 +0000 UTC (0+2 container statuses recorded) Mar 18 09:28:46.742: INFO: Container metadata-proxy ready: true, restart count 0 Mar 18 09:28:46.742: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Mar 18 09:28:46.742: INFO: kube-proxy-e2e-9e86028ad1-674b9-minion-group-l6p2 started at 2023-03-18 09:22:35 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:46.742: INFO: Container kube-proxy ready: true, restart count 0 Mar 18 09:28:46.742: INFO: npd-v0.8.9-zdpdp started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:46.742: INFO: Container node-problem-detector ready: true, restart count 0 Mar 18 09:28:46.742: INFO: csi-hostpathplugin-0 started at 2023-03-18 09:28:13 +0000 UTC (0+7 container statuses recorded) Mar 18 09:28:46.742: INFO: Container csi-attacher ready: true, restart count 0 Mar 18 09:28:46.742: INFO: Container csi-provisioner ready: true, restart count 0 Mar 18 09:28:46.742: INFO: Container csi-resizer ready: true, restart count 0 Mar 18 09:28:46.742: INFO: Container csi-snapshotter ready: true, restart count 0 Mar 18 09:28:46.742: INFO: Container hostpath ready: true, restart count 0 Mar 18 09:28:46.742: INFO: Container liveness-probe ready: true, restart count 0 Mar 18 09:28:46.742: INFO: Container node-driver-registrar ready: true, restart count 0 Mar 18 09:28:46.742: INFO: send-events-ea072554-2752-48da-811e-c2c81f3284b2 started at 2023-03-18 09:27:51 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:46.743: INFO: Container p ready: true, restart count 0 Mar 18 09:28:46.743: INFO: hostexec-e2e-9e86028ad1-674b9-minion-group-l6p2-4c79f started at 2023-03-18 09:28:21 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:46.743: INFO: Container agnhost-container ready: true, restart count 
0 Mar 18 09:28:46.743: INFO: pod-ae80f202-c522-40ea-bbbe-9a1b4d81e561 started at 2023-03-18 09:28:44 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:46.743: INFO: Container write-pod ready: false, restart count 0 Mar 18 09:28:46.743: INFO: pod-f6c9eb86-d58e-4c45-8d77-c4fa3e3dc9a9 started at 2023-03-18 09:28:17 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:46.743: INFO: Container write-pod ready: true, restart count 0 Mar 18 09:28:46.743: INFO: pause-pod-1 started at 2023-03-18 09:28:20 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:46.743: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:28:46.743: INFO: pod-ephm-test-configmap-g4kq started at 2023-03-18 09:27:13 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:46.743: INFO: Container test-container-subpath-configmap-g4kq ready: false, restart count 0 Mar 18 09:28:46.743: INFO: hostexec-e2e-9e86028ad1-674b9-minion-group-l6p2-4fpgz started at 2023-03-18 09:28:42 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:46.743: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:28:46.743: INFO: metrics-server-v0.5.2-57d47cbf5-gtdjb started at 2023-03-18 09:22:55 +0000 UTC (0+2 container statuses recorded) Mar 18 09:28:46.743: INFO: Container metrics-server ready: true, restart count 0 Mar 18 09:28:46.743: INFO: Container metrics-server-nanny ready: true, restart count 0 Mar 18 09:28:46.743: INFO: netserver-1 started at 2023-03-18 09:26:56 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:46.743: INFO: Container webserver ready: true, restart count 0 Mar 18 09:28:46.743: INFO: startup-8f5bbedc-47d7-4e46-a6b3-3541a428c20e started at 2023-03-18 09:28:01 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:46.743: INFO: Container busybox ready: true, restart count 0 Mar 18 09:28:46.743: INFO: csi-hostpathplugin-0 started at 2023-03-18 09:28:04 +0000 UTC (0+7 container statuses recorded) Mar 18 09:28:46.743: INFO: Container 
csi-attacher ready: true, restart count 0 Mar 18 09:28:46.743: INFO: Container csi-provisioner ready: true, restart count 0 Mar 18 09:28:46.743: INFO: Container csi-resizer ready: true, restart count 0 Mar 18 09:28:46.743: INFO: Container csi-snapshotter ready: true, restart count 0 Mar 18 09:28:46.743: INFO: Container hostpath ready: true, restart count 0 Mar 18 09:28:46.743: INFO: Container liveness-probe ready: true, restart count 0 Mar 18 09:28:46.743: INFO: Container node-driver-registrar ready: true, restart count 0 Mar 18 09:28:46.743: INFO: pod-a93c9d79-407a-41c6-a3f6-e7ac1b5c97cb started at 2023-03-18 09:28:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:46.743: INFO: Container write-pod ready: true, restart count 0 Mar 18 09:28:46.743: INFO: test-container-pod started at 2023-03-18 09:27:17 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:46.743: INFO: Container webserver ready: true, restart count 0 Mar 18 09:28:46.743: INFO: csi-hostpathplugin-0 started at 2023-03-18 09:28:40 +0000 UTC (0+7 container statuses recorded) Mar 18 09:28:46.743: INFO: Container csi-attacher ready: true, restart count 0 Mar 18 09:28:46.743: INFO: Container csi-provisioner ready: true, restart count 0 Mar 18 09:28:46.743: INFO: Container csi-resizer ready: true, restart count 0 Mar 18 09:28:46.743: INFO: Container csi-snapshotter ready: true, restart count 0 Mar 18 09:28:46.743: INFO: Container hostpath ready: true, restart count 0 Mar 18 09:28:46.743: INFO: Container liveness-probe ready: true, restart count 0 Mar 18 09:28:46.743: INFO: Container node-driver-registrar ready: true, restart count 0 Mar 18 09:28:46.743: INFO: inline-volume-tester-7rdhr started at 2023-03-18 09:28:04 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:46.743: INFO: Container csi-volume-tester ready: true, restart count 0 Mar 18 09:28:47.177: INFO: Latency metrics for node e2e-9e86028ad1-674b9-minion-group-l6p2 Mar 18 09:28:47.177: INFO: Logging node info for node 
e2e-9e86028ad1-674b9-minion-group-s3x0 Mar 18 09:28:47.224: INFO: Node Info: &Node{ObjectMeta:{e2e-9e86028ad1-674b9-minion-group-s3x0 4bd190bd-b287-42b8-bf6e-86b0dfbbe357 16108 0 2023-03-18 09:22:29 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-minion-group-s3x0 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:e2e-9e86028ad1-674b9-minion-group-s3x0 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-fsgroup-policy-1380":"e2e-9e86028ad1-674b9-minion-group-s3x0"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-18 09:22:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-03-18 09:27:47 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-03-18 09:27:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-03-18 09:28:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-minion-group-s3x0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103865303040 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7806021632 0} {<nil>} 7623068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93478772582 0} {<nil>} 93478772582 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7543877632 0} {<nil>} 7367068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-03-18 09:27:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-03-18 09:27:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-03-18 09:27:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-03-18 09:27:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-03-18 09:27:49 
+0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-03-18 09:27:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:39 +0000 UTC,LastTransitionTime:2023-03-18 09:22:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:28:38 +0000 UTC,LastTransitionTime:2023-03-18 09:22:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 09:28:38 +0000 UTC,LastTransitionTime:2023-03-18 09:22:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:28:38 +0000 UTC,LastTransitionTime:2023-03-18 09:22:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:28:38 +0000 UTC,LastTransitionTime:2023-03-18 09:22:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.4,},NodeAddress{Type:ExternalIP,Address:35.197.75.196,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-minion-group-s3x0.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-minion-group-s3x0.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f48035c3fd56450624fe69b9577c2359,SystemUUID:f48035c3-fd56-4506-24fe-69b9577c2359,BootID:f7e24489-9538-4f10-8ca6-393d7dcc2190,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:72680802,},ContainerImage{Names:[registry.k8s.io/node-problem-detector/node-problem-detector@sha256:c4a75e50c3ee30daa78b7149de781f66236885850b1ea7b0c1a062af5019e019 registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.9],SizeBytes:56740357,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e 
registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:e468dddcd275163a042ab297b2d8c2aca50d5e148d2d22f3b6ba119e2f31fa79 registry.k8s.io/sig-storage/csi-provisioner:v3.4.0],SizeBytes:27427836,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 
registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[gcr.io/k8s-authenticated-test/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/k8s-authenticated-test/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:9b2d9abda017c28b12f84a344f57cd73fbdb6c2bd7dd5bdd5018246ad1093ba6 registry.k8s.io/sig-storage/hostpathplugin:v1.11.0],SizeBytes:18233005,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns:v1.10.1],SizeBytes:16190758,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 
registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 18 09:28:47.225: INFO: Logging kubelet events for node e2e-9e86028ad1-674b9-minion-group-s3x0 Mar 18 09:28:47.274: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-minion-group-s3x0 Mar 18 09:28:47.388: INFO: coredns-8f5847b64-6lvkh started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:47.388: INFO: Container coredns ready: true, restart count 0 Mar 18 09:28:47.388: INFO: l7-default-backend-856d874f49-wj97r started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:47.388: INFO: Container default-http-backend ready: true, restart count 0 Mar 18 09:28:47.388: 
INFO: all-pods-removed-87747 started at 2023-03-18 09:28:22 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:47.388: INFO: Container c ready: true, restart count 0 Mar 18 09:28:47.388: INFO: pod-configmaps-b7554d33-79ed-490f-9888-c3f92390bd07 started at 2023-03-18 09:28:33 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:47.388: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:28:47.388: INFO: pod-handle-http-request started at 2023-03-18 09:28:46 +0000 UTC (0+2 container statuses recorded) Mar 18 09:28:47.388: INFO: Container container-handle-http-request ready: false, restart count 0 Mar 18 09:28:47.388: INFO: Container container-handle-https-request ready: false, restart count 0 Mar 18 09:28:47.388: INFO: sysctl-cb4a37ed-32a7-4e6c-a5b3-531e69be0a6d started at 2023-03-18 09:25:36 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:47.388: INFO: Container test-container ready: false, restart count 0 Mar 18 09:28:47.388: INFO: volume-snapshot-controller-0 started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:47.388: INFO: Container volume-snapshot-controller ready: true, restart count 0 Mar 18 09:28:47.388: INFO: pod-service-account-mountsa-nomountspec started at 2023-03-18 09:28:45 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:47.388: INFO: Container token-test ready: false, restart count 0 Mar 18 09:28:47.388: INFO: kube-dns-autoscaler-7b444c59c9-bfphp started at 2023-03-18 09:27:27 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:47.388: INFO: Container autoscaler ready: true, restart count 0 Mar 18 09:28:47.388: INFO: pfpod started at 2023-03-18 09:28:35 +0000 UTC (0+2 container statuses recorded) Mar 18 09:28:47.388: INFO: Container portforwardtester ready: false, restart count 0 Mar 18 09:28:47.388: INFO: Container readiness ready: false, restart count 0 Mar 18 09:28:47.388: INFO: pod-failure-ignore-64tr7 started at 2023-03-18 09:28:40 +0000 UTC (0+1 container 
statuses recorded) Mar 18 09:28:47.388: INFO: Container c ready: false, restart count 0 Mar 18 09:28:47.388: INFO: metadata-proxy-v0.1-5k8j4 started at 2023-03-18 09:22:30 +0000 UTC (0+2 container statuses recorded) Mar 18 09:28:47.388: INFO: Container metadata-proxy ready: true, restart count 0 Mar 18 09:28:47.388: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Mar 18 09:28:47.388: INFO: netserver-2 started at 2023-03-18 09:26:56 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:47.388: INFO: Container webserver ready: true, restart count 0 Mar 18 09:28:47.388: INFO: pod-failure-ignore-pwf7q started at 2023-03-18 09:28:36 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:47.388: INFO: Container c ready: false, restart count 0 Mar 18 09:28:47.388: INFO: kube-proxy-e2e-9e86028ad1-674b9-minion-group-s3x0 started at 2023-03-18 09:22:30 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:47.388: INFO: Container kube-proxy ready: true, restart count 0 Mar 18 09:28:47.388: INFO: pod-service-account-nomountsa started at 2023-03-18 09:28:45 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:47.388: INFO: Container token-test ready: true, restart count 0 Mar 18 09:28:47.388: INFO: konnectivity-agent-hv8gl started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:47.388: INFO: Container konnectivity-agent ready: true, restart count 0 Mar 18 09:28:47.388: INFO: all-pods-removed-4j4v7 started at 2023-03-18 09:28:22 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:47.388: INFO: Container c ready: true, restart count 0 Mar 18 09:28:47.388: INFO: pod-failure-ignore-zr8fm started at 2023-03-18 09:28:32 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:47.388: INFO: Container c ready: false, restart count 0 Mar 18 09:28:47.388: INFO: pod-subpath-test-configmap-vmx5 started at 2023-03-18 09:28:36 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:47.388: INFO: Container 
test-container-subpath-configmap-vmx5 ready: true, restart count 0 Mar 18 09:28:47.388: INFO: pod-service-account-nomountsa-mountspec started at 2023-03-18 09:28:45 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:47.388: INFO: Container token-test ready: false, restart count 0 Mar 18 09:28:47.388: INFO: ephemeral-containers-target-pod started at 2023-03-18 09:28:29 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:47.388: INFO: Container test-container-1 ready: true, restart count 0 Mar 18 09:28:47.388: INFO: pod-projected-configmaps-544ad558-563f-4a82-bfeb-abd260ef4169 started at 2023-03-18 09:28:43 +0000 UTC (0+3 container statuses recorded) Mar 18 09:28:47.388: INFO: Container createcm-volume-test ready: false, restart count 0 Mar 18 09:28:47.388: INFO: Container delcm-volume-test ready: false, restart count 0 Mar 18 09:28:47.388: INFO: Container updcm-volume-test ready: false, restart count 0 Mar 18 09:28:47.388: INFO: pod-failure-ignore-lv5gq started at 2023-03-18 09:28:36 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:47.388: INFO: Container c ready: false, restart count 0 Mar 18 09:28:47.388: INFO: csi-mockplugin-0 started at 2023-03-18 09:28:07 +0000 UTC (0+3 container statuses recorded) Mar 18 09:28:47.388: INFO: Container csi-provisioner ready: true, restart count 0 Mar 18 09:28:47.388: INFO: Container driver-registrar ready: true, restart count 0 Mar 18 09:28:47.388: INFO: Container mock ready: true, restart count 0 Mar 18 09:28:47.388: INFO: pod-service-account-mountsa-mountspec started at 2023-03-18 09:28:45 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:47.388: INFO: Container token-test ready: false, restart count 0 Mar 18 09:28:47.388: INFO: pod-service-account-defaultsa started at 2023-03-18 09:28:45 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:47.388: INFO: Container token-test ready: false, restart count 0 Mar 18 09:28:47.388: INFO: pod-service-account-defaultsa-mountspec started at 2023-03-18 
09:28:45 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:47.388: INFO: Container token-test ready: false, restart count 0 Mar 18 09:28:47.388: INFO: npd-v0.8.9-xsl94 started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:47.388: INFO: Container node-problem-detector ready: true, restart count 0 Mar 18 09:28:47.388: INFO: test-webserver-dd109698-b56f-424e-8f4f-795cd8dde139 started at 2023-03-18 09:26:07 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:47.388: INFO: Container test-webserver ready: true, restart count 0 Mar 18 09:28:47.388: INFO: pod-failure-ignore-2mj4b started at 2023-03-18 09:28:32 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:47.388: INFO: Container c ready: false, restart count 0 Mar 18 09:28:47.388: INFO: security-context-43d9d816-c113-4d14-88fb-e2a5536a82c1 started at 2023-03-18 09:28:44 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:47.388: INFO: Container test-container ready: false, restart count 0 Mar 18 09:28:47.388: INFO: pod-service-account-mountsa started at 2023-03-18 09:28:45 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:47.388: INFO: Container token-test ready: false, restart count 0 Mar 18 09:28:47.388: INFO: pvc-volume-tester-wrnnz started at 2023-03-18 09:28:20 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:47.388: INFO: Container volume-tester ready: true, restart count 0 Mar 18 09:28:47.388: INFO: httpd started at 2023-03-18 09:28:35 +0000 UTC (0+1 container statuses recorded) Mar 18 09:28:47.388: INFO: Container httpd ready: true, restart count 0 Mar 18 09:28:47.905: INFO: Latency metrics for node e2e-9e86028ad1-674b9-minion-group-s3x0 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/18/23 09:28:47.905 (2.511s) < Exit [DeferCleanup (Each)] [sig-node] Security Context - dump namespaces | framework.go:209 @ 03/18/23 09:28:47.905 (2.511s) > Enter [DeferCleanup (Each)] [sig-node] Security Context 
- tear down framework | framework.go:206 @ 03/18/23 09:28:47.905 STEP: Destroying namespace "security-context-test-8846" for this suite. - test/e2e/framework/framework.go:351 @ 03/18/23 09:28:47.905 < Exit [DeferCleanup (Each)] [sig-node] Security Context - tear down framework | framework.go:206 @ 03/18/23 09:28:47.962 (57ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/18/23 09:28:47.962 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/18/23 09:28:47.962 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sSysctls\s\[LinuxOnly\]\s\[NodeConformance\]\sshould\ssupport\ssysctls\s\[MinimumKubeletVersion\:1\.21\]\s\[Conformance\]$'
[FAILED] timed out waiting for the condition In [It] at: test/e2e/common/node/sysctl.go:97 @ 03/18/23 09:34:45.527 from junit_01.xml
> Enter [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - test/e2e/common/node/sysctl.go:37 @ 03/18/23 09:29:45.014 < Exit [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - test/e2e/common/node/sysctl.go:37 @ 03/18/23 09:29:45.014 (0s) > Enter [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - set up framework | framework.go:191 @ 03/18/23 09:29:45.014 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:211 @ 03/18/23 09:29:45.014 Mar 18 09:29:45.014: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename sysctl - test/e2e/framework/framework.go:250 @ 03/18/23 09:29:45.016 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 03/18/23 09:29:45.15 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:262 @ 03/18/23 09:29:45.23 < Exit [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - set up framework | framework.go:191 @ 03/18/23 09:29:45.31 (296ms) > Enter [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - test/e2e/framework/metrics/init/init.go:33 @ 03/18/23 09:29:45.31 < Exit [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - test/e2e/framework/metrics/init/init.go:33 @ 03/18/23 09:29:45.31 (0s) > Enter [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - test/e2e/common/node/sysctl.go:67 @ 03/18/23 09:29:45.31 < Exit [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - test/e2e/common/node/sysctl.go:67 @ 03/18/23 09:29:45.31 (0s) > Enter [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] - test/e2e/common/node/sysctl.go:77 @ 03/18/23 09:29:45.31 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl - test/e2e/common/node/sysctl.go:89 @ 03/18/23 09:29:45.31 STEP: Watching for error events or started pod - test/e2e/common/node/sysctl.go:92 @ 03/18/23 09:29:45.354 Automatically 
polling progress: [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] (Spec Runtime: 5m0.296s) test/e2e/common/node/sysctl.go:77 In [It] (Node Runtime: 5m0s) test/e2e/common/node/sysctl.go:77 At [By Step] Watching for error events or started pod (Step Runtime: 4m59.956s) test/e2e/common/node/sysctl.go:92 Spec Goroutine goroutine 2726 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x7f5364284918, 0xc002ef5200}, 0xc00692d260, 0x2bc6eca?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:205 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7f5364284918, 0xc002ef5200}, 0x18?, 0xc000301000?, 0xc001a130b0?) vendor/k8s.io/apimachinery/pkg/util/wait/poll.go:260 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7f5364284918, 0xc002ef5200}, 0xc002fe3401?, 0xc001a4de90?, 0x3c659e7?) vendor/k8s.io/apimachinery/pkg/util/wait/poll.go:85 k8s.io/kubernetes/test/e2e/framework/pod.(*PodClient).WaitForErrorEventOrSuccess(0xc00049e4b0?, {0x7f5364284918?, 0xc002ef5200?}, 0xc000b1f6c0?) 
test/e2e/framework/pod/pod_client.go:261 > k8s.io/kubernetes/test/e2e/common/node.glob..func22.4({0x7f5364284918, 0xc002ef5200}) test/e2e/common/node/sysctl.go:96 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x727dd60?, 0xc002ef5200}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:456 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:863 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:850 Mar 18 09:34:45.527: INFO: Unexpected error: <wait.errInterrupted>: timed out waiting for the condition { cause: <*errors.errorString | 0xc00017dbe0>{ s: "timed out waiting for the condition", }, } [FAILED] timed out waiting for the condition In [It] at: test/e2e/common/node/sysctl.go:97 @ 03/18/23 09:34:45.527 < Exit [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] - test/e2e/common/node/sysctl.go:77 @ 03/18/23 09:34:45.527 (5m0.217s) > Enter [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - test/e2e/framework/node/init/init.go:33 @ 03/18/23 09:34:45.527 Mar 18 09:34:45.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - test/e2e/framework/node/init/init.go:33 @ 03/18/23 09:34:45.663 (137ms) > Enter [DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - test/e2e/framework/metrics/init/init.go:35 @ 03/18/23 09:34:45.663 < Exit [DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - test/e2e/framework/metrics/init/init.go:35 @ 03/18/23 09:34:45.664 (0s) > Enter [DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - dump namespaces | framework.go:209 @ 03/18/23 09:34:45.664 STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/18/23 09:34:45.664 STEP: Collecting events from 
namespace "sysctl-9120". - test/e2e/framework/debug/dump.go:42 @ 03/18/23 09:34:45.664 STEP: Found 1 events. - test/e2e/framework/debug/dump.go:46 @ 03/18/23 09:34:45.712 Mar 18 09:34:45.712: INFO: At 2023-03-18 09:29:45 +0000 UTC - event for sysctl-8cef6c45-e7a0-4261-bdbf-02719d33b51f: {default-scheduler } Scheduled: Successfully assigned sysctl-9120/sysctl-8cef6c45-e7a0-4261-bdbf-02719d33b51f to e2e-9e86028ad1-674b9-minion-group-l6p2 Mar 18 09:34:45.771: INFO: POD NODE PHASE GRACE CONDITIONS Mar 18 09:34:45.771: INFO: sysctl-8cef6c45-e7a0-4261-bdbf-02719d33b51f e2e-9e86028ad1-674b9-minion-group-l6p2 Succeeded [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-18 09:29:45 +0000 UTC PodCompleted } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-18 09:29:47 +0000 UTC PodCompleted } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-18 09:29:47 +0000 UTC PodCompleted } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-18 09:29:45 +0000 UTC }] Mar 18 09:34:45.771: INFO: Mar 18 09:34:45.924: INFO: Logging node info for node e2e-9e86028ad1-674b9-master Mar 18 09:34:45.979: INFO: Node Info: &Node{ObjectMeta:{e2e-9e86028ad1-674b9-master 267e5023-e569-49ff-9163-80ff52b2e553 31205 0 2023-03-18 09:22:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-18 09:22:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-03-18 09:33:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} 
{<nil>} 1 DecimalSI},ephemeral-storage: {{20617822208 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3848937472 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{18556039957 0} {<nil>} 18556039957 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3586793472 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:33:04 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 09:33:04 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:33:04 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:33:04 +0000 UTC,LastTransitionTime:2023-03-18 09:22:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.2,},NodeAddress{Type:ExternalIP,Address:34.83.71.20,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-master.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-master.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:42d0f710c860b2bd41ccdaf5ca173acd,SystemUUID:42d0f710-c860-b2bd-41cc-daf5ca173acd,BootID:16a230c9-f4cf-4c24-8530-18ce9126f638,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:121906531,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:113849341,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:59679728,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 
registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 18 09:34:45.980: INFO: Logging kubelet events for node e2e-9e86028ad1-674b9-master Mar 18 09:34:46.030: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-master Mar 18 09:34:46.105: INFO: kube-scheduler-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:46.105: INFO: Container kube-scheduler ready: true, restart count 0 Mar 18 09:34:46.105: INFO: etcd-server-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:46.105: INFO: Container etcd-container ready: true, restart count 0 Mar 18 09:34:46.105: INFO: konnectivity-server-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:46.105: INFO: Container konnectivity-server-container ready: true, restart count 0 Mar 18 09:34:46.105: INFO: kube-controller-manager-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:46.105: INFO: Container kube-controller-manager ready: true, restart count 1 Mar 18 09:34:46.105: INFO: l7-lb-controller-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:22:07 +0000 UTC (0+1 container statuses recorded) Mar 18 
09:34:46.105: INFO: Container l7-lb-controller ready: true, restart count 2 Mar 18 09:34:46.105: INFO: metadata-proxy-v0.1-nnl66 started at 2023-03-18 09:22:38 +0000 UTC (0+2 container statuses recorded) Mar 18 09:34:46.105: INFO: Container metadata-proxy ready: true, restart count 0 Mar 18 09:34:46.105: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Mar 18 09:34:46.105: INFO: etcd-server-events-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:46.105: INFO: Container etcd-container ready: true, restart count 0 Mar 18 09:34:46.105: INFO: kube-apiserver-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:46.105: INFO: Container kube-apiserver ready: true, restart count 0 Mar 18 09:34:46.105: INFO: kube-addon-manager-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:22:06 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:46.105: INFO: Container kube-addon-manager ready: true, restart count 0 Mar 18 09:34:46.328: INFO: Latency metrics for node e2e-9e86028ad1-674b9-master Mar 18 09:34:46.328: INFO: Logging node info for node e2e-9e86028ad1-674b9-minion-group-6qbb Mar 18 09:34:46.391: INFO: Node Info: &Node{ObjectMeta:{e2e-9e86028ad1-674b9-minion-group-6qbb 8c1a6f16-41d1-4196-bf43-efe27f8d8a66 36554 0 2023-03-18 09:22:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-minion-group-6qbb kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:e2e-9e86028ad1-674b9-minion-group-6qbb topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] 
map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-03-18 09:22:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-03-18 09:22:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2023-03-18 09:31:17 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-03-18 09:34:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-03-18 09:34:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-minion-group-6qbb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 
DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103865303040 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7806021632 0} {<nil>} 7623068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93478772582 0} {<nil>} 93478772582 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7543877632 0} {<nil>} 7367068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-03-18 09:31:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-03-18 09:31:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-03-18 09:31:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-03-18 09:31:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:True,LastHeartbeatTime:2023-03-18 09:31:16 +0000 UTC,LastTransitionTime:2023-03-18 09:31:15 +0000 UTC,Reason:DockerHung,Message:kernel: INFO: task docker:12345 blocked for more than 120 seconds.,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-03-18 09:31:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 
UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:34:36 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 09:34:36 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:34:36 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:34:36 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.3,},NodeAddress{Type:ExternalIP,Address:35.197.71.72,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-minion-group-6qbb.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-minion-group-6qbb.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8effcb9d241b10eacb89037ecd76b44c,SystemUUID:8effcb9d-241b-10ea-cb89-037ecd76b44c,BootID:8ef64925-c34b-4653-8945-c2170edc6327,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:72680802,},ContainerImage{Names:[registry.k8s.io/node-problem-detector/node-problem-detector@sha256:c4a75e50c3ee30daa78b7149de781f66236885850b1ea7b0c1a062af5019e019 
registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.9],SizeBytes:56740357,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:e468dddcd275163a042ab297b2d8c2aca50d5e148d2d22f3b6ba119e2f31fa79 registry.k8s.io/sig-storage/csi-provisioner:v3.4.0],SizeBytes:27427836,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 
registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[gcr.io/k8s-authenticated-test/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/k8s-authenticated-test/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:9b2d9abda017c28b12f84a344f57cd73fbdb6c2bd7dd5bdd5018246ad1093ba6 registry.k8s.io/sig-storage/hostpathplugin:v1.11.0],SizeBytes:18233005,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns:v1.10.1],SizeBytes:16190758,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef 
registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/build-image/distroless-iptables@sha256:13ac9b3c476d7290a4451d65d09d6016a2cb89836ffbfa4eae55b72731a22080 registry.k8s.io/build-image/distroless-iptables:v0.2.2],SizeBytes:7729580,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 18 09:34:46.391: INFO: Logging kubelet events for node 
e2e-9e86028ad1-674b9-minion-group-6qbb Mar 18 09:34:46.446: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-minion-group-6qbb Mar 18 09:34:46.563: INFO: hostexec-e2e-9e86028ad1-674b9-minion-group-6qbb-vzst4 started at 2023-03-18 09:34:32 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:46.563: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:34:46.563: INFO: metadata-proxy-v0.1-tpg7b started at 2023-03-18 09:22:32 +0000 UTC (0+2 container statuses recorded) Mar 18 09:34:46.563: INFO: Container metadata-proxy ready: true, restart count 0 Mar 18 09:34:46.563: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Mar 18 09:34:46.563: INFO: pause-pod-1 started at 2023-03-18 09:34:26 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:46.563: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:34:46.563: INFO: hostexec-e2e-9e86028ad1-674b9-minion-group-6qbb-zftd8 started at 2023-03-18 09:34:44 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:46.563: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:34:46.563: INFO: liveness-6e42f777-5053-4740-856d-77370ed5796a started at 2023-03-18 09:31:03 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:46.563: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:34:46.563: INFO: pod-qos-class-995bafb1-dd28-40fe-8ba9-0cd098643b9c started at 2023-03-18 09:32:58 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:46.563: INFO: Container agnhost ready: false, restart count 0 Mar 18 09:34:46.563: INFO: nfs-server started at 2023-03-18 09:34:30 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:46.563: INFO: Container nfs-server ready: true, restart count 0 Mar 18 09:34:46.563: INFO: pod-2 started at 2023-03-18 09:34:12 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:46.563: INFO: Container donothing ready: true, restart count 0 Mar 18 09:34:46.563: INFO: 
konnectivity-agent-zpvjh started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:46.563: INFO: Container konnectivity-agent ready: true, restart count 0 Mar 18 09:34:46.563: INFO: adopt-release-kdjm4 started at 2023-03-18 09:34:08 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:46.563: INFO: Container c ready: true, restart count 0 Mar 18 09:34:46.563: INFO: csi-hostpathplugin-0 started at 2023-03-18 09:34:45 +0000 UTC (0+7 container statuses recorded) Mar 18 09:34:46.563: INFO: Container csi-attacher ready: false, restart count 0 Mar 18 09:34:46.563: INFO: Container csi-provisioner ready: false, restart count 0 Mar 18 09:34:46.563: INFO: Container csi-resizer ready: false, restart count 0 Mar 18 09:34:46.563: INFO: Container csi-snapshotter ready: false, restart count 0 Mar 18 09:34:46.563: INFO: Container hostpath ready: false, restart count 0 Mar 18 09:34:46.563: INFO: Container liveness-probe ready: false, restart count 0 Mar 18 09:34:46.563: INFO: Container node-driver-registrar ready: false, restart count 0 Mar 18 09:34:46.563: INFO: busybox-07255980-96b4-4e2e-af19-fb342b60f84d started at 2023-03-18 09:33:47 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:46.563: INFO: Container busybox ready: true, restart count 0 Mar 18 09:34:46.563: INFO: adopt-release-cjxxc started at 2023-03-18 09:34:08 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:46.563: INFO: Container c ready: true, restart count 0 Mar 18 09:34:46.563: INFO: netserver-0 started at 2023-03-18 09:34:34 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:46.563: INFO: Container webserver ready: true, restart count 0 Mar 18 09:34:46.563: INFO: inline-volume-kzglv started at 2023-03-18 09:32:59 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:46.563: INFO: Container volume-tester ready: false, restart count 0 Mar 18 09:34:46.563: INFO: inline-volume-bd77w started at 2023-03-18 09:33:36 +0000 UTC (0+1 container statuses recorded) Mar 
18 09:34:46.563: INFO: Container volume-tester ready: false, restart count 0 Mar 18 09:34:46.563: INFO: pvc-volume-tester-writer-st6cm started at 2023-03-18 09:34:43 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:46.563: INFO: Container volume-tester ready: false, restart count 0 Mar 18 09:34:46.563: INFO: adopt-release-lc848 started at 2023-03-18 09:34:14 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:46.563: INFO: Container c ready: true, restart count 0 Mar 18 09:34:46.563: INFO: pod-0 started at 2023-03-18 09:34:12 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:46.563: INFO: Container donothing ready: true, restart count 0 Mar 18 09:34:46.563: INFO: execpodml4t8 started at 2023-03-18 09:34:43 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:46.563: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:34:46.563: INFO: test-container-pod started at 2023-03-18 09:33:18 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:46.563: INFO: Container webserver ready: true, restart count 0 Mar 18 09:34:46.563: INFO: npd-v0.8.9-fhnmg started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:46.563: INFO: Container node-problem-detector ready: true, restart count 0 Mar 18 09:34:46.563: INFO: hostexec-e2e-9e86028ad1-674b9-minion-group-6qbb-2276n started at 2023-03-18 09:34:41 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:46.563: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:34:46.563: INFO: test-container-pod started at 2023-03-18 09:33:06 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:46.563: INFO: Container webserver ready: true, restart count 0 Mar 18 09:34:46.563: INFO: kube-proxy-e2e-9e86028ad1-674b9-minion-group-6qbb started at 2023-03-18 09:22:32 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:46.563: INFO: Container kube-proxy ready: true, restart count 0 Mar 18 09:34:46.563: INFO: pod-1 started at 2023-03-18 09:34:12 
+0000 UTC (0+1 container statuses recorded) Mar 18 09:34:46.563: INFO: Container donothing ready: true, restart count 0 Mar 18 09:34:46.563: INFO: externalip-test-gvlfp started at 2023-03-18 09:34:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:46.563: INFO: Container externalip-test ready: true, restart count 0 Mar 18 09:34:46.563: INFO: pod2 started at 2023-03-18 09:34:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:46.563: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:34:47.100: INFO: Latency metrics for node e2e-9e86028ad1-674b9-minion-group-6qbb Mar 18 09:34:47.100: INFO: Logging node info for node e2e-9e86028ad1-674b9-minion-group-l6p2 Mar 18 09:34:47.171: INFO: Node Info: &Node{ObjectMeta:{e2e-9e86028ad1-674b9-minion-group-l6p2 598ace61-5854-4ce4-a4e9-6965d58f0e1a 36459 0 2023-03-18 09:22:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-minion-group-l6p2 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:e2e-9e86028ad1-674b9-minion-group-l6p2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-18 09:22:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2023-03-18 09:31:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-03-18 09:34:16 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-03-18 09:34:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-minion-group-l6p2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103865303040 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7806021632 0} {<nil>} 7623068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93478772582 0} {<nil>} 93478772582 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7543877632 0} {<nil>} 7367068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-03-18 09:31:18 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-03-18 09:31:18 +0000 UTC,LastTransitionTime:2023-03-18 
09:22:53 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-03-18 09:31:18 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-03-18 09:31:18 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-03-18 09:31:18 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:True,LastHeartbeatTime:2023-03-18 09:31:18 +0000 UTC,LastTransitionTime:2023-03-18 09:31:17 +0000 UTC,Reason:DockerHung,Message:kernel: INFO: task docker:12345 blocked for more than 120 seconds.,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:34:33 +0000 UTC,LastTransitionTime:2023-03-18 09:22:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 09:34:33 +0000 UTC,LastTransitionTime:2023-03-18 09:22:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:34:33 +0000 UTC,LastTransitionTime:2023-03-18 09:22:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:34:33 +0000 
UTC,LastTransitionTime:2023-03-18 09:22:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.5,},NodeAddress{Type:ExternalIP,Address:35.230.36.22,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-minion-group-l6p2.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-minion-group-l6p2.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c65da20697a4bfad03dfea306c4caca3,SystemUUID:c65da206-97a4-bfad-03df-ea306c4caca3,BootID:bab97861-0225-4291-912a-eb1db18f8ad7,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 
registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:72680802,},ContainerImage{Names:[registry.k8s.io/node-problem-detector/node-problem-detector@sha256:c4a75e50c3ee30daa78b7149de781f66236885850b1ea7b0c1a062af5019e019 registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.9],SizeBytes:56740357,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:e468dddcd275163a042ab297b2d8c2aca50d5e148d2d22f3b6ba119e2f31fa79 registry.k8s.io/sig-storage/csi-provisioner:v3.4.0],SizeBytes:27427836,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 
registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/debian-base@sha256:ebda8587ec0f49eb88ee3a608ef018484908cbc5aa32556a0d78356088c185d4 registry.k8s.io/debian-base:v2.0.0],SizeBytes:21093264,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:9b2d9abda017c28b12f84a344f57cd73fbdb6c2bd7dd5bdd5018246ad1093ba6 registry.k8s.io/sig-storage/hostpathplugin:v1.11.0],SizeBytes:18233005,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns:v1.10.1],SizeBytes:16190758,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/apparmor-loader@sha256:c2ba46d9cf4549528f80d4850630b712372715e0c556d35d5c3016144365d882 registry.k8s.io/e2e-test-images/apparmor-loader:1.4],SizeBytes:8695007,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 18 09:34:47.171: INFO: Logging kubelet events for node e2e-9e86028ad1-674b9-minion-group-l6p2 Mar 18 09:34:47.223: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-minion-group-l6p2 Mar 18 09:34:47.402: INFO: external-provisioner-fgsxj started at 2023-03-18 09:34:45 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:47.402: INFO: Container nfs-provisioner ready: false, restart count 0 Mar 18 09:34:47.402: INFO: pod-should-be-evicted9f5facef-a3ca-4c44-8d21-691b6f325305 started at 2023-03-18 09:34:15 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:47.402: INFO: Container bar ready: true, restart count 0 Mar 18 09:34:47.402: INFO: pod1 started at 2023-03-18 09:34:38 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:47.402: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:34:47.402: INFO: nfs-injector started at 2023-03-18 09:34:35 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:47.402: INFO: Container nfs-injector ready: true, restart count 0 Mar 18 09:34:47.402: INFO: test-container-pod started at 2023-03-18 09:34:45 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:47.402: INFO: Container webserver ready: true, restart count 0 Mar 18 09:34:47.402: INFO: sysctl-8cef6c45-e7a0-4261-bdbf-02719d33b51f started at 2023-03-18 09:29:45 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:47.402: INFO: Container test-container ready: false, restart count 0 Mar 18 09:34:47.402: INFO: busybox-251aa491-cfac-4819-a4e2-f680b0a26a7b started at 2023-03-18 09:34:27 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:47.402: INFO: Container busybox ready: true, restart count 0 Mar 18 09:34:47.402: 
INFO: webserver-pod started at 2023-03-18 09:34:23 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:47.402: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:34:47.402: INFO: pod-adoption started at 2023-03-18 09:34:42 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:47.402: INFO: Container pod-adoption ready: true, restart count 0 Mar 18 09:34:47.402: INFO: netserver-1 started at 2023-03-18 09:34:35 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:47.402: INFO: Container webserver ready: true, restart count 0 Mar 18 09:34:47.402: INFO: externalip-test-m69pb started at 2023-03-18 09:34:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:47.402: INFO: Container externalip-test ready: true, restart count 0 Mar 18 09:34:47.402: INFO: pod-subpath-test-preprovisionedpv-vqg2 started at 2023-03-18 09:34:40 +0000 UTC (1+1 container statuses recorded) Mar 18 09:34:47.402: INFO: Init container init-volume-preprovisionedpv-vqg2 ready: true, restart count 0 Mar 18 09:34:47.402: INFO: Container test-container-subpath-preprovisionedpv-vqg2 ready: true, restart count 0 Mar 18 09:34:47.402: INFO: external-provisioner-xzz5h started at 2023-03-18 09:34:28 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:47.402: INFO: Container nfs-provisioner ready: true, restart count 0 Mar 18 09:34:47.402: INFO: metrics-server-v0.5.2-57d47cbf5-gtdjb started at 2023-03-18 09:22:55 +0000 UTC (0+2 container statuses recorded) Mar 18 09:34:47.402: INFO: Container metrics-server ready: true, restart count 0 Mar 18 09:34:47.402: INFO: Container metrics-server-nanny ready: true, restart count 0 Mar 18 09:34:47.402: INFO: hostexec-e2e-9e86028ad1-674b9-minion-group-l6p2-h8p8x started at 2023-03-18 09:34:31 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:47.402: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:34:47.402: INFO: execpodt596c started at 2023-03-18 09:34:43 +0000 UTC (0+1 container statuses recorded) 
Mar 18 09:34:47.402: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:34:47.402: INFO: external-provisioner-jsvv5 started at 2023-03-18 09:34:33 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:47.402: INFO: Container nfs-provisioner ready: true, restart count 0 Mar 18 09:34:47.402: INFO: test-deployment-58db457f5f-dmf25 started at 2023-03-18 09:34:47 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:47.402: INFO: Container test-deployment ready: false, restart count 0 Mar 18 09:34:47.402: INFO: netserver-1 started at 2023-03-18 09:32:57 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:47.402: INFO: Container webserver ready: true, restart count 0 Mar 18 09:34:47.402: INFO: konnectivity-agent-879m7 started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:47.402: INFO: Container konnectivity-agent ready: true, restart count 0 Mar 18 09:34:47.402: INFO: host-test-container-pod started at 2023-03-18 09:34:45 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:47.402: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:34:47.402: INFO: netserver-1 started at 2023-03-18 09:32:45 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:47.402: INFO: Container webserver ready: true, restart count 0 Mar 18 09:34:47.402: INFO: rs-6b6m8 started at 2023-03-18 09:34:25 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:47.402: INFO: Container donothing ready: false, restart count 0 Mar 18 09:34:47.402: INFO: metadata-proxy-v0.1-424sh started at 2023-03-18 09:22:36 +0000 UTC (0+2 container statuses recorded) Mar 18 09:34:47.402: INFO: Container metadata-proxy ready: true, restart count 0 Mar 18 09:34:47.402: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Mar 18 09:34:47.402: INFO: hostexec-e2e-9e86028ad1-674b9-minion-group-l6p2-ffl2b started at 2023-03-18 09:34:42 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:47.402: INFO: Container 
agnhost-container ready: true, restart count 0 Mar 18 09:34:47.402: INFO: webserver-pod started at 2023-03-18 09:30:10 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:47.402: INFO: Container agnhost ready: false, restart count 0 Mar 18 09:34:47.402: INFO: coredns-8f5847b64-8mvxr started at 2023-03-18 09:27:04 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:47.402: INFO: Container coredns ready: true, restart count 0 Mar 18 09:34:47.402: INFO: kube-proxy-e2e-9e86028ad1-674b9-minion-group-l6p2 started at 2023-03-18 09:22:35 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:47.402: INFO: Container kube-proxy ready: true, restart count 0 Mar 18 09:34:47.402: INFO: labelsupdate35aae371-e964-4b3a-8e01-ff30edbb8748 started at 2023-03-18 09:34:44 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:47.402: INFO: Container client-container ready: true, restart count 0 Mar 18 09:34:47.402: INFO: npd-v0.8.9-zdpdp started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:47.402: INFO: Container node-problem-detector ready: true, restart count 0 Mar 18 09:34:47.402: INFO: pod-csi-inline-volumes started at 2023-03-18 09:32:51 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:47.402: INFO: Container pod-csi-inline-volumes ready: false, restart count 0 Mar 18 09:34:47.834: INFO: Latency metrics for node e2e-9e86028ad1-674b9-minion-group-l6p2 Mar 18 09:34:47.834: INFO: Logging node info for node e2e-9e86028ad1-674b9-minion-group-s3x0 Mar 18 09:34:47.886: INFO: Node Info: &Node{ObjectMeta:{e2e-9e86028ad1-674b9-minion-group-s3x0 4bd190bd-b287-42b8-bf6e-86b0dfbbe357 36776 0 2023-03-18 09:22:29 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b io.kubernetes.storage.mock/node:some-mock-node 
kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-minion-group-s3x0 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:e2e-9e86028ad1-674b9-minion-group-s3x0 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-read-write-once-pod-7422":"e2e-9e86028ad1-674b9-minion-group-s3x0","csi-mock-csi-mock-volumes-workload-7352":"e2e-9e86028ad1-674b9-minion-group-s3x0","csi-mock-csi-mock-volumes-workload-9649":"e2e-9e86028ad1-674b9-minion-group-s3x0"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-18 09:22:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {node-problem-detector Update v1 2023-03-18 09:31:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-03-18 09:34:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2023-03-18 09:34:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-minion-group-s3x0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103865303040 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7806021632 0} {<nil>} 7623068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93478772582 0} {<nil>} 93478772582 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7543877632 0} {<nil>} 7367068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-03-18 09:31:20 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-03-18 09:31:20 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-03-18 09:31:20 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:True,LastHeartbeatTime:2023-03-18 09:31:20 +0000 UTC,LastTransitionTime:2023-03-18 09:31:19 +0000 UTC,Reason:DockerHung,Message:kernel: INFO: task docker:12345 blocked for more than 120 seconds.,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-03-18 
09:31:20 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-03-18 09:31:20 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:39 +0000 UTC,LastTransitionTime:2023-03-18 09:22:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:34:25 +0000 UTC,LastTransitionTime:2023-03-18 09:22:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 09:34:25 +0000 UTC,LastTransitionTime:2023-03-18 09:22:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:34:25 +0000 UTC,LastTransitionTime:2023-03-18 09:22:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:34:25 +0000 UTC,LastTransitionTime:2023-03-18 09:22:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.4,},NodeAddress{Type:ExternalIP,Address:35.197.75.196,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-minion-group-s3x0.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-minion-group-s3x0.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f48035c3fd56450624fe69b9577c2359,SystemUUID:f48035c3-fd56-4506-24fe-69b9577c2359,BootID:f7e24489-9538-4f10-8ca6-393d7dcc2190,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:72680802,},ContainerImage{Names:[registry.k8s.io/node-problem-detector/node-problem-detector@sha256:c4a75e50c3ee30daa78b7149de781f66236885850b1ea7b0c1a062af5019e019 registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.9],SizeBytes:56740357,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e 
registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:e468dddcd275163a042ab297b2d8c2aca50d5e148d2d22f3b6ba119e2f31fa79 registry.k8s.io/sig-storage/csi-provisioner:v3.4.0],SizeBytes:27427836,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 
registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[gcr.io/k8s-authenticated-test/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/k8s-authenticated-test/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:9b2d9abda017c28b12f84a344f57cd73fbdb6c2bd7dd5bdd5018246ad1093ba6 registry.k8s.io/sig-storage/hostpathplugin:v1.11.0],SizeBytes:18233005,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns:v1.10.1],SizeBytes:16190758,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/apparmor-loader@sha256:c2ba46d9cf4549528f80d4850630b712372715e0c556d35d5c3016144365d882 
registry.k8s.io/e2e-test-images/apparmor-loader:1.4],SizeBytes:8695007,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 
registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-read-write-once-pod-7422^9e275a46-c56f-11ed-8153-eefb10531377 kubernetes.io/csi/csi-mock-csi-mock-volumes-workload-9649^edc0e750-c56f-11ed-83c2-fee0a4cf756d],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-read-write-once-pod-7422^9e275a46-c56f-11ed-8153-eefb10531377,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-workload-9649^edc0e750-c56f-11ed-83c2-fee0a4cf756d,DevicePath:,},},Config:nil,},} Mar 18 09:34:47.886: INFO: Logging kubelet events for node e2e-9e86028ad1-674b9-minion-group-s3x0 Mar 18 09:34:47.935: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-minion-group-s3x0 Mar 18 09:34:48.042: INFO: pvc-volume-tester-f6hmh started at 2023-03-18 09:33:26 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:48.042: INFO: Container volume-tester ready: true, restart count 0 Mar 18 09:34:48.042: INFO: coredns-8f5847b64-6lvkh started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:48.042: INFO: Container coredns ready: true, restart count 0 Mar 18 09:34:48.042: INFO: l7-default-backend-856d874f49-wj97r started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:48.042: INFO: Container default-http-backend ready: true, restart count 0 Mar 18 09:34:48.042: INFO: netserver-2 started at 2023-03-18 09:32:45 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:48.042: INFO: Container webserver ready: true, restart count 0 Mar 18 09:34:48.042: INFO: csi-mockplugin-attacher-0 started at 2023-03-18 09:32:52 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:48.042: INFO: Container csi-attacher ready: true, restart count 0 Mar 18 09:34:48.042: INFO: volume-snapshot-controller-0 started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:48.042: INFO: Container volume-snapshot-controller ready: true, 
restart count 0 Mar 18 09:34:48.042: INFO: test-deployment-58db457f5f-rv6vp started at 2023-03-18 09:34:47 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:48.042: INFO: Container test-deployment ready: false, restart count 0 Mar 18 09:34:48.042: INFO: kube-dns-autoscaler-7b444c59c9-bfphp started at 2023-03-18 09:27:27 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:48.042: INFO: Container autoscaler ready: true, restart count 0 Mar 18 09:34:48.042: INFO: csi-mockplugin-0 started at 2023-03-18 09:32:52 +0000 UTC (0+3 container statuses recorded) Mar 18 09:34:48.042: INFO: Container csi-provisioner ready: true, restart count 0 Mar 18 09:34:48.042: INFO: Container driver-registrar ready: true, restart count 0 Mar 18 09:34:48.042: INFO: Container mock ready: true, restart count 0 Mar 18 09:34:48.042: INFO: pod-eba73012-ac52-4fd8-9433-5e3755c6e150 started at 2023-03-18 09:31:24 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:48.042: INFO: Container write-pod ready: false, restart count 0 Mar 18 09:34:48.042: INFO: csi-hostpathplugin-0 started at 2023-03-18 09:31:07 +0000 UTC (0+7 container statuses recorded) Mar 18 09:34:48.042: INFO: Container csi-attacher ready: true, restart count 0 Mar 18 09:34:48.042: INFO: Container csi-provisioner ready: true, restart count 0 Mar 18 09:34:48.042: INFO: Container csi-resizer ready: true, restart count 0 Mar 18 09:34:48.042: INFO: Container csi-snapshotter ready: true, restart count 0 Mar 18 09:34:48.042: INFO: Container hostpath ready: true, restart count 0 Mar 18 09:34:48.042: INFO: Container liveness-probe ready: true, restart count 0 Mar 18 09:34:48.042: INFO: Container node-driver-registrar ready: true, restart count 0 Mar 18 09:34:48.042: INFO: csi-mockplugin-attacher-0 started at 2023-03-18 09:33:18 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:48.042: INFO: Container csi-attacher ready: true, restart count 0 Mar 18 09:34:48.042: INFO: metadata-proxy-v0.1-5k8j4 started at 2023-03-18 
09:22:30 +0000 UTC (0+2 container statuses recorded) Mar 18 09:34:48.042: INFO: Container metadata-proxy ready: true, restart count 0 Mar 18 09:34:48.042: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Mar 18 09:34:48.042: INFO: csi-mockplugin-0 started at 2023-03-18 09:33:51 +0000 UTC (0+4 container statuses recorded) Mar 18 09:34:48.042: INFO: Container busybox ready: true, restart count 0 Mar 18 09:34:48.042: INFO: Container csi-provisioner ready: true, restart count 0 Mar 18 09:34:48.042: INFO: Container driver-registrar ready: true, restart count 0 Mar 18 09:34:48.042: INFO: Container mock ready: true, restart count 0 Mar 18 09:34:48.042: INFO: kube-proxy-e2e-9e86028ad1-674b9-minion-group-s3x0 started at 2023-03-18 09:22:30 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:48.042: INFO: Container kube-proxy ready: true, restart count 0 Mar 18 09:34:48.042: INFO: netserver-2 started at 2023-03-18 09:32:57 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:48.042: INFO: Container webserver ready: true, restart count 0 Mar 18 09:34:48.042: INFO: pvc-volume-tester-vshq5 started at 2023-03-18 09:32:57 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:48.042: INFO: Container volume-tester ready: true, restart count 0 Mar 18 09:34:48.042: INFO: konnectivity-agent-hv8gl started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:48.042: INFO: Container konnectivity-agent ready: true, restart count 0 Mar 18 09:34:48.042: INFO: pod-9c52b100-d6e7-444a-b48f-2643e3956bf6 started at 2023-03-18 09:31:12 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:48.042: INFO: Container write-pod ready: true, restart count 0 Mar 18 09:34:48.042: INFO: netserver-2 started at 2023-03-18 09:34:35 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:48.042: INFO: Container webserver ready: true, restart count 0 Mar 18 09:34:48.042: INFO: csi-mockplugin-0 started at 2023-03-18 09:33:18 +0000 UTC (0+3 
container statuses recorded) Mar 18 09:34:48.042: INFO: Container csi-provisioner ready: true, restart count 0 Mar 18 09:34:48.042: INFO: Container driver-registrar ready: true, restart count 0 Mar 18 09:34:48.042: INFO: Container mock ready: true, restart count 0 Mar 18 09:34:48.042: INFO: npd-v0.8.9-xsl94 started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:34:48.042: INFO: Container node-problem-detector ready: true, restart count 0 Mar 18 09:34:48.395: INFO: Latency metrics for node e2e-9e86028ad1-674b9-minion-group-s3x0 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/18/23 09:34:48.395 (2.732s) < Exit [DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - dump namespaces | framework.go:209 @ 03/18/23 09:34:48.395 (2.732s) > Enter [DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - tear down framework | framework.go:206 @ 03/18/23 09:34:48.395 STEP: Destroying namespace "sysctl-9120" for this suite. - test/e2e/framework/framework.go:351 @ 03/18/23 09:34:48.396 < Exit [DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - tear down framework | framework.go:206 @ 03/18/23 09:34:48.454 (59ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/18/23 09:34:48.454 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/18/23 09:34:48.454 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sSysctls\s\[LinuxOnly\]\s\[NodeConformance\]\sshould\ssupport\ssysctls\swith\sslashes\sas\sseparator\s\[MinimumKubeletVersion\:1\.23\]$'
[FAILED] timed out waiting for the condition In [It] at: test/e2e/common/node/sysctl.go:206 @ 03/18/23 09:30:36.652 (from junit_01.xml)
> Enter [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - test/e2e/common/node/sysctl.go:37 @ 03/18/23 09:25:36.18 < Exit [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - test/e2e/common/node/sysctl.go:37 @ 03/18/23 09:25:36.18 (0s) > Enter [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - set up framework | framework.go:191 @ 03/18/23 09:25:36.18 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:211 @ 03/18/23 09:25:36.18 Mar 18 09:25:36.180: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename sysctl - test/e2e/framework/framework.go:250 @ 03/18/23 09:25:36.181 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 03/18/23 09:25:36.332 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:262 @ 03/18/23 09:25:36.424 < Exit [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - set up framework | framework.go:191 @ 03/18/23 09:25:36.51 (330ms) > Enter [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - test/e2e/framework/metrics/init/init.go:33 @ 03/18/23 09:25:36.51 < Exit [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - test/e2e/framework/metrics/init/init.go:33 @ 03/18/23 09:25:36.51 (0s) > Enter [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - test/e2e/common/node/sysctl.go:67 @ 03/18/23 09:25:36.51 < Exit [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - test/e2e/common/node/sysctl.go:67 @ 03/18/23 09:25:36.51 (0s) > Enter [It] should support sysctls with slashes as separator [MinimumKubeletVersion:1.23] - test/e2e/common/node/sysctl.go:186 @ 03/18/23 09:25:36.51 STEP: Creating a pod with the kernel/shm_rmid_forced sysctl - test/e2e/common/node/sysctl.go:198 @ 03/18/23 09:25:36.51 STEP: Watching for error events or started pod - test/e2e/common/node/sysctl.go:201 @ 03/18/23 09:25:36.569 
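The step above ("Creating a pod with the kernel/shm_rmid_forced sysctl") exercises the slash-separated sysctl name form that kubelets accept from 1.23 onward (hence the `[MinimumKubeletVersion:1.23]` tag). A hedged sketch of such a manifest — the pod name, container command, and image choice are illustrative, not the test's actual spec; only the sysctl name comes from this log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-slash-demo        # hypothetical name
spec:
  securityContext:
    sysctls:
    - name: kernel/shm_rmid_forced   # slash separator; kubelet >= 1.23
      value: "1"
  containers:
  - name: test-container
    image: registry.k8s.io/e2e-test-images/busybox:1.29-4
    command: ["/bin/sh", "-c", "sysctl kernel.shm_rmid_forced"]
  restartPolicy: Never
```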
Automatically polling progress: [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls with slashes as separator [MinimumKubeletVersion:1.23] (Spec Runtime: 5m0.33s)
  test/e2e/common/node/sysctl.go:186
  In [It] (Node Runtime: 5m0s) test/e2e/common/node/sysctl.go:186
  At [By Step] Watching for error events or started pod (Step Runtime: 4m59.941s) test/e2e/common/node/sysctl.go:201
Spec Goroutine
goroutine 714 [select]
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x7f09707e73b0, 0xc0038bac60}, 0xc000b8aed0, 0x2bc6eca?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:205
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7f09707e73b0, 0xc0038bac60}, 0x18?, 0xc001c98000?, 0xc0013c7bc0?)
    vendor/k8s.io/apimachinery/pkg/util/wait/poll.go:260
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7f09707e73b0, 0xc0038bac60}, 0xc000bbfe01?, 0xc001e7de90?, 0x3c659e7?)
    vendor/k8s.io/apimachinery/pkg/util/wait/poll.go:85
  k8s.io/kubernetes/test/e2e/framework/pod.(*PodClient).WaitForErrorEventOrSuccess(0xc000f284b0?, {0x7f09707e73b0?, 0xc0038bac60?}, 0xc0047e4820?)
    test/e2e/framework/pod/pod_client.go:261
> k8s.io/kubernetes/test/e2e/common/node.glob..func22.7({0x7f09707e73b0, 0xc0038bac60})
    test/e2e/common/node/sysctl.go:205
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x727dd60?, 0xc0038bac60})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:456
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:863
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:850
Mar 18 09:30:36.651: INFO: Unexpected error: <wait.errInterrupted>: timed out waiting for the condition { cause: <*errors.errorString | 0xc00017dbe0>{ s: "timed out waiting for the condition", }, }
[FAILED] timed out waiting for the condition In [It] at: test/e2e/common/node/sysctl.go:206 @ 03/18/23 09:30:36.652
< Exit [It] should support sysctls with slashes as separator [MinimumKubeletVersion:1.23] - test/e2e/common/node/sysctl.go:186 @ 03/18/23 09:30:36.652 (5m0.142s)
> Enter [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - test/e2e/framework/node/init/init.go:33 @ 03/18/23 09:30:36.652
Mar 18 09:30:36.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
< Exit [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - test/e2e/framework/node/init/init.go:33 @ 03/18/23 09:30:36.787 (136ms)
> Enter [DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - test/e2e/framework/metrics/init/init.go:35 @ 03/18/23 09:30:36.787
< Exit [DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - test/e2e/framework/metrics/init/init.go:35 @ 03/18/23 09:30:36.787 (0s)
> Enter [DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - dump namespaces | framework.go:209 @ 03/18/23 09:30:36.787
STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/18/23 09:30:36.787
STEP:
Collecting events from namespace "sysctl-2114". - test/e2e/framework/debug/dump.go:42 @ 03/18/23 09:30:36.788 STEP: Found 1 events. - test/e2e/framework/debug/dump.go:46 @ 03/18/23 09:30:36.913 Mar 18 09:30:36.913: INFO: At 2023-03-18 09:25:36 +0000 UTC - event for sysctl-cb4a37ed-32a7-4e6c-a5b3-531e69be0a6d: {default-scheduler } Scheduled: Successfully assigned sysctl-2114/sysctl-cb4a37ed-32a7-4e6c-a5b3-531e69be0a6d to e2e-9e86028ad1-674b9-minion-group-s3x0 Mar 18 09:30:36.961: INFO: POD NODE PHASE GRACE CONDITIONS Mar 18 09:30:36.961: INFO: sysctl-cb4a37ed-32a7-4e6c-a5b3-531e69be0a6d e2e-9e86028ad1-674b9-minion-group-s3x0 Succeeded [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-18 09:25:36 +0000 UTC PodCompleted } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-18 09:25:36 +0000 UTC PodCompleted } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-18 09:25:36 +0000 UTC PodCompleted } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-18 09:25:36 +0000 UTC }] Mar 18 09:30:36.961: INFO: Mar 18 09:30:37.105: INFO: Logging node info for node e2e-9e86028ad1-674b9-master Mar 18 09:30:37.166: INFO: Node Info: &Node{ObjectMeta:{e2e-9e86028ad1-674b9-master 267e5023-e569-49ff-9163-80ff52b2e553 14305 0 2023-03-18 09:22:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-18 09:22:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-03-18 09:27:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} 
{<nil>} 1 DecimalSI},ephemeral-storage: {{20617822208 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3848937472 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{18556039957 0} {<nil>} 18556039957 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3586793472 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:27:58 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 09:27:58 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:27:58 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:27:58 +0000 UTC,LastTransitionTime:2023-03-18 09:22:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.2,},NodeAddress{Type:ExternalIP,Address:34.83.71.20,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-master.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-master.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:42d0f710c860b2bd41ccdaf5ca173acd,SystemUUID:42d0f710-c860-b2bd-41cc-daf5ca173acd,BootID:16a230c9-f4cf-4c24-8530-18ce9126f638,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:121906531,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:113849341,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:59679728,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 
registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 18 09:30:37.167: INFO: Logging kubelet events for node e2e-9e86028ad1-674b9-master Mar 18 09:30:37.291: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-master Mar 18 09:30:37.456: INFO: kube-apiserver-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:30:37.456: INFO: Container kube-apiserver ready: true, restart count 0 Mar 18 09:30:37.456: INFO: kube-addon-manager-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:22:06 +0000 UTC (0+1 container statuses recorded) Mar 18 09:30:37.456: INFO: Container kube-addon-manager ready: true, restart count 0 Mar 18 09:30:37.456: INFO: l7-lb-controller-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:22:07 +0000 UTC (0+1 container statuses recorded) Mar 18 09:30:37.456: INFO: Container l7-lb-controller ready: true, restart count 2 Mar 18 09:30:37.456: INFO: metadata-proxy-v0.1-nnl66 started at 2023-03-18 09:22:38 +0000 UTC (0+2 container statuses recorded) Mar 18 09:30:37.456: INFO: Container metadata-proxy ready: true, restart count 0 Mar 18 09:30:37.456: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Mar 18 09:30:37.456: INFO: etcd-server-events-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 
UTC (0+1 container statuses recorded) Mar 18 09:30:37.456: INFO: Container etcd-container ready: true, restart count 0 Mar 18 09:30:37.456: INFO: konnectivity-server-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:30:37.456: INFO: Container konnectivity-server-container ready: true, restart count 0 Mar 18 09:30:37.456: INFO: kube-controller-manager-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:30:37.456: INFO: Container kube-controller-manager ready: true, restart count 1 Mar 18 09:30:37.456: INFO: kube-scheduler-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:30:37.456: INFO: Container kube-scheduler ready: true, restart count 0 Mar 18 09:30:37.456: INFO: etcd-server-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:30:37.456: INFO: Container etcd-container ready: true, restart count 0 Mar 18 09:30:37.732: INFO: Latency metrics for node e2e-9e86028ad1-674b9-master Mar 18 09:30:37.732: INFO: Logging node info for node e2e-9e86028ad1-674b9-minion-group-6qbb Mar 18 09:30:37.789: INFO: Node Info: &Node{ObjectMeta:{e2e-9e86028ad1-674b9-minion-group-6qbb 8c1a6f16-41d1-4196-bf43-efe27f8d8a66 20241 0 2023-03-18 09:22:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-minion-group-6qbb kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:e2e-9e86028ad1-674b9-minion-group-6qbb topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] 
map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-03-18 09:22:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-03-18 09:22:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2023-03-18 09:27:55 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-03-18 09:29:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-03-18 09:29:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-minion-group-6qbb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 
DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103865303040 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7806021632 0} {<nil>} 7623068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93478772582 0} {<nil>} 93478772582 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7543877632 0} {<nil>} 7367068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-03-18 09:27:54 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-03-18 09:27:54 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-03-18 09:27:54 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-03-18 09:27:54 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-03-18 09:27:54 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-03-18 09:27:54 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning 
properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:29:50 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 09:29:50 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:29:50 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:29:50 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.3,},NodeAddress{Type:ExternalIP,Address:35.197.71.72,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-minion-group-6qbb.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-minion-group-6qbb.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8effcb9d241b10eacb89037ecd76b44c,SystemUUID:8effcb9d-241b-10ea-cb89-037ecd76b44c,BootID:8ef64925-c34b-4653-8945-c2170edc6327,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:72680802,},ContainerImage{Names:[registry.k8s.io/node-problem-detector/node-problem-detector@sha256:c4a75e50c3ee30daa78b7149de781f66236885850b1ea7b0c1a062af5019e019 registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.9],SizeBytes:56740357,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e 
registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:e468dddcd275163a042ab297b2d8c2aca50d5e148d2d22f3b6ba119e2f31fa79 registry.k8s.io/sig-storage/csi-provisioner:v3.4.0],SizeBytes:27427836,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[gcr.io/k8s-authenticated-test/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 
gcr.io/k8s-authenticated-test/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:9b2d9abda017c28b12f84a344f57cd73fbdb6c2bd7dd5bdd5018246ad1093ba6 registry.k8s.io/sig-storage/hostpathplugin:v1.11.0],SizeBytes:18233005,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns:v1.10.1],SizeBytes:16190758,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/build-image/distroless-iptables@sha256:13ac9b3c476d7290a4451d65d09d6016a2cb89836ffbfa4eae55b72731a22080 registry.k8s.io/build-image/distroless-iptables:v0.2.2],SizeBytes:7729580,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 18 09:30:37.790: INFO: Logging kubelet events for node e2e-9e86028ad1-674b9-minion-group-6qbb Mar 18 09:30:37.845: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-minion-group-6qbb Mar 18 09:30:37.970: INFO: external-provisioner-kg2wk started at 2023-03-18 09:30:26 +0000 UTC (0+1 container statuses recorded) Mar 18 09:30:37.970: INFO: Container nfs-provisioner ready: true, restart count 0 Mar 18 09:30:37.970: INFO: successful-jobs-history-limit-27985530-q4h8f started at 2023-03-18 09:30:00 +0000 UTC (0+1 container statuses recorded) Mar 18 09:30:37.970: INFO: Container c ready: false, restart count 0 Mar 18 09:30:37.970: INFO: hostexec-e2e-9e86028ad1-674b9-minion-group-6qbb-vmktp started at 2023-03-18 09:30:27 +0000 UTC (0+1 container statuses recorded) Mar 18 09:30:37.970: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:30:37.970: INFO: pod-subpath-test-inlinevolume-f4gn started at 2023-03-18 09:30:35 +0000 UTC (2+2 container statuses recorded) Mar 18 09:30:37.970: INFO: Init container init-volume-inlinevolume-f4gn ready: true, restart count 0 Mar 18 09:30:37.970: INFO: Init container test-init-subpath-inlinevolume-f4gn ready: false, restart count 0 Mar 18 09:30:37.970: INFO: Container test-container-subpath-inlinevolume-f4gn ready: 
false, restart count 0
Mar 18 09:30:37.970: INFO: Container test-container-volume-inlinevolume-f4gn ready: false, restart count 0
Mar 18 09:30:37.970: INFO: nfs-server started at 2023-03-18 09:30:28 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:30:37.970: INFO: Container nfs-server ready: true, restart count 0
Mar 18 09:30:37.970: INFO: csi-hostpathplugin-0 started at 2023-03-18 09:30:37 +0000 UTC (0+7 container statuses recorded)
Mar 18 09:30:37.970: INFO: Container csi-attacher ready: false, restart count 0
Mar 18 09:30:37.970: INFO: Container csi-provisioner ready: false, restart count 0
Mar 18 09:30:37.970: INFO: Container csi-resizer ready: false, restart count 0
Mar 18 09:30:37.970: INFO: Container csi-snapshotter ready: false, restart count 0
Mar 18 09:30:37.970: INFO: Container hostpath ready: false, restart count 0
Mar 18 09:30:37.970: INFO: Container liveness-probe ready: false, restart count 0
Mar 18 09:30:37.970: INFO: Container node-driver-registrar ready: false, restart count 0
Mar 18 09:30:37.970: INFO: httpd-deployment-55bcbdf6f4-pnzcb started at 2023-03-18 09:30:34 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:30:37.970: INFO: Container httpd ready: true, restart count 0
Mar 18 09:30:37.970: INFO: npd-v0.8.9-fhnmg started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:30:37.970: INFO: Container node-problem-detector ready: true, restart count 0
Mar 18 09:30:37.970: INFO: pod-prestop-hook-361791a4-5a0c-4624-8023-e1f35b5df228 started at 2023-03-18 09:30:30 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:30:37.970: INFO: Container nginx ready: true, restart count 0
Mar 18 09:30:37.970: INFO: pod-secrets-43006392-7b9d-4d83-be0d-8bdd97591fc3 started at 2023-03-18 09:30:34 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:30:37.970: INFO: Container secret-volume-test ready: false, restart count 0
Mar 18 09:30:37.970: INFO: ss2-0 started at 2023-03-18 09:30:09 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:30:37.970: INFO: Container webserver ready: true, restart count 0
Mar 18 09:30:37.970: INFO: kube-proxy-e2e-9e86028ad1-674b9-minion-group-6qbb started at 2023-03-18 09:22:32 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:30:37.970: INFO: Container kube-proxy ready: true, restart count 0
Mar 18 09:30:37.970: INFO: pod-1f3d7d20-50a3-4797-ada6-8781efcdea8d started at 2023-03-18 09:30:30 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:30:37.970: INFO: Container write-pod ready: true, restart count 0
Mar 18 09:30:37.970: INFO: ss2-0 started at 2023-03-18 09:30:20 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:30:37.970: INFO: Container webserver ready: true, restart count 0
Mar 18 09:30:37.970: INFO: external-provisioner-jjt8q started at 2023-03-18 09:30:07 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:30:37.970: INFO: Container nfs-provisioner ready: true, restart count 0
Mar 18 09:30:37.970: INFO: pod-client started at 2023-03-18 09:30:11 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:30:37.970: INFO: Container pod-client ready: true, restart count 0
Mar 18 09:30:37.970: INFO: metadata-proxy-v0.1-tpg7b started at 2023-03-18 09:22:32 +0000 UTC (0+2 container statuses recorded)
Mar 18 09:30:37.970: INFO: Container metadata-proxy ready: true, restart count 0
Mar 18 09:30:37.970: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0
Mar 18 09:30:37.970: INFO: hostexec-e2e-9e86028ad1-674b9-minion-group-6qbb-g6mdg started at 2023-03-18 09:30:24 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:30:37.970: INFO: Container agnhost-container ready: true, restart count 0
Mar 18 09:30:37.970: INFO: pod-bf2ad7d4-7b23-4aca-8dd8-c6aa3303bf12 started at 2023-03-18 09:30:33 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:30:37.970: INFO: Container write-pod ready: true, restart count 0
Mar 18 09:30:37.970: INFO: pause-pod-1 started at 2023-03-18 09:30:12 +0000 UTC (0+1 container statuses recorded)
Mar 18
09:30:37.970: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:30:37.970: INFO: konnectivity-agent-zpvjh started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded) Mar 18 09:30:37.970: INFO: Container konnectivity-agent ready: true, restart count 0 Mar 18 09:30:37.970: INFO: webserver-pod started at 2023-03-18 09:28:16 +0000 UTC (0+1 container statuses recorded) Mar 18 09:30:37.970: INFO: Container agnhost ready: false, restart count 0 Mar 18 09:30:38.368: INFO: Latency metrics for node e2e-9e86028ad1-674b9-minion-group-6qbb Mar 18 09:30:38.368: INFO: Logging node info for node e2e-9e86028ad1-674b9-minion-group-l6p2 Mar 18 09:30:38.411: INFO: Node Info: &Node{ObjectMeta:{e2e-9e86028ad1-674b9-minion-group-l6p2 598ace61-5854-4ce4-a4e9-6965d58f0e1a 22268 0 2023-03-18 09:22:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-minion-group-l6p2 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:e2e-9e86028ad1-674b9-minion-group-l6p2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-18 09:22:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2023-03-18 09:27:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-03-18 09:29:05 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-03-18 09:30:38 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-minion-group-l6p2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103865303040 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7806021632 0} {<nil>} 7623068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93478772582 0} {<nil>} 93478772582 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7543877632 0} {<nil>} 7367068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-03-18 09:27:55 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-03-18 09:27:55 +0000 UTC,LastTransitionTime:2023-03-18 
09:22:53 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-03-18 09:27:55 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-03-18 09:27:55 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-03-18 09:27:55 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-03-18 09:27:55 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:30:38 +0000 UTC,LastTransitionTime:2023-03-18 09:22:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 09:30:38 +0000 UTC,LastTransitionTime:2023-03-18 09:22:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:30:38 +0000 UTC,LastTransitionTime:2023-03-18 09:22:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:30:38 +0000 UTC,LastTransitionTime:2023-03-18 09:22:36 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.5,},NodeAddress{Type:ExternalIP,Address:35.230.36.22,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-minion-group-l6p2.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-minion-group-l6p2.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c65da20697a4bfad03dfea306c4caca3,SystemUUID:c65da206-97a4-bfad-03df-ea306c4caca3,BootID:bab97861-0225-4291-912a-eb1db18f8ad7,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:72680802,},ContainerImage{Names:[registry.k8s.io/node-problem-detector/node-problem-detector@sha256:c4a75e50c3ee30daa78b7149de781f66236885850b1ea7b0c1a062af5019e019 
registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.9],SizeBytes:56740357,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:e468dddcd275163a042ab297b2d8c2aca50d5e148d2d22f3b6ba119e2f31fa79 registry.k8s.io/sig-storage/csi-provisioner:v3.4.0],SizeBytes:27427836,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 
registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/debian-base@sha256:ebda8587ec0f49eb88ee3a608ef018484908cbc5aa32556a0d78356088c185d4 registry.k8s.io/debian-base:v2.0.0],SizeBytes:21093264,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:9b2d9abda017c28b12f84a344f57cd73fbdb6c2bd7dd5bdd5018246ad1093ba6 registry.k8s.io/sig-storage/hostpathplugin:v1.11.0],SizeBytes:18233005,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns:v1.10.1],SizeBytes:16190758,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/apparmor-loader@sha256:c2ba46d9cf4549528f80d4850630b712372715e0c556d35d5c3016144365d882 
registry.k8s.io/e2e-test-images/apparmor-loader:1.4],SizeBytes:8695007,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 18 09:30:38.411: INFO: Logging kubelet events for node 
e2e-9e86028ad1-674b9-minion-group-l6p2
Mar 18 09:30:38.454: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-minion-group-l6p2
Mar 18 09:30:38.542: INFO: kube-proxy-e2e-9e86028ad1-674b9-minion-group-l6p2 started at 2023-03-18 09:22:35 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:30:38.542: INFO: Container kube-proxy ready: true, restart count 0
Mar 18 09:30:38.542: INFO: npd-v0.8.9-zdpdp started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:30:38.542: INFO: Container node-problem-detector ready: true, restart count 0
Mar 18 09:30:38.542: INFO: send-events-ea072554-2752-48da-811e-c2c81f3284b2 started at 2023-03-18 09:27:51 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:30:38.542: INFO: Container p ready: true, restart count 0
Mar 18 09:30:38.542: INFO: httpd-deployment-55bcbdf6f4-f8r2j started at 2023-03-18 09:30:35 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:30:38.542: INFO: Container httpd ready: true, restart count 0
Mar 18 09:30:38.542: INFO: local-client started at 2023-03-18 09:30:31 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:30:38.542: INFO: Container local-client ready: true, restart count 0
Mar 18 09:30:38.542: INFO: pod-handle-http-request started at 2023-03-18 09:30:35 +0000 UTC (0+2 container statuses recorded)
Mar 18 09:30:38.542: INFO: Container container-handle-http-request ready: true, restart count 0
Mar 18 09:30:38.542: INFO: Container container-handle-https-request ready: true, restart count 0
Mar 18 09:30:38.542: INFO: sysctl-8cef6c45-e7a0-4261-bdbf-02719d33b51f started at 2023-03-18 09:29:45 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:30:38.542: INFO: Container test-container ready: false, restart count 0
Mar 18 09:30:38.542: INFO: startup-f5b19b0b-a43f-4992-9848-32551cfe5c34 started at 2023-03-18 09:30:09 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:30:38.542: INFO: Container busybox ready: false, restart count 0
Mar 18 09:30:38.542: INFO: metrics-server-v0.5.2-57d47cbf5-gtdjb started at 2023-03-18 09:22:55 +0000 UTC (0+2 container statuses recorded)
Mar 18 09:30:38.542: INFO: Container metrics-server ready: true, restart count 0
Mar 18 09:30:38.542: INFO: Container metrics-server-nanny ready: true, restart count 0
Mar 18 09:30:38.542: INFO: hostexec-e2e-9e86028ad1-674b9-minion-group-l6p2-crx2f started at 2023-03-18 09:30:11 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:30:38.542: INFO: Container agnhost-container ready: true, restart count 0
Mar 18 09:30:38.542: INFO: inline-volume-tester2-s2zg4 started at 2023-03-18 09:30:19 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:30:38.542: INFO: Container csi-volume-tester ready: true, restart count 0
Mar 18 09:30:38.542: INFO: inline-volume-tester-x4m9w started at 2023-03-18 09:30:15 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:30:38.542: INFO: Container csi-volume-tester ready: true, restart count 0
Mar 18 09:30:38.542: INFO: ss2-1 started at 2023-03-18 09:30:16 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:30:38.542: INFO: Container webserver ready: true, restart count 0
Mar 18 09:30:38.542: INFO: hostexec-e2e-9e86028ad1-674b9-minion-group-l6p2-m99qt started at 2023-03-18 09:30:31 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:30:38.542: INFO: Container agnhost-container ready: true, restart count 0
Mar 18 09:30:38.542: INFO: konnectivity-agent-879m7 started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:30:38.542: INFO: Container konnectivity-agent ready: true, restart count 0
Mar 18 09:30:38.542: INFO: ss2-1 started at 2023-03-18 09:30:21 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:30:38.542: INFO: Container webserver ready: false, restart count 0
Mar 18 09:30:38.542: INFO: coredns-8f5847b64-8mvxr started at 2023-03-18 09:27:04 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:30:38.542: INFO: Container coredns ready: true, restart count 0
Mar 18 09:30:38.542:
INFO: hostexec-e2e-9e86028ad1-674b9-minion-group-l6p2-9zcvt started at 2023-03-18 09:30:24 +0000 UTC (0+1 container statuses recorded) Mar 18 09:30:38.542: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:30:38.542: INFO: metadata-proxy-v0.1-424sh started at 2023-03-18 09:22:36 +0000 UTC (0+2 container statuses recorded) Mar 18 09:30:38.542: INFO: Container metadata-proxy ready: true, restart count 0 Mar 18 09:30:38.542: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Mar 18 09:30:38.542: INFO: pod-configmaps-fe23b090-5b5d-4d50-b7e8-5c0073230f45 started at 2023-03-18 09:29:50 +0000 UTC (0+3 container statuses recorded) Mar 18 09:30:38.542: INFO: Container createcm-volume-test ready: true, restart count 0 Mar 18 09:30:38.542: INFO: Container delcm-volume-test ready: true, restart count 0 Mar 18 09:30:38.542: INFO: Container updcm-volume-test ready: true, restart count 0 Mar 18 09:30:38.542: INFO: webserver-pod started at 2023-03-18 09:30:10 +0000 UTC (0+1 container statuses recorded) Mar 18 09:30:38.542: INFO: Container agnhost ready: true, restart count 0 Mar 18 09:30:38.846: INFO: Latency metrics for node e2e-9e86028ad1-674b9-minion-group-l6p2 Mar 18 09:30:38.846: INFO: Logging node info for node e2e-9e86028ad1-674b9-minion-group-s3x0 Mar 18 09:30:38.888: INFO: Node Info: &Node{ObjectMeta:{e2e-9e86028ad1-674b9-minion-group-s3x0 4bd190bd-b287-42b8-bf6e-86b0dfbbe357 22044 0 2023-03-18 09:22:29 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-minion-group-s3x0 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:e2e-9e86028ad1-674b9-minion-group-s3x0 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-657":"e2e-9e86028ad1-674b9-minion-group-s3x0"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-18 09:22:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {node-problem-detector Update v1 2023-03-18 09:27:49 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-03-18 09:30:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2023-03-18 09:30:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-minion-group-s3x0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103865303040 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7806021632 0} {<nil>} 7623068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93478772582 0} {<nil>} 93478772582 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7543877632 0} {<nil>} 7367068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-03-18 09:27:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-03-18 09:27:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-03-18 09:27:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-03-18 09:27:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-03-18 09:27:49 
+0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-03-18 09:27:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:39 +0000 UTC,LastTransitionTime:2023-03-18 09:22:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:30:30 +0000 UTC,LastTransitionTime:2023-03-18 09:22:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 09:30:30 +0000 UTC,LastTransitionTime:2023-03-18 09:22:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:30:30 +0000 UTC,LastTransitionTime:2023-03-18 09:22:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:30:30 +0000 UTC,LastTransitionTime:2023-03-18 09:22:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.4,},NodeAddress{Type:ExternalIP,Address:35.197.75.196,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-minion-group-s3x0.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-minion-group-s3x0.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f48035c3fd56450624fe69b9577c2359,SystemUUID:f48035c3-fd56-4506-24fe-69b9577c2359,BootID:f7e24489-9538-4f10-8ca6-393d7dcc2190,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:72680802,},ContainerImage{Names:[registry.k8s.io/node-problem-detector/node-problem-detector@sha256:c4a75e50c3ee30daa78b7149de781f66236885850b1ea7b0c1a062af5019e019 registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.9],SizeBytes:56740357,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e 
registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:e468dddcd275163a042ab297b2d8c2aca50d5e148d2d22f3b6ba119e2f31fa79 registry.k8s.io/sig-storage/csi-provisioner:v3.4.0],SizeBytes:27427836,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 
registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[gcr.io/k8s-authenticated-test/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/k8s-authenticated-test/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:9b2d9abda017c28b12f84a344f57cd73fbdb6c2bd7dd5bdd5018246ad1093ba6 registry.k8s.io/sig-storage/hostpathplugin:v1.11.0],SizeBytes:18233005,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns:v1.10.1],SizeBytes:16190758,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef 
registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-657^78166178-c56f-11ed-893a-fec4fee75d99],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-657^78166178-c56f-11ed-893a-fec4fee75d99,DevicePath:,},},Config:nil,},} Mar 18 09:30:38.888: INFO: Logging kubelet events for node e2e-9e86028ad1-674b9-minion-group-s3x0 Mar 18 09:30:38.942: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-minion-group-s3x0 Mar 18 09:30:39.047: INFO: 
kube-dns-autoscaler-7b444c59c9-bfphp started at 2023-03-18 09:27:27 +0000 UTC (0+1 container statuses recorded) Mar 18 09:30:39.047: INFO: Container autoscaler ready: true, restart count 0 Mar 18 09:30:39.047: INFO: metadata-proxy-v0.1-5k8j4 started at 2023-03-18 09:22:30 +0000 UTC (0+2 container statuses recorded) Mar 18 09:30:39.047: INFO: Container metadata-proxy ready: true, restart count 0 Mar 18 09:30:39.047: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Mar 18 09:30:39.047: INFO: hostexec-e2e-9e86028ad1-674b9-minion-group-s3x0-c89rk started at 2023-03-18 09:30:32 +0000 UTC (0+1 container statuses recorded) Mar 18 09:30:39.047: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:30:39.047: INFO: kube-proxy-e2e-9e86028ad1-674b9-minion-group-s3x0 started at 2023-03-18 09:22:30 +0000 UTC (0+1 container statuses recorded) Mar 18 09:30:39.047: INFO: Container kube-proxy ready: true, restart count 0 Mar 18 09:30:39.047: INFO: server started at 2023-03-18 09:30:37 +0000 UTC (0+1 container statuses recorded) Mar 18 09:30:39.047: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:30:39.047: INFO: konnectivity-agent-hv8gl started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:30:39.047: INFO: Container konnectivity-agent ready: true, restart count 0 Mar 18 09:30:39.047: INFO: ss2-2 started at 2023-03-18 09:30:22 +0000 UTC (0+1 container statuses recorded) Mar 18 09:30:39.047: INFO: Container webserver ready: true, restart count 0 Mar 18 09:30:39.047: INFO: npd-v0.8.9-xsl94 started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:30:39.047: INFO: Container node-problem-detector ready: true, restart count 0 Mar 18 09:30:39.047: INFO: pod-update-e6a97e06-19a2-4486-8996-c46586cce3f9 started at 2023-03-18 09:30:34 +0000 UTC (0+1 container statuses recorded) Mar 18 09:30:39.047: INFO: Container pause ready: true, restart count 0 Mar 18 09:30:39.047: 
INFO: csi-hostpathplugin-0 started at 2023-03-18 09:30:04 +0000 UTC (0+7 container statuses recorded) Mar 18 09:30:39.047: INFO: Container csi-attacher ready: true, restart count 0 Mar 18 09:30:39.047: INFO: Container csi-provisioner ready: true, restart count 0 Mar 18 09:30:39.047: INFO: Container csi-resizer ready: true, restart count 0 Mar 18 09:30:39.047: INFO: Container csi-snapshotter ready: true, restart count 0 Mar 18 09:30:39.047: INFO: Container hostpath ready: true, restart count 0 Mar 18 09:30:39.047: INFO: Container liveness-probe ready: true, restart count 0 Mar 18 09:30:39.047: INFO: Container node-driver-registrar ready: true, restart count 0 Mar 18 09:30:39.047: INFO: pod-server-2 started at 2023-03-18 09:30:26 +0000 UTC (0+1 container statuses recorded) Mar 18 09:30:39.047: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:30:39.047: INFO: coredns-8f5847b64-6lvkh started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:30:39.047: INFO: Container coredns ready: true, restart count 0 Mar 18 09:30:39.047: INFO: l7-default-backend-856d874f49-wj97r started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:30:39.047: INFO: Container default-http-backend ready: true, restart count 0 Mar 18 09:30:39.047: INFO: pod-with-prestop-exec-hook started at 2023-03-18 09:30:37 +0000 UTC (0+1 container statuses recorded) Mar 18 09:30:39.047: INFO: Container pod-with-prestop-exec-hook ready: true, restart count 0 Mar 18 09:30:39.047: INFO: ss2-2 started at 2023-03-18 09:30:23 +0000 UTC (0+1 container statuses recorded) Mar 18 09:30:39.047: INFO: Container webserver ready: true, restart count 0 Mar 18 09:30:39.048: INFO: sysctl-cb4a37ed-32a7-4e6c-a5b3-531e69be0a6d started at 2023-03-18 09:25:36 +0000 UTC (0+1 container statuses recorded) Mar 18 09:30:39.048: INFO: Container test-container ready: false, restart count 0 Mar 18 09:30:39.048: INFO: volume-snapshot-controller-0 started at 
2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:30:39.048: INFO: Container volume-snapshot-controller ready: true, restart count 0 Mar 18 09:30:39.048: INFO: httpd-deployment-5cd84d4f9-f87tq started at 2023-03-18 09:30:36 +0000 UTC (0+1 container statuses recorded) Mar 18 09:30:39.048: INFO: Container httpd ready: true, restart count 0 Mar 18 09:30:39.343: INFO: Latency metrics for node e2e-9e86028ad1-674b9-minion-group-s3x0 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/18/23 09:30:39.343 (2.555s) < Exit [DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - dump namespaces | framework.go:209 @ 03/18/23 09:30:39.343 (2.556s) > Enter [DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - tear down framework | framework.go:206 @ 03/18/23 09:30:39.343 STEP: Destroying namespace "sysctl-2114" for this suite. - test/e2e/framework/framework.go:351 @ 03/18/23 09:30:39.343 < Exit [DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] - tear down framework | framework.go:206 @ 03/18/23 09:30:39.396 (53ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/18/23 09:30:39.397 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/18/23 09:30:39.397 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\scsi\-hostpath\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\sread\-write\-once\-pod\sshould\sblock\sa\ssecond\spod\sfrom\susing\san\sin\-use\sReadWriteOncePod\svolume\son\sthe\ssame\snode$'
[FAILED] failed to wait for FailedMount event for pod2: timed out waiting for the condition In [It] at: test/e2e/storage/testsuites/readwriteoncepod.go:232 @ 03/18/23 09:36:24.719
from junit_01.xml
> Enter [BeforeEach] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - test/e2e/storage/framework/testsuite.go:51 @ 03/18/23 09:31:04.721 < Exit [BeforeEach] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - test/e2e/storage/framework/testsuite.go:51 @ 03/18/23 09:31:04.721 (0s) > Enter [BeforeEach] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - set up framework | framework.go:191 @ 03/18/23 09:31:04.721 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:211 @ 03/18/23 09:31:04.721 Mar 18 09:31:04.721: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename read-write-once-pod - test/e2e/framework/framework.go:250 @ 03/18/23 09:31:04.722 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 03/18/23 09:31:04.918 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:262 @ 03/18/23 09:31:05.009 < Exit [BeforeEach] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - set up framework | framework.go:191 @ 03/18/23 09:31:05.137 (415ms) > Enter [BeforeEach] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - test/e2e/framework/metrics/init/init.go:33 @ 03/18/23 09:31:05.137 < Exit [BeforeEach] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - test/e2e/framework/metrics/init/init.go:33 @ 03/18/23 09:31:05.137 (0s) > Enter [BeforeEach] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - test/e2e/storage/testsuites/readwriteoncepod.go:127 @ 03/18/23 09:31:05.137 STEP: Building a driver namespace object, basename read-write-once-pod-7422 - test/e2e/storage/utils/utils.go:582 @ 03/18/23 09:31:05.137 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/storage/utils/utils.go:591 @ 03/18/23 09:31:05.35 STEP: deploying csi-hostpath driver - test/e2e/storage/drivers/csi.go:217 @ 03/18/23 09:31:05.437 Mar 18 
09:31:05.621: INFO: creating *v1.ServiceAccount: read-write-once-pod-7422-9722/csi-attacher Mar 18 09:31:05.672: INFO: creating *v1.ClusterRole: external-attacher-runner-read-write-once-pod-7422 Mar 18 09:31:05.672: INFO: Define cluster role external-attacher-runner-read-write-once-pod-7422 Mar 18 09:31:05.717: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-read-write-once-pod-7422 Mar 18 09:31:05.760: INFO: creating *v1.Role: read-write-once-pod-7422-9722/external-attacher-cfg-read-write-once-pod-7422 Mar 18 09:31:05.818: INFO: creating *v1.RoleBinding: read-write-once-pod-7422-9722/csi-attacher-role-cfg Mar 18 09:31:05.867: INFO: creating *v1.ServiceAccount: read-write-once-pod-7422-9722/csi-provisioner Mar 18 09:31:05.915: INFO: creating *v1.ClusterRole: external-provisioner-runner-read-write-once-pod-7422 Mar 18 09:31:05.915: INFO: Define cluster role external-provisioner-runner-read-write-once-pod-7422 Mar 18 09:31:05.962: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-read-write-once-pod-7422 Mar 18 09:31:06.008: INFO: creating *v1.Role: read-write-once-pod-7422-9722/external-provisioner-cfg-read-write-once-pod-7422 Mar 18 09:31:06.054: INFO: creating *v1.RoleBinding: read-write-once-pod-7422-9722/csi-provisioner-role-cfg Mar 18 09:31:06.108: INFO: creating *v1.ServiceAccount: read-write-once-pod-7422-9722/csi-snapshotter Mar 18 09:31:06.161: INFO: creating *v1.ClusterRole: external-snapshotter-runner-read-write-once-pod-7422 Mar 18 09:31:06.161: INFO: Define cluster role external-snapshotter-runner-read-write-once-pod-7422 Mar 18 09:31:06.210: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-read-write-once-pod-7422 Mar 18 09:31:06.263: INFO: creating *v1.Role: read-write-once-pod-7422-9722/external-snapshotter-leaderelection-read-write-once-pod-7422 Mar 18 09:31:06.320: INFO: creating *v1.RoleBinding: read-write-once-pod-7422-9722/external-snapshotter-leaderelection Mar 18 09:31:06.362: INFO: creating *v1.ServiceAccount: 
read-write-once-pod-7422-9722/csi-external-health-monitor-controller Mar 18 09:31:06.405: INFO: creating *v1.ClusterRole: external-health-monitor-controller-runner-read-write-once-pod-7422 Mar 18 09:31:06.405: INFO: Define cluster role external-health-monitor-controller-runner-read-write-once-pod-7422 Mar 18 09:31:06.450: INFO: creating *v1.ClusterRoleBinding: csi-external-health-monitor-controller-role-read-write-once-pod-7422 Mar 18 09:31:06.494: INFO: creating *v1.Role: read-write-once-pod-7422-9722/external-health-monitor-controller-cfg-read-write-once-pod-7422 Mar 18 09:31:06.544: INFO: creating *v1.RoleBinding: read-write-once-pod-7422-9722/csi-external-health-monitor-controller-role-cfg Mar 18 09:31:06.586: INFO: creating *v1.ServiceAccount: read-write-once-pod-7422-9722/csi-resizer Mar 18 09:31:06.629: INFO: creating *v1.ClusterRole: external-resizer-runner-read-write-once-pod-7422 Mar 18 09:31:06.629: INFO: Define cluster role external-resizer-runner-read-write-once-pod-7422 Mar 18 09:31:06.711: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-read-write-once-pod-7422 Mar 18 09:31:06.848: INFO: creating *v1.Role: read-write-once-pod-7422-9722/external-resizer-cfg-read-write-once-pod-7422 Mar 18 09:31:06.897: INFO: creating *v1.RoleBinding: read-write-once-pod-7422-9722/csi-resizer-role-cfg Mar 18 09:31:06.947: INFO: creating *v1.CSIDriver: csi-hostpath-read-write-once-pod-7422 Mar 18 09:31:06.997: INFO: creating *v1.ServiceAccount: read-write-once-pod-7422-9722/csi-hostpathplugin-sa Mar 18 09:31:07.048: INFO: creating *v1.ClusterRoleBinding: csi-hostpathplugin-attacher-cluster-role-read-write-once-pod-7422 Mar 18 09:31:07.092: INFO: creating *v1.ClusterRoleBinding: csi-hostpathplugin-health-monitor-controller-cluster-role-read-write-once-pod-7422 Mar 18 09:31:07.142: INFO: creating *v1.ClusterRoleBinding: csi-hostpathplugin-provisioner-cluster-role-read-write-once-pod-7422 Mar 18 09:31:07.201: INFO: creating *v1.ClusterRoleBinding: 
csi-hostpathplugin-resizer-cluster-role-read-write-once-pod-7422 Mar 18 09:31:07.282: INFO: creating *v1.ClusterRoleBinding: csi-hostpathplugin-snapshotter-cluster-role-read-write-once-pod-7422 Mar 18 09:31:07.333: INFO: creating *v1.RoleBinding: read-write-once-pod-7422-9722/csi-hostpathplugin-attacher-role Mar 18 09:31:07.377: INFO: creating *v1.RoleBinding: read-write-once-pod-7422-9722/csi-hostpathplugin-health-monitor-controller-role Mar 18 09:31:07.430: INFO: creating *v1.RoleBinding: read-write-once-pod-7422-9722/csi-hostpathplugin-provisioner-role Mar 18 09:31:07.481: INFO: creating *v1.RoleBinding: read-write-once-pod-7422-9722/csi-hostpathplugin-resizer-role Mar 18 09:31:07.524: INFO: creating *v1.RoleBinding: read-write-once-pod-7422-9722/csi-hostpathplugin-snapshotter-role Mar 18 09:31:07.567: INFO: creating *v1.StatefulSet: read-write-once-pod-7422-9722/csi-hostpathplugin Mar 18 09:31:07.615: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-read-write-once-pod-7422 < Exit [BeforeEach] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - test/e2e/storage/testsuites/readwriteoncepod.go:127 @ 03/18/23 09:31:07.67 (2.534s) > Enter [It] should block a second pod from using an in-use ReadWriteOncePod volume on the same node - test/e2e/storage/testsuites/readwriteoncepod.go:188 @ 03/18/23 09:31:07.67 Mar 18 09:31:07.670: INFO: Creating resource for dynamic PV Mar 18 09:31:07.670: INFO: Using claimSize:1Mi, test suite supported size:{ }, driver(csi-hostpath) supported size:{ } STEP: creating a StorageClass read-write-once-pod-7422wmxmn - test/e2e/storage/framework/volume_resource.go:102 @ 03/18/23 09:31:07.67 STEP: creating a claim - test/e2e/storage/framework/volume_resource.go:294 @ 03/18/23 09:31:07.717 Mar 18 09:31:07.717: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 18 09:31:07.762: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpath8t99n] to have phase Bound Mar 18 
09:31:07.803: INFO: PersistentVolumeClaim csi-hostpath8t99n found but phase is Pending instead of Bound. Mar 18 09:31:09.846: INFO: PersistentVolumeClaim csi-hostpath8t99n found but phase is Pending instead of Bound. Mar 18 09:31:11.888: INFO: PersistentVolumeClaim csi-hostpath8t99n found and phase=Bound (4.126052325s) Automatically polling progress: [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] read-write-once-pod should block a second pod from using an in-use ReadWriteOncePod volume on the same node (Spec Runtime: 5m2.95s) test/e2e/storage/testsuites/readwriteoncepod.go:188 In [It] (Node Runtime: 5m0.001s) test/e2e/storage/testsuites/readwriteoncepod.go:188 At [By Step] creating a claim (Step Runtime: 4m59.954s) test/e2e/storage/framework/volume_resource.go:294 Spec Goroutine goroutine 4440 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x7f957429a858, 0xc0048002d0}, 0xc005181f98, 0x2bc6eca?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:205 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7f957429a858, 0xc0048002d0}, 0x18?, 0x18?, 0x72b2630?) vendor/k8s.io/apimachinery/pkg/util/wait/poll.go:260 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7f957429a858, 0xc0048002d0}, 0xc0045ace10?, 0x18?, 0xc004d2c320?) vendor/k8s.io/apimachinery/pkg/util/wait/poll.go:175 k8s.io/kubernetes/test/e2e/framework/events.WaitTimeoutForEvent({0x7f957429a858, 0xc0048002d0}, {0x72b2630?, 0xc003fca680?}, {0xc0045ace10?, 0x0?}, {0xc004d2c320?, 0x0?}, {0x6c48a7d, 0x51}, ...) 
test/e2e/framework/events/events.go:37 > k8s.io/kubernetes/test/e2e/storage/testsuites.(*readWriteOncePodTestSuite).DefineTests.func5({0x7f957429a858, 0xc0048002d0}) test/e2e/storage/testsuites/readwriteoncepod.go:231 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x727dd60?, 0xc0048002d0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:456 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:863 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:850 Mar 18 09:36:24.719: INFO: Unexpected error: failed to wait for FailedMount event for pod2: <wait.errInterrupted>: timed out waiting for the condition { cause: <*errors.errorString | 0xc000241bb0>{ s: "timed out waiting for the condition", }, } [FAILED] failed to wait for FailedMount event for pod2: timed out waiting for the condition In [It] at: test/e2e/storage/testsuites/readwriteoncepod.go:232 @ 03/18/23 09:36:24.719 < Exit [It] should block a second pod from using an in-use ReadWriteOncePod volume on the same node - test/e2e/storage/testsuites/readwriteoncepod.go:188 @ 03/18/23 09:36:24.719 (5m17.049s) > Enter [AfterEach] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - test/e2e/framework/node/init/init.go:33 @ 03/18/23 09:36:24.719 Mar 18 09:36:24.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - test/e2e/framework/node/init/init.go:33 @ 03/18/23 09:36:24.902 (183ms) > Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - test/e2e/storage/testsuites/readwriteoncepod.go:129 @ 03/18/23 09:36:24.902 Mar 18 09:36:24.902: INFO: Deleting pod pod-9c52b100-d6e7-444a-b48f-2643e3956bf6 Mar 18 09:36:24.902: INFO: Deleting pod "pod-9c52b100-d6e7-444a-b48f-2643e3956bf6" in namespace 
"read-write-once-pod-7422" Mar 18 09:36:25.036: INFO: Wait up to 5m0s for pod "pod-9c52b100-d6e7-444a-b48f-2643e3956bf6" to be fully deleted Mar 18 09:36:29.343: INFO: Deleting pod pod-eba73012-ac52-4fd8-9433-5e3755c6e150 Mar 18 09:36:29.343: INFO: Deleting pod "pod-eba73012-ac52-4fd8-9433-5e3755c6e150" in namespace "read-write-once-pod-7422" Mar 18 09:36:29.455: INFO: Wait up to 5m0s for pod "pod-eba73012-ac52-4fd8-9433-5e3755c6e150" to be fully deleted Mar 18 09:36:31.582: INFO: Deleting volume csi-hostpath8t99n STEP: Deleting pvc - test/e2e/storage/framework/volume_resource.go:181 @ 03/18/23 09:36:31.582 Mar 18 09:36:31.582: INFO: Deleting PersistentVolumeClaim "csi-hostpath8t99n" Mar 18 09:36:31.633: INFO: Waiting up to 5m0s for PersistentVolume pvc-1d79c3c0-c904-434c-951e-dd0ba8471a90 to get deleted Mar 18 09:36:31.703: INFO: PersistentVolume pvc-1d79c3c0-c904-434c-951e-dd0ba8471a90 found and phase=Bound (70.282965ms) Mar 18 09:36:36.794: INFO: PersistentVolume pvc-1d79c3c0-c904-434c-951e-dd0ba8471a90 was removed STEP: Deleting sc - test/e2e/storage/framework/volume_resource.go:228 @ 03/18/23 09:36:36.794 < Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - test/e2e/storage/testsuites/readwriteoncepod.go:129 @ 03/18/23 09:36:36.915 (12.013s) > Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - test/e2e/storage/drivers/csi.go:289 @ 03/18/23 09:36:36.915 STEP: deleting the test namespace: read-write-once-pod-7422 - test/e2e/storage/drivers/csi.go:1015 @ 03/18/23 09:36:36.915 STEP: Collecting events from namespace "read-write-once-pod-7422". - test/e2e/framework/debug/dump.go:42 @ 03/18/23 09:36:36.915 STEP: Found 6 events. 
- test/e2e/framework/debug/dump.go:46 @ 03/18/23 09:36:37.011 Mar 18 09:36:37.011: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for csi-hostpath8t99n: { } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "csi-hostpath-read-write-once-pod-7422" or manually created by system administrator Mar 18 09:36:37.011: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-9c52b100-d6e7-444a-b48f-2643e3956bf6: { } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-1d79c3c0-c904-434c-951e-dd0ba8471a90" Mar 18 09:36:37.011: INFO: At 2023-03-18 09:31:09 +0000 UTC - event for csi-hostpath8t99n: {csi-hostpath-read-write-once-pod-7422_csi-hostpathplugin-0_9c5aadaf-1f91-45ec-b841-5408b8f643b4 } Provisioning: External provisioner is provisioning volume for claim "read-write-once-pod-7422/csi-hostpath8t99n" Mar 18 09:36:37.011: INFO: At 2023-03-18 09:31:09 +0000 UTC - event for csi-hostpath8t99n: {csi-hostpath-read-write-once-pod-7422_csi-hostpathplugin-0_9c5aadaf-1f91-45ec-b841-5408b8f643b4 } ProvisioningFailed: failed to provision volume with StorageClass "read-write-once-pod-7422wmxmn": error generating accessibility requirements: no available topology found Mar 18 09:36:37.011: INFO: At 2023-03-18 09:31:10 +0000 UTC - event for csi-hostpath8t99n: {csi-hostpath-read-write-once-pod-7422_csi-hostpathplugin-0_9c5aadaf-1f91-45ec-b841-5408b8f643b4 } ProvisioningSucceeded: Successfully provisioned volume pvc-1d79c3c0-c904-434c-951e-dd0ba8471a90 Mar 18 09:36:37.011: INFO: At 2023-03-18 09:31:12 +0000 UTC - event for pod-9c52b100-d6e7-444a-b48f-2643e3956bf6: {default-scheduler } Scheduled: Successfully assigned read-write-once-pod-7422/pod-9c52b100-d6e7-444a-b48f-2643e3956bf6 to e2e-9e86028ad1-674b9-minion-group-s3x0 Mar 18 09:36:37.065: INFO: POD NODE PHASE GRACE CONDITIONS Mar 18 09:36:37.065: INFO: Mar 18 09:36:37.177: INFO: Logging node info for node e2e-9e86028ad1-674b9-master Mar 18 09:36:37.280: INFO: Node Info: 
&Node{ObjectMeta:{e2e-9e86028ad1-674b9-master 267e5023-e569-49ff-9163-80ff52b2e553 31205 0 2023-03-18 09:22:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-18 09:22:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-03-18 09:33:04 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20617822208 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3848937472 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{18556039957 0} {<nil>} 18556039957 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3586793472 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:33:04 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 
09:33:04 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:33:04 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:33:04 +0000 UTC,LastTransitionTime:2023-03-18 09:22:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.2,},NodeAddress{Type:ExternalIP,Address:34.83.71.20,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-master.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-master.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:42d0f710c860b2bd41ccdaf5ca173acd,SystemUUID:42d0f710-c860-b2bd-41cc-daf5ca173acd,BootID:16a230c9-f4cf-4c24-8530-18ce9126f638,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:121906531,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:113849341,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 
registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:59679728,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 18 09:36:37.281: INFO: Logging kubelet events for node e2e-9e86028ad1-674b9-master Mar 18 09:36:37.379: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-master Mar 18 09:36:37.605: INFO: etcd-server-events-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:37.605: INFO: Container etcd-container ready: true, restart count 0 Mar 18 09:36:37.605: INFO: kube-apiserver-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 
18 09:36:37.605: INFO: Container kube-apiserver ready: true, restart count 0 Mar 18 09:36:37.605: INFO: kube-addon-manager-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:22:06 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:37.605: INFO: Container kube-addon-manager ready: true, restart count 0 Mar 18 09:36:37.605: INFO: l7-lb-controller-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:22:07 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:37.605: INFO: Container l7-lb-controller ready: true, restart count 2 Mar 18 09:36:37.605: INFO: metadata-proxy-v0.1-nnl66 started at 2023-03-18 09:22:38 +0000 UTC (0+2 container statuses recorded) Mar 18 09:36:37.605: INFO: Container metadata-proxy ready: true, restart count 0 Mar 18 09:36:37.605: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Mar 18 09:36:37.605: INFO: etcd-server-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:37.605: INFO: Container etcd-container ready: true, restart count 0 Mar 18 09:36:37.605: INFO: konnectivity-server-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:37.605: INFO: Container konnectivity-server-container ready: true, restart count 0 Mar 18 09:36:37.605: INFO: kube-controller-manager-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:37.605: INFO: Container kube-controller-manager ready: true, restart count 1 Mar 18 09:36:37.605: INFO: kube-scheduler-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:37.605: INFO: Container kube-scheduler ready: true, restart count 0 Mar 18 09:36:37.939: INFO: Latency metrics for node e2e-9e86028ad1-674b9-master Mar 18 09:36:37.939: INFO: Logging node info for node e2e-9e86028ad1-674b9-minion-group-6qbb Mar 18 09:36:37.993: INFO: Node Info: 
&Node{ObjectMeta:{e2e-9e86028ad1-674b9-minion-group-6qbb 8c1a6f16-41d1-4196-bf43-efe27f8d8a66 42639 0 2023-03-18 09:22:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-minion-group-6qbb kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:e2e-9e86028ad1-674b9-minion-group-6qbb topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-8337":"e2e-9e86028ad1-674b9-minion-group-6qbb","csi-mock-csi-mock-volumes-expansion-4044":"e2e-9e86028ad1-674b9-minion-group-6qbb"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-03-18 09:22:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-03-18 09:22:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2023-03-18 09:36:17 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-03-18 09:36:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2023-03-18 09:36:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-minion-group-6qbb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103865303040 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7806021632 0} {<nil>} 7623068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93478772582 0} {<nil>} 93478772582 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7543877632 0} {<nil>} 7367068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-03-18 09:36:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:True,LastHeartbeatTime:2023-03-18 09:36:16 
+0000 UTC,LastTransitionTime:2023-03-18 09:31:15 +0000 UTC,Reason:DockerHung,Message:kernel: INFO: task docker:12345 blocked for more than 120 seconds.,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-03-18 09:36:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:29 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:29 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:29 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:36:29 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.3,},NodeAddress{Type:ExternalIP,Address:35.197.71.72,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-minion-group-6qbb.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-minion-group-6qbb.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8effcb9d241b10eacb89037ecd76b44c,SystemUUID:8effcb9d-241b-10ea-cb89-037ecd76b44c,BootID:8ef64925-c34b-4653-8945-c2170edc6327,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:72680802,},ContainerImage{Names:[registry.k8s.io/node-problem-detector/node-problem-detector@sha256:c4a75e50c3ee30daa78b7149de781f66236885850b1ea7b0c1a062af5019e019 
registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.9],SizeBytes:56740357,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:e468dddcd275163a042ab297b2d8c2aca50d5e148d2d22f3b6ba119e2f31fa79 registry.k8s.io/sig-storage/csi-provisioner:v3.4.0],SizeBytes:27427836,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 
registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[gcr.io/k8s-authenticated-test/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/k8s-authenticated-test/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:9b2d9abda017c28b12f84a344f57cd73fbdb6c2bd7dd5bdd5018246ad1093ba6 registry.k8s.io/sig-storage/hostpathplugin:v1.11.0],SizeBytes:18233005,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns:v1.10.1],SizeBytes:16190758,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef 
registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/build-image/distroless-iptables@sha256:13ac9b3c476d7290a4451d65d09d6016a2cb89836ffbfa4eae55b72731a22080 registry.k8s.io/build-image/distroless-iptables:v0.2.2],SizeBytes:7729580,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 
registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-expansion-4044^33a907de-c570-11ed-ab04-f6602f593d98],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-expansion-4044^33a907de-c570-11ed-ab04-f6602f593d98,DevicePath:,},},Config:nil,},} Mar 18 09:36:37.993: INFO: Logging kubelet events for node e2e-9e86028ad1-674b9-minion-group-6qbb Mar 18 09:36:38.064: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-minion-group-6qbb Mar 18 09:36:38.196: INFO: webserver-pod started at 2023-03-18 09:36:14 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:38.196: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:36:38.196: INFO: webserver-deployment-67bd4bf6dc-nhhj7 started at 2023-03-18 09:36:36 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:38.196: INFO: Container httpd ready: false, restart count 0 Mar 18 09:36:38.196: INFO: npd-v0.8.9-fhnmg started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:38.196: INFO: Container node-problem-detector ready: true, restart count 0 Mar 18 09:36:38.196: INFO: webserver-deployment-67bd4bf6dc-j4vgs started at 2023-03-18 09:36:36 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:38.196: INFO: Container httpd ready: false, restart count 0 Mar 18 09:36:38.196: INFO: frontend-5b6f6d589f-snsrv started at 2023-03-18 09:36:34 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:38.196: INFO: Container guestbook-frontend ready: true, restart count 0 Mar 18 09:36:38.196: INFO: kube-proxy-e2e-9e86028ad1-674b9-minion-group-6qbb started at 2023-03-18 09:22:32 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:38.196: INFO: Container kube-proxy ready: true, restart count 0 Mar 18 09:36:38.196: INFO: pod-subpath-test-inlinevolume-zx84 started at 2023-03-18 09:36:34 +0000 UTC (2+1 container statuses recorded) Mar 18 09:36:38.196: INFO: Init container 
init-volume-inlinevolume-zx84 ready: true, restart count 0 Mar 18 09:36:38.196: INFO: Init container test-init-volume-inlinevolume-zx84 ready: false, restart count 0 Mar 18 09:36:38.196: INFO: Container test-container-subpath-inlinevolume-zx84 ready: false, restart count 0 Mar 18 09:36:38.196: INFO: external-provisioner-lgz4q started at 2023-03-18 09:36:13 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:38.196: INFO: Container nfs-provisioner ready: false, restart count 0 Mar 18 09:36:38.196: INFO: metadata-proxy-v0.1-tpg7b started at 2023-03-18 09:22:32 +0000 UTC (0+2 container statuses recorded) Mar 18 09:36:38.196: INFO: Container metadata-proxy ready: true, restart count 0 Mar 18 09:36:38.196: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Mar 18 09:36:38.196: INFO: pod1 started at 2023-03-18 09:36:31 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:38.196: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:36:38.196: INFO: csi-mockplugin-0 started at 2023-03-18 09:35:15 +0000 UTC (0+3 container statuses recorded) Mar 18 09:36:38.196: INFO: Container csi-provisioner ready: true, restart count 0 Mar 18 09:36:38.196: INFO: Container driver-registrar ready: true, restart count 0 Mar 18 09:36:38.196: INFO: Container mock ready: true, restart count 0 Mar 18 09:36:38.196: INFO: csi-mockplugin-attacher-0 started at 2023-03-18 09:35:15 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:38.196: INFO: Container csi-attacher ready: true, restart count 0 Mar 18 09:36:38.196: INFO: liveness-6e42f777-5053-4740-856d-77370ed5796a started at 2023-03-18 09:31:03 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:38.196: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:36:38.196: INFO: execpodlnjr5 started at 2023-03-18 09:36:33 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:38.196: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:36:38.196: INFO: 
pod-terminate-status-2-13 started at 2023-03-18 09:36:36 +0000 UTC (1+1 container statuses recorded) Mar 18 09:36:38.196: INFO: Init container fail ready: false, restart count 0 Mar 18 09:36:38.196: INFO: Container blocked ready: false, restart count 0 Mar 18 09:36:38.196: INFO: webserver-deployment-67bd4bf6dc-gssnm started at 2023-03-18 09:36:36 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:38.196: INFO: Container httpd ready: false, restart count 0 Mar 18 09:36:38.196: INFO: konnectivity-agent-zpvjh started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:38.196: INFO: Container konnectivity-agent ready: true, restart count 0 Mar 18 09:36:38.196: INFO: service-headless-toggled-666m5 started at 2023-03-18 09:35:52 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:38.196: INFO: Container service-headless-toggled ready: true, restart count 0 Mar 18 09:36:38.196: INFO: csi-hostpathplugin-0 started at 2023-03-18 09:34:45 +0000 UTC (0+7 container statuses recorded) Mar 18 09:36:38.196: INFO: Container csi-attacher ready: true, restart count 0 Mar 18 09:36:38.196: INFO: Container csi-provisioner ready: true, restart count 0 Mar 18 09:36:38.196: INFO: Container csi-resizer ready: true, restart count 0 Mar 18 09:36:38.196: INFO: Container csi-snapshotter ready: true, restart count 0 Mar 18 09:36:38.196: INFO: Container hostpath ready: true, restart count 0 Mar 18 09:36:38.196: INFO: Container liveness-probe ready: true, restart count 0 Mar 18 09:36:38.196: INFO: Container node-driver-registrar ready: true, restart count 0 Mar 18 09:36:38.196: INFO: webserver-deployment-67bd4bf6dc-sr8xr started at 2023-03-18 09:36:36 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:38.196: INFO: Container httpd ready: false, restart count 0 Mar 18 09:36:38.196: INFO: busybox-07255980-96b4-4e2e-af19-fb342b60f84d started at 2023-03-18 09:33:47 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:38.196: INFO: Container busybox 
ready: true, restart count 0 Mar 18 09:36:38.196: INFO: agnhost-replica-dc6f7f69c-g2c9t started at 2023-03-18 09:36:35 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:38.196: INFO: Container replica ready: true, restart count 0 Mar 18 09:36:38.196: INFO: netserver-0 started at 2023-03-18 09:34:34 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:38.196: INFO: Container webserver ready: true, restart count 0 Mar 18 09:36:38.196: INFO: service-headless-f7r95 started at 2023-03-18 09:35:37 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:38.196: INFO: Container service-headless ready: true, restart count 0 Mar 18 09:36:38.196: INFO: pod-ephm-test-secret-tl6g started at 2023-03-18 09:35:59 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:38.196: INFO: Container test-container-subpath-secret-tl6g ready: false, restart count 0 Mar 18 09:36:38.196: INFO: csi-mockplugin-resizer-0 started at 2023-03-18 09:35:15 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:38.196: INFO: Container csi-resizer ready: true, restart count 0 Mar 18 09:36:38.196: INFO: external-provisioner-4ft2z started at 2023-03-18 09:35:45 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:38.196: INFO: Container nfs-provisioner ready: true, restart count 0 Mar 18 09:36:38.196: INFO: netserver-0 started at 2023-03-18 09:36:28 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:38.196: INFO: Container webserver ready: false, restart count 0 Mar 18 09:36:38.845: INFO: Latency metrics for node e2e-9e86028ad1-674b9-minion-group-6qbb Mar 18 09:36:38.845: INFO: Logging node info for node e2e-9e86028ad1-674b9-minion-group-l6p2 Mar 18 09:36:38.926: INFO: Node Info: &Node{ObjectMeta:{e2e-9e86028ad1-674b9-minion-group-l6p2 598ace61-5854-4ce4-a4e9-6965d58f0e1a 42532 0 2023-03-18 09:22:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true 
failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-minion-group-l6p2 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:e2e-9e86028ad1-674b9-minion-group-l6p2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-9328":"e2e-9e86028ad1-674b9-minion-group-l6p2","csi-hostpath-provisioning-2882":"e2e-9e86028ad1-674b9-minion-group-l6p2"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-18 09:22:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2023-03-18 09:36:19 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-03-18 09:36:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2023-03-18 09:36:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-minion-group-l6p2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103865303040 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7806021632 0} {<nil>} 7623068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93478772582 0} {<nil>} 93478772582 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7543877632 0} {<nil>} 7367068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:19 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:19 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:True,LastHeartbeatTime:2023-03-18 09:36:19 +0000 UTC,LastTransitionTime:2023-03-18 09:31:17 +0000 UTC,Reason:DockerHung,Message:kernel: INFO: task docker:12345 blocked for more than 120 seconds.,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-03-18 09:36:19 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-03-18 
09:36:19 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:19 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:26 +0000 UTC,LastTransitionTime:2023-03-18 09:22:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:26 +0000 UTC,LastTransitionTime:2023-03-18 09:22:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:26 +0000 UTC,LastTransitionTime:2023-03-18 09:22:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:36:26 +0000 UTC,LastTransitionTime:2023-03-18 09:22:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.5,},NodeAddress{Type:ExternalIP,Address:35.230.36.22,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-minion-group-l6p2.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-minion-group-l6p2.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c65da20697a4bfad03dfea306c4caca3,SystemUUID:c65da206-97a4-bfad-03df-ea306c4caca3,BootID:bab97861-0225-4291-912a-eb1db18f8ad7,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:72680802,},ContainerImage{Names:[registry.k8s.io/node-problem-detector/node-problem-detector@sha256:c4a75e50c3ee30daa78b7149de781f66236885850b1ea7b0c1a062af5019e019 
registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.9],SizeBytes:56740357,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:e468dddcd275163a042ab297b2d8c2aca50d5e148d2d22f3b6ba119e2f31fa79 registry.k8s.io/sig-storage/csi-provisioner:v3.4.0],SizeBytes:27427836,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 
registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/debian-base@sha256:ebda8587ec0f49eb88ee3a608ef018484908cbc5aa32556a0d78356088c185d4 registry.k8s.io/debian-base:v2.0.0],SizeBytes:21093264,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:9b2d9abda017c28b12f84a344f57cd73fbdb6c2bd7dd5bdd5018246ad1093ba6 registry.k8s.io/sig-storage/hostpathplugin:v1.11.0],SizeBytes:18233005,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns:v1.10.1],SizeBytes:16190758,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/apparmor-loader@sha256:c2ba46d9cf4549528f80d4850630b712372715e0c556d35d5c3016144365d882 
registry.k8s.io/e2e-test-images/apparmor-loader:1.4],SizeBytes:8695007,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-9328^5033a820-c570-11ed-9318-269e8ba8d779 
kubernetes.io/csi/csi-hostpath-provisioning-2882^59d2be7e-c570-11ed-abe0-ce225837d139],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-9328^5033a820-c570-11ed-9318-269e8ba8d779,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-2882^59d2be7e-c570-11ed-abe0-ce225837d139,DevicePath:,},},Config:nil,},} Mar 18 09:36:38.927: INFO: Logging kubelet events for node e2e-9e86028ad1-674b9-minion-group-l6p2 Mar 18 09:36:38.998: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-minion-group-l6p2 Mar 18 09:36:39.127: INFO: webserver-deployment-67bd4bf6dc-vw52m started at 2023-03-18 09:36:36 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.127: INFO: Container httpd ready: true, restart count 0 Mar 18 09:36:39.127: INFO: host-test-container-pod started at 2023-03-18 09:34:45 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.127: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:36:39.127: INFO: pause-pod-1 started at 2023-03-18 09:36:20 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.127: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:36:39.127: INFO: konnectivity-agent-879m7 started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.127: INFO: Container konnectivity-agent ready: true, restart count 0 Mar 18 09:36:39.127: INFO: liveness-5bb71f0a-1103-443e-978a-d66becc64152 started at 2023-03-18 09:36:22 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.127: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:36:39.127: INFO: probe-test-6dfebd7e-5219-4c60-9c42-d57cbebf15ca started at 2023-03-18 09:36:22 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.127: INFO: Container probe-test-6dfebd7e-5219-4c60-9c42-d57cbebf15ca ready: false, restart count 0 Mar 18 09:36:39.127: INFO: coredns-8f5847b64-8mvxr started at 2023-03-18 09:27:04 +0000 
UTC (0+1 container statuses recorded) Mar 18 09:36:39.127: INFO: Container coredns ready: true, restart count 0 Mar 18 09:36:39.127: INFO: pod-terminate-status-0-13 started at 2023-03-18 09:36:37 +0000 UTC (1+1 container statuses recorded) Mar 18 09:36:39.127: INFO: Init container fail ready: false, restart count 0 Mar 18 09:36:39.127: INFO: Container blocked ready: false, restart count 0 Mar 18 09:36:39.127: INFO: metadata-proxy-v0.1-424sh started at 2023-03-18 09:22:36 +0000 UTC (0+2 container statuses recorded) Mar 18 09:36:39.127: INFO: Container metadata-proxy ready: true, restart count 0 Mar 18 09:36:39.127: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Mar 18 09:36:39.127: INFO: kube-proxy-e2e-9e86028ad1-674b9-minion-group-l6p2 started at 2023-03-18 09:22:35 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.127: INFO: Container kube-proxy ready: true, restart count 0 Mar 18 09:36:39.127: INFO: npd-v0.8.9-zdpdp started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.127: INFO: Container node-problem-detector ready: true, restart count 0 Mar 18 09:36:39.127: INFO: csi-hostpathplugin-0 started at 2023-03-18 09:36:06 +0000 UTC (0+7 container statuses recorded) Mar 18 09:36:39.127: INFO: Container csi-attacher ready: true, restart count 0 Mar 18 09:36:39.127: INFO: Container csi-provisioner ready: true, restart count 0 Mar 18 09:36:39.127: INFO: Container csi-resizer ready: true, restart count 0 Mar 18 09:36:39.127: INFO: Container csi-snapshotter ready: true, restart count 0 Mar 18 09:36:39.127: INFO: Container hostpath ready: true, restart count 0 Mar 18 09:36:39.127: INFO: Container liveness-probe ready: true, restart count 0 Mar 18 09:36:39.127: INFO: Container node-driver-registrar ready: true, restart count 0 Mar 18 09:36:39.127: INFO: test-container-pod started at 2023-03-18 09:34:45 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.127: INFO: Container webserver ready: true, 
restart count 0 Mar 18 09:36:39.127: INFO: csi-hostpathplugin-0 started at 2023-03-18 09:36:22 +0000 UTC (0+7 container statuses recorded) Mar 18 09:36:39.127: INFO: Container csi-attacher ready: true, restart count 0 Mar 18 09:36:39.127: INFO: Container csi-provisioner ready: true, restart count 0 Mar 18 09:36:39.127: INFO: Container csi-resizer ready: true, restart count 0 Mar 18 09:36:39.127: INFO: Container csi-snapshotter ready: true, restart count 0 Mar 18 09:36:39.127: INFO: Container hostpath ready: true, restart count 0 Mar 18 09:36:39.127: INFO: Container liveness-probe ready: true, restart count 0 Mar 18 09:36:39.127: INFO: Container node-driver-registrar ready: true, restart count 0 Mar 18 09:36:39.127: INFO: webserver-deployment-67bd4bf6dc-wp7k8 started at 2023-03-18 09:36:36 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.127: INFO: Container httpd ready: true, restart count 0 Mar 18 09:36:39.127: INFO: netserver-1 started at 2023-03-18 09:36:28 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.127: INFO: Container webserver ready: false, restart count 0 Mar 18 09:36:39.127: INFO: webserver-deployment-67bd4bf6dc-pv5t4 started at 2023-03-18 09:36:36 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.127: INFO: Container httpd ready: true, restart count 0 Mar 18 09:36:39.127: INFO: netserver-1 started at 2023-03-18 09:34:35 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.127: INFO: Container webserver ready: true, restart count 0 Mar 18 09:36:39.127: INFO: frontend-5b6f6d589f-prwrp started at 2023-03-18 09:36:34 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.127: INFO: Container guestbook-frontend ready: true, restart count 0 Mar 18 09:36:39.127: INFO: pod-subpath-test-dynamicpv-z9lf started at 2023-03-18 09:36:26 +0000 UTC (1+1 container statuses recorded) Mar 18 09:36:39.127: INFO: Init container init-volume-dynamicpv-z9lf ready: true, restart count 0 Mar 18 09:36:39.127: INFO: Container 
test-container-subpath-dynamicpv-z9lf ready: true, restart count 0 Mar 18 09:36:39.127: INFO: inline-volume-tester-cpczh started at 2023-03-18 09:36:09 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.127: INFO: Container csi-volume-tester ready: true, restart count 0 Mar 18 09:36:39.127: INFO: service-headless-toggled-2l42t started at 2023-03-18 09:35:52 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.127: INFO: Container service-headless-toggled ready: true, restart count 0 Mar 18 09:36:39.127: INFO: metrics-server-v0.5.2-57d47cbf5-gtdjb started at 2023-03-18 09:22:55 +0000 UTC (0+2 container statuses recorded) Mar 18 09:36:39.127: INFO: Container metrics-server ready: true, restart count 0 Mar 18 09:36:39.127: INFO: Container metrics-server-nanny ready: true, restart count 0 Mar 18 09:36:39.127: INFO: hostexec-e2e-9e86028ad1-674b9-minion-group-l6p2-2dzpt started at 2023-03-18 09:36:30 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.127: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 09:36:39.127: INFO: rs-spjn5 started at 2023-03-18 09:36:18 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.127: INFO: Container donothing ready: false, restart count 0 Mar 18 09:36:39.127: INFO: service-headless-d5crn started at 2023-03-18 09:35:37 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.127: INFO: Container service-headless ready: true, restart count 0 Mar 18 09:36:39.516: INFO: Latency metrics for node e2e-9e86028ad1-674b9-minion-group-l6p2 Mar 18 09:36:39.516: INFO: Logging node info for node e2e-9e86028ad1-674b9-minion-group-s3x0 Mar 18 09:36:39.581: INFO: Node Info: &Node{ObjectMeta:{e2e-9e86028ad1-674b9-minion-group-s3x0 4bd190bd-b287-42b8-bf6e-86b0dfbbe357 43263 0 2023-03-18 09:22:29 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true 
failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-minion-group-s3x0 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:e2e-9e86028ad1-674b9-minion-group-s3x0 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-read-write-once-pod-7422":"e2e-9e86028ad1-674b9-minion-group-s3x0"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-18 09:22:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-03-18 09:36:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-03-18 09:36:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-03-18 09:36:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-minion-group-s3x0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103865303040 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 
DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7806021632 0} {<nil>} 7623068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93478772582 0} {<nil>} 93478772582 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7543877632 0} {<nil>} 7367068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:21 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:21 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:21 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:True,LastHeartbeatTime:2023-03-18 09:36:21 +0000 UTC,LastTransitionTime:2023-03-18 09:31:19 +0000 UTC,Reason:DockerHung,Message:kernel: INFO: task docker:12345 blocked for more than 120 seconds.,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-03-18 09:36:21 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-03-18 09:36:21 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 
09:22:39 +0000 UTC,LastTransitionTime:2023-03-18 09:22:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:38 +0000 UTC,LastTransitionTime:2023-03-18 09:22:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:38 +0000 UTC,LastTransitionTime:2023-03-18 09:22:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:38 +0000 UTC,LastTransitionTime:2023-03-18 09:22:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:36:38 +0000 UTC,LastTransitionTime:2023-03-18 09:22:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.4,},NodeAddress{Type:ExternalIP,Address:35.197.75.196,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-minion-group-s3x0.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-minion-group-s3x0.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f48035c3fd56450624fe69b9577c2359,SystemUUID:f48035c3-fd56-4506-24fe-69b9577c2359,BootID:f7e24489-9538-4f10-8ca6-393d7dcc2190,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 
LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:72680802,},ContainerImage{Names:[registry.k8s.io/node-problem-detector/node-problem-detector@sha256:c4a75e50c3ee30daa78b7149de781f66236885850b1ea7b0c1a062af5019e019 registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.9],SizeBytes:56740357,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 
registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:e468dddcd275163a042ab297b2d8c2aca50d5e148d2d22f3b6ba119e2f31fa79 registry.k8s.io/sig-storage/csi-provisioner:v3.4.0],SizeBytes:27427836,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[gcr.io/k8s-authenticated-test/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 
gcr.io/k8s-authenticated-test/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:9b2d9abda017c28b12f84a344f57cd73fbdb6c2bd7dd5bdd5018246ad1093ba6 registry.k8s.io/sig-storage/hostpathplugin:v1.11.0],SizeBytes:18233005,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns:v1.10.1],SizeBytes:16190758,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/apparmor-loader@sha256:c2ba46d9cf4549528f80d4850630b712372715e0c556d35d5c3016144365d882 registry.k8s.io/e2e-test-images/apparmor-loader:1.4],SizeBytes:8695007,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef 
registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 18 09:36:39.582: INFO: Logging kubelet events for node e2e-9e86028ad1-674b9-minion-group-s3x0 Mar 18 09:36:39.674: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-minion-group-s3x0 Mar 18 09:36:39.826: INFO: metadata-proxy-v0.1-5k8j4 started at 2023-03-18 09:22:30 +0000 UTC (0+2 container statuses recorded) Mar 18 09:36:39.826: INFO: Container metadata-proxy ready: true, restart count 0 Mar 18 09:36:39.826: INFO: 
Container prometheus-to-sd-exporter ready: true, restart count 0 Mar 18 09:36:39.826: INFO: webserver-deployment-67bd4bf6dc-dtwlx started at 2023-03-18 09:36:36 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.826: INFO: Container httpd ready: true, restart count 0 Mar 18 09:36:39.826: INFO: kube-proxy-e2e-9e86028ad1-674b9-minion-group-s3x0 started at 2023-03-18 09:22:30 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.826: INFO: Container kube-proxy ready: true, restart count 0 Mar 18 09:36:39.826: INFO: frontend-5b6f6d589f-rgqvr started at 2023-03-18 09:36:34 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.826: INFO: Container guestbook-frontend ready: true, restart count 0 Mar 18 09:36:39.826: INFO: konnectivity-agent-hv8gl started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.826: INFO: Container konnectivity-agent ready: true, restart count 0 Mar 18 09:36:39.826: INFO: webserver-deployment-67bd4bf6dc-xd6x9 started at 2023-03-18 09:36:36 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.826: INFO: Container httpd ready: true, restart count 0 Mar 18 09:36:39.826: INFO: agnhost-primary-779fbc64d9-q8w68 started at 2023-03-18 09:36:35 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.826: INFO: Container primary ready: true, restart count 0 Mar 18 09:36:39.826: INFO: exceed-active-deadline-7kv6n started at 2023-03-18 09:36:29 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.826: INFO: Container c ready: true, restart count 0 Mar 18 09:36:39.826: INFO: netserver-2 started at 2023-03-18 09:34:35 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.826: INFO: Container webserver ready: true, restart count 0 Mar 18 09:36:39.826: INFO: webserver-deployment-67bd4bf6dc-ltkd2 started at 2023-03-18 09:36:36 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.826: INFO: Container httpd ready: true, restart count 0 Mar 18 09:36:39.826: INFO: 
exceed-active-deadline-f8gmg started at 2023-03-18 09:36:29 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.826: INFO: Container c ready: true, restart count 0 Mar 18 09:36:39.826: INFO: explicit-root-uid started at 2023-03-18 09:36:07 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.826: INFO: Container explicit-root-uid ready: false, restart count 0 Mar 18 09:36:39.826: INFO: service-headless-toggled-4hm5z started at 2023-03-18 09:35:52 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.826: INFO: Container service-headless-toggled ready: true, restart count 0 Mar 18 09:36:39.826: INFO: npd-v0.8.9-xsl94 started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.826: INFO: Container node-problem-detector ready: true, restart count 0 Mar 18 09:36:39.826: INFO: csi-mockplugin-0 started at 2023-03-18 09:35:25 +0000 UTC (0+4 container statuses recorded) Mar 18 09:36:39.826: INFO: Container busybox ready: true, restart count 0 Mar 18 09:36:39.826: INFO: Container csi-provisioner ready: true, restart count 0 Mar 18 09:36:39.826: INFO: Container driver-registrar ready: true, restart count 0 Mar 18 09:36:39.826: INFO: Container mock ready: true, restart count 0 Mar 18 09:36:39.826: INFO: pod-terminate-status-2-14 started at <nil> (0+0 container statuses recorded) Mar 18 09:36:39.826: INFO: coredns-8f5847b64-6lvkh started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.826: INFO: Container coredns ready: true, restart count 0 Mar 18 09:36:39.826: INFO: l7-default-backend-856d874f49-wj97r started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.826: INFO: Container default-http-backend ready: true, restart count 0 Mar 18 09:36:39.826: INFO: netserver-2 started at 2023-03-18 09:36:28 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.826: INFO: Container webserver ready: false, restart count 0 Mar 18 09:36:39.826: INFO: 
volume-snapshot-controller-0 started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.826: INFO: Container volume-snapshot-controller ready: true, restart count 0 Mar 18 09:36:39.826: INFO: agnhost-replica-dc6f7f69c-5886h started at 2023-03-18 09:36:35 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.826: INFO: Container replica ready: true, restart count 0 Mar 18 09:36:39.826: INFO: downward-api-1bb0ac09-e91e-4284-aae6-c9bf22117740 started at 2023-03-18 09:36:36 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.826: INFO: Container dapi-container ready: false, restart count 0 Mar 18 09:36:39.826: INFO: kube-dns-autoscaler-7b444c59c9-bfphp started at 2023-03-18 09:27:27 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.826: INFO: Container autoscaler ready: true, restart count 0 Mar 18 09:36:39.826: INFO: service-headless-6zb9p started at 2023-03-18 09:35:37 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.826: INFO: Container service-headless ready: true, restart count 0 Mar 18 09:36:39.826: INFO: csi-hostpathplugin-0 started at 2023-03-18 09:31:07 +0000 UTC (0+7 container statuses recorded) Mar 18 09:36:39.826: INFO: Container csi-attacher ready: true, restart count 0 Mar 18 09:36:39.826: INFO: Container csi-provisioner ready: true, restart count 0 Mar 18 09:36:39.826: INFO: Container csi-resizer ready: true, restart count 0 Mar 18 09:36:39.826: INFO: Container csi-snapshotter ready: true, restart count 0 Mar 18 09:36:39.826: INFO: Container hostpath ready: true, restart count 0 Mar 18 09:36:39.826: INFO: Container liveness-probe ready: true, restart count 0 Mar 18 09:36:39.826: INFO: Container node-driver-registrar ready: true, restart count 0 Mar 18 09:36:39.826: INFO: hostexec-e2e-9e86028ad1-674b9-minion-group-s3x0-pzq8b started at 2023-03-18 09:36:28 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:39.826: INFO: Container agnhost-container ready: true, restart count 0 Mar 18 
09:36:40.593: INFO: Latency metrics for node e2e-9e86028ad1-674b9-minion-group-s3x0 STEP: Waiting for namespaces [read-write-once-pod-7422] to vanish - test/e2e/framework/util.go:241 @ 03/18/23 09:36:40.676 STEP: uninstalling csi csi-hostpath driver - test/e2e/storage/drivers/csi.go:1020 @ 03/18/23 09:36:46.759 STEP: deleting the driver namespace: read-write-once-pod-7422-9722 - test/e2e/storage/drivers/csi.go:1023 @ 03/18/23 09:36:46.759 STEP: Collecting events from namespace "read-write-once-pod-7422-9722". - test/e2e/framework/debug/dump.go:42 @ 03/18/23 09:36:46.759 STEP: Found 2 events. - test/e2e/framework/debug/dump.go:46 @ 03/18/23 09:36:46.807 Mar 18 09:36:46.807: INFO: At 2023-03-18 09:31:07 +0000 UTC - event for csi-hostpathplugin: {statefulset-controller } SuccessfulCreate: create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful Mar 18 09:36:46.807: INFO: At 2023-03-18 09:31:07 +0000 UTC - event for csi-hostpathplugin-0: {default-scheduler } Scheduled: Successfully assigned read-write-once-pod-7422-9722/csi-hostpathplugin-0 to e2e-9e86028ad1-674b9-minion-group-s3x0 Mar 18 09:36:46.850: INFO: POD NODE PHASE GRACE CONDITIONS Mar 18 09:36:46.850: INFO: csi-hostpathplugin-0 e2e-9e86028ad1-674b9-minion-group-s3x0 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-18 09:31:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-18 09:31:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-18 09:31:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-18 09:31:07 +0000 UTC }] Mar 18 09:36:46.850: INFO: Mar 18 09:36:47.352: INFO: Logging node info for node e2e-9e86028ad1-674b9-master Mar 18 09:36:47.400: INFO: Node Info: &Node{ObjectMeta:{e2e-9e86028ad1-674b9-master 267e5023-e569-49ff-9163-80ff52b2e553 31205 0 2023-03-18 09:22:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux 
cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-18 09:22:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-03-18 09:33:04 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20617822208 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3848937472 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{18556039957 0} {<nil>} 18556039957 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3586793472 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:33:04 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 
09:33:04 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:33:04 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:33:04 +0000 UTC,LastTransitionTime:2023-03-18 09:22:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.2,},NodeAddress{Type:ExternalIP,Address:34.83.71.20,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-master.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-master.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:42d0f710c860b2bd41ccdaf5ca173acd,SystemUUID:42d0f710-c860-b2bd-41cc-daf5ca173acd,BootID:16a230c9-f4cf-4c24-8530-18ce9126f638,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:121906531,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:113849341,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 
registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:59679728,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 18 09:36:47.400: INFO: Logging kubelet events for node e2e-9e86028ad1-674b9-master Mar 18 09:36:47.450: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-master Mar 18 09:36:47.514: INFO: konnectivity-server-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:47.514: INFO: Container konnectivity-server-container ready: true, restart count 0 Mar 18 09:36:47.514: INFO: kube-controller-manager-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 
container statuses recorded) Mar 18 09:36:47.514: INFO: Container kube-controller-manager ready: true, restart count 1 Mar 18 09:36:47.514: INFO: kube-scheduler-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:47.514: INFO: Container kube-scheduler ready: true, restart count 0 Mar 18 09:36:47.514: INFO: etcd-server-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:47.514: INFO: Container etcd-container ready: true, restart count 0 Mar 18 09:36:47.514: INFO: kube-apiserver-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:47.514: INFO: Container kube-apiserver ready: true, restart count 0 Mar 18 09:36:47.514: INFO: kube-addon-manager-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:22:06 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:47.514: INFO: Container kube-addon-manager ready: true, restart count 0 Mar 18 09:36:47.514: INFO: l7-lb-controller-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:22:07 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:47.514: INFO: Container l7-lb-controller ready: true, restart count 2 Mar 18 09:36:47.514: INFO: metadata-proxy-v0.1-nnl66 started at 2023-03-18 09:22:38 +0000 UTC (0+2 container statuses recorded) Mar 18 09:36:47.514: INFO: Container metadata-proxy ready: true, restart count 0 Mar 18 09:36:47.514: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Mar 18 09:36:47.514: INFO: etcd-server-events-e2e-9e86028ad1-674b9-master started at 2023-03-18 09:21:50 +0000 UTC (0+1 container statuses recorded) Mar 18 09:36:47.514: INFO: Container etcd-container ready: true, restart count 0 Mar 18 09:36:47.727: INFO: Latency metrics for node e2e-9e86028ad1-674b9-master Mar 18 09:36:47.727: INFO: Logging node info for node e2e-9e86028ad1-674b9-minion-group-6qbb Mar 18 09:36:47.771: INFO: Node Info: 
&Node{ObjectMeta:{e2e-9e86028ad1-674b9-minion-group-6qbb 8c1a6f16-41d1-4196-bf43-efe27f8d8a66 43747 0 2023-03-18 09:22:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-minion-group-6qbb kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:e2e-9e86028ad1-674b9-minion-group-6qbb topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-9073":"e2e-9e86028ad1-674b9-minion-group-6qbb","csi-mock-csi-mock-volumes-expansion-4044":"e2e-9e86028ad1-674b9-minion-group-6qbb"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-03-18 09:22:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-03-18 09:22:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2023-03-18 09:36:17 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-03-18 09:36:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2023-03-18 09:36:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-minion-group-6qbb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103865303040 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7806021632 0} {<nil>} 7623068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93478772582 0} {<nil>} 93478772582 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7543877632 0} {<nil>} 7367068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-03-18 09:36:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:True,LastHeartbeatTime:2023-03-18 09:36:16 
+0000 UTC,LastTransitionTime:2023-03-18 09:31:15 +0000 UTC,Reason:DockerHung,Message:kernel: INFO: task docker:12345 blocked for more than 120 seconds.,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-03-18 09:36:16 +0000 UTC,LastTransitionTime:2023-03-18 09:22:52 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:39 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:39 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:39 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:36:39 +0000 UTC,LastTransitionTime:2023-03-18 09:22:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.3,},NodeAddress{Type:ExternalIP,Address:35.197.71.72,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-minion-group-6qbb.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-minion-group-6qbb.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8effcb9d241b10eacb89037ecd76b44c,SystemUUID:8effcb9d-241b-10ea-cb89-037ecd76b44c,BootID:8ef64925-c34b-4653-8945-c2170edc6327,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:72680802,},ContainerImage{Names:[registry.k8s.io/node-problem-detector/node-problem-detector@sha256:c4a75e50c3ee30daa78b7149de781f66236885850b1ea7b0c1a062af5019e019 
registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.9],SizeBytes:56740357,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:e468dddcd275163a042ab297b2d8c2aca50d5e148d2d22f3b6ba119e2f31fa79 registry.k8s.io/sig-storage/csi-provisioner:v3.4.0],SizeBytes:27427836,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 
registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[gcr.io/k8s-authenticated-test/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/k8s-authenticated-test/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:9b2d9abda017c28b12f84a344f57cd73fbdb6c2bd7dd5bdd5018246ad1093ba6 registry.k8s.io/sig-storage/hostpathplugin:v1.11.0],SizeBytes:18233005,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns:v1.10.1],SizeBytes:16190758,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef 
registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/build-image/distroless-iptables@sha256:13ac9b3c476d7290a4451d65d09d6016a2cb89836ffbfa4eae55b72731a22080 registry.k8s.io/build-image/distroless-iptables:v0.2.2],SizeBytes:7729580,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 
registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-9073^64d0bc27-c570-11ed-9b77-8aea266b63a5,DevicePath:,},},Config:nil,},}
Mar 18 09:36:47.772: INFO: Logging kubelet events for node e2e-9e86028ad1-674b9-minion-group-6qbb
Mar 18 09:36:47.816: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-minion-group-6qbb
Mar 18 09:36:47.925: INFO: execpodlnjr5 started at 2023-03-18 09:36:33 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:47.925: INFO: Container agnhost-container ready: true, restart count 0
Mar 18 09:36:47.925: INFO: konnectivity-agent-zpvjh started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:47.925: INFO: Container konnectivity-agent ready: true, restart count 0
Mar 18 09:36:47.925: INFO: webserver-deployment-7b75d79cf5-fgc8w started at 2023-03-18 09:36:41 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:47.925: INFO: Container httpd ready: false, restart count 0
Mar 18 09:36:47.925: INFO: busybox-07255980-96b4-4e2e-af19-fb342b60f84d started at 2023-03-18 09:33:47 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:47.925: INFO: Container busybox ready: true, restart count 0
Mar 18 09:36:47.925: INFO: agnhost-replica-dc6f7f69c-g2c9t started at 2023-03-18 09:36:35 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:47.925: INFO: Container replica ready: true, restart count 0
Mar 18 09:36:47.925: INFO: webserver-deployment-67bd4bf6dc-8mrlw started at 2023-03-18 09:36:43 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:47.925: INFO: Container httpd ready: true, restart count 0
Mar 18 09:36:47.925: INFO: netserver-0 started at 2023-03-18 09:34:34 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:47.925: INFO: Container webserver ready: true, restart count 0
Mar 18 09:36:47.925: INFO: webserver-deployment-7b75d79cf5-qpxs5 started at 2023-03-18 09:36:44 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:47.925: INFO: Container httpd ready: false, restart count 0
Mar 18 09:36:47.925: INFO: pod-ephm-test-secret-tl6g started at 2023-03-18 09:35:59 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:47.925: INFO: Container test-container-subpath-secret-tl6g ready: false, restart count 0
Mar 18 09:36:47.925: INFO: csi-mockplugin-resizer-0 started at 2023-03-18 09:35:15 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:47.925: INFO: Container csi-resizer ready: true, restart count 0
Mar 18 09:36:47.925: INFO: webserver-deployment-67bd4bf6dc-hfmhq started at 2023-03-18 09:36:44 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:47.925: INFO: Container httpd ready: true, restart count 0
Mar 18 09:36:47.925: INFO: netserver-0 started at 2023-03-18 09:36:28 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:47.925: INFO: Container webserver ready: false, restart count 0
Mar 18 09:36:47.925: INFO: webserver-deployment-67bd4bf6dc-nhhj7 started at 2023-03-18 09:36:36 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:47.925: INFO: Container httpd ready: true, restart count 0
Mar 18 09:36:47.925: INFO: webserver-deployment-7b75d79cf5-pj2mj started at 2023-03-18 09:36:41 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:47.925: INFO: Container httpd ready: false, restart count 0
Mar 18 09:36:47.925: INFO: csi-hostpathplugin-0 started at 2023-03-18 09:36:41 +0000 UTC (0+7 container statuses recorded)
Mar 18 09:36:47.925: INFO: Container csi-attacher ready: true, restart count 0
Mar 18 09:36:47.925: INFO: Container csi-provisioner ready: true, restart count 0
Mar 18 09:36:47.925: INFO: Container csi-resizer ready: true, restart count 0
Mar 18 09:36:47.925: INFO: Container csi-snapshotter ready: true, restart count 0
Mar 18 09:36:47.925: INFO: Container hostpath ready: true, restart count 0
Mar 18 09:36:47.925: INFO: Container liveness-probe ready: true, restart count 0
Mar 18 09:36:47.925: INFO: Container node-driver-registrar ready: true, restart count 0
Mar 18 09:36:47.925: INFO: npd-v0.8.9-fhnmg started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:47.925: INFO: Container node-problem-detector ready: true, restart count 0
Mar 18 09:36:47.925: INFO: webserver-deployment-67bd4bf6dc-j4vgs started at 2023-03-18 09:36:36 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:47.925: INFO: Container httpd ready: true, restart count 0
Mar 18 09:36:47.925: INFO: webserver-deployment-67bd4bf6dc-bjmwp started at 2023-03-18 09:36:44 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:47.925: INFO: Container httpd ready: true, restart count 0
Mar 18 09:36:47.925: INFO: webserver-deployment-67bd4bf6dc-zh5lm started at 2023-03-18 09:36:44 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:47.925: INFO: Container httpd ready: true, restart count 0
Mar 18 09:36:47.925: INFO: webserver-deployment-7b75d79cf5-zwhhr started at 2023-03-18 09:36:44 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:47.925: INFO: Container httpd ready: false, restart count 0
Mar 18 09:36:47.925: INFO: frontend-5b6f6d589f-snsrv started at 2023-03-18 09:36:34 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:47.925: INFO: Container guestbook-frontend ready: true, restart count 0
Mar 18 09:36:47.925: INFO: kube-proxy-e2e-9e86028ad1-674b9-minion-group-6qbb started at 2023-03-18 09:22:32 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:47.925: INFO: Container kube-proxy ready: true, restart count 0
Mar 18 09:36:47.925: INFO: webserver-deployment-7b75d79cf5-mgbp2 started at 2023-03-18 09:36:44 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:47.925: INFO: Container httpd ready: false, restart count 0
Mar 18 09:36:47.925: INFO: inline-volume-tester-2xdh6 started at 2023-03-18 09:36:44 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:47.925: INFO: Container csi-volume-tester ready: false, restart count 0
Mar 18 09:36:47.925: INFO: metadata-proxy-v0.1-tpg7b started at 2023-03-18 09:22:32 +0000 UTC (0+2 container statuses recorded)
Mar 18 09:36:47.925: INFO: Container metadata-proxy ready: true, restart count 0
Mar 18 09:36:47.925: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0
Mar 18 09:36:47.925: INFO: pod1 started at 2023-03-18 09:36:31 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:47.925: INFO: Container agnhost-container ready: true, restart count 0
Mar 18 09:36:47.925: INFO: csi-mockplugin-0 started at 2023-03-18 09:35:15 +0000 UTC (0+3 container statuses recorded)
Mar 18 09:36:47.925: INFO: Container csi-provisioner ready: true, restart count 0
Mar 18 09:36:47.925: INFO: Container driver-registrar ready: true, restart count 0
Mar 18 09:36:47.925: INFO: Container mock ready: true, restart count 0
Mar 18 09:36:47.925: INFO: csi-mockplugin-attacher-0 started at 2023-03-18 09:35:15 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:47.925: INFO: Container csi-attacher ready: true, restart count 0
Mar 18 09:36:47.925: INFO: webserver-deployment-67bd4bf6dc-45mwc started at 2023-03-18 09:36:44 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:47.925: INFO: Container httpd ready: true, restart count 0
Mar 18 09:36:47.925: INFO: liveness-6e42f777-5053-4740-856d-77370ed5796a started at 2023-03-18 09:31:03 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:47.925: INFO: Container agnhost-container ready: true, restart count 0
Mar 18 09:36:48.241: INFO: Latency metrics for node e2e-9e86028ad1-674b9-minion-group-6qbb
Mar 18 09:36:48.241: INFO: Logging node info for node e2e-9e86028ad1-674b9-minion-group-l6p2
Mar 18 09:36:48.283: INFO: Node Info: &Node{ObjectMeta:{e2e-9e86028ad1-674b9-minion-group-l6p2 598ace61-5854-4ce4-a4e9-6965d58f0e1a 42532 0 2023-03-18 09:22:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux
cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-minion-group-l6p2 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:e2e-9e86028ad1-674b9-minion-group-l6p2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-9328":"e2e-9e86028ad1-674b9-minion-group-l6p2","csi-hostpath-provisioning-2882":"e2e-9e86028ad1-674b9-minion-group-l6p2"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-18 09:22:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2023-03-18 09:36:19 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-03-18 09:36:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2023-03-18 09:36:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-minion-group-l6p2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103865303040 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7806021632 0} {<nil>} 7623068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93478772582 0} {<nil>} 93478772582 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7543877632 0} {<nil>} 7367068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:19 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:19 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:True,LastHeartbeatTime:2023-03-18 09:36:19 +0000 UTC,LastTransitionTime:2023-03-18 09:31:17 +0000 UTC,Reason:DockerHung,Message:kernel: INFO: task docker:12345 blocked for more than 120 seconds.,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-03-18 09:36:19 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-03-18 
09:36:19 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:19 +0000 UTC,LastTransitionTime:2023-03-18 09:22:53 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 09:22:49 +0000 UTC,LastTransitionTime:2023-03-18 09:22:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:26 +0000 UTC,LastTransitionTime:2023-03-18 09:22:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:26 +0000 UTC,LastTransitionTime:2023-03-18 09:22:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:26 +0000 UTC,LastTransitionTime:2023-03-18 09:22:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:36:26 +0000 UTC,LastTransitionTime:2023-03-18 09:22:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.5,},NodeAddress{Type:ExternalIP,Address:35.230.36.22,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-minion-group-l6p2.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-minion-group-l6p2.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c65da20697a4bfad03dfea306c4caca3,SystemUUID:c65da206-97a4-bfad-03df-ea306c4caca3,BootID:bab97861-0225-4291-912a-eb1db18f8ad7,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:72680802,},ContainerImage{Names:[registry.k8s.io/node-problem-detector/node-problem-detector@sha256:c4a75e50c3ee30daa78b7149de781f66236885850b1ea7b0c1a062af5019e019 
registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.9],SizeBytes:56740357,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:e468dddcd275163a042ab297b2d8c2aca50d5e148d2d22f3b6ba119e2f31fa79 registry.k8s.io/sig-storage/csi-provisioner:v3.4.0],SizeBytes:27427836,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 
registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/debian-base@sha256:ebda8587ec0f49eb88ee3a608ef018484908cbc5aa32556a0d78356088c185d4 registry.k8s.io/debian-base:v2.0.0],SizeBytes:21093264,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:9b2d9abda017c28b12f84a344f57cd73fbdb6c2bd7dd5bdd5018246ad1093ba6 registry.k8s.io/sig-storage/hostpathplugin:v1.11.0],SizeBytes:18233005,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns:v1.10.1],SizeBytes:16190758,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/apparmor-loader@sha256:c2ba46d9cf4549528f80d4850630b712372715e0c556d35d5c3016144365d882 
registry.k8s.io/e2e-test-images/apparmor-loader:1.4],SizeBytes:8695007,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-9328^5033a820-c570-11ed-9318-269e8ba8d779 
kubernetes.io/csi/csi-hostpath-provisioning-2882^59d2be7e-c570-11ed-abe0-ce225837d139],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-9328^5033a820-c570-11ed-9318-269e8ba8d779,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-2882^59d2be7e-c570-11ed-abe0-ce225837d139,DevicePath:,},},Config:nil,},}
Mar 18 09:36:48.284: INFO: Logging kubelet events for node e2e-9e86028ad1-674b9-minion-group-l6p2
Mar 18 09:36:48.328: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-minion-group-l6p2
Mar 18 09:36:48.411: INFO: rs-spjn5 started at 2023-03-18 09:36:18 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.411: INFO: Container donothing ready: false, restart count 0
Mar 18 09:36:48.411: INFO: webserver-deployment-67bd4bf6dc-vw52m started at 2023-03-18 09:36:36 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.411: INFO: Container httpd ready: true, restart count 0
Mar 18 09:36:48.411: INFO: webserver-deployment-7b75d79cf5-869zz started at 2023-03-18 09:36:44 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.411: INFO: Container httpd ready: false, restart count 0
Mar 18 09:36:48.411: INFO: webserver-deployment-7b75d79cf5-qlkcx started at 2023-03-18 09:36:44 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.411: INFO: Container httpd ready: false, restart count 0
Mar 18 09:36:48.411: INFO: konnectivity-agent-879m7 started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.411: INFO: Container konnectivity-agent ready: true, restart count 0
Mar 18 09:36:48.411: INFO: host-test-container-pod started at 2023-03-18 09:34:45 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.411: INFO: Container agnhost-container ready: true, restart count 0
Mar 18 09:36:48.411: INFO: pause-pod-1 started at 2023-03-18 09:36:20 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.411: INFO: Container agnhost-container ready: true, restart count 0
Mar 18 09:36:48.411: INFO: metadata-proxy-v0.1-424sh started at 2023-03-18 09:22:36 +0000 UTC (0+2 container statuses recorded)
Mar 18 09:36:48.411: INFO: Container metadata-proxy ready: true, restart count 0
Mar 18 09:36:48.411: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0
Mar 18 09:36:48.411: INFO: liveness-5bb71f0a-1103-443e-978a-d66becc64152 started at 2023-03-18 09:36:22 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.411: INFO: Container agnhost-container ready: true, restart count 0
Mar 18 09:36:48.411: INFO: probe-test-6dfebd7e-5219-4c60-9c42-d57cbebf15ca started at 2023-03-18 09:36:22 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.411: INFO: Container probe-test-6dfebd7e-5219-4c60-9c42-d57cbebf15ca ready: false, restart count 0
Mar 18 09:36:48.411: INFO: coredns-8f5847b64-8mvxr started at 2023-03-18 09:27:04 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.411: INFO: Container coredns ready: true, restart count 0
Mar 18 09:36:48.411: INFO: webserver-deployment-67bd4bf6dc-96f8r started at 2023-03-18 09:36:44 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.411: INFO: Container httpd ready: true, restart count 0
Mar 18 09:36:48.411: INFO: kube-proxy-e2e-9e86028ad1-674b9-minion-group-l6p2 started at 2023-03-18 09:22:35 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.411: INFO: Container kube-proxy ready: true, restart count 0
Mar 18 09:36:48.411: INFO: npd-v0.8.9-zdpdp started at 2023-03-18 09:22:49 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.411: INFO: Container node-problem-detector ready: true, restart count 0
Mar 18 09:36:48.411: INFO: webserver-deployment-7b75d79cf5-47gcd started at 2023-03-18 09:36:41 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.411: INFO: Container httpd ready: false, restart count 0
Mar 18 09:36:48.411: INFO: csi-hostpathplugin-0 started at 2023-03-18 09:36:06 +0000 UTC (0+7 container statuses recorded)
Mar 18 09:36:48.411: INFO: Container csi-attacher ready: true, restart count 0
Mar 18 09:36:48.411: INFO: Container csi-provisioner ready: true, restart count 0
Mar 18 09:36:48.411: INFO: Container csi-resizer ready: true, restart count 0
Mar 18 09:36:48.411: INFO: Container csi-snapshotter ready: true, restart count 0
Mar 18 09:36:48.411: INFO: Container hostpath ready: true, restart count 0
Mar 18 09:36:48.411: INFO: Container liveness-probe ready: true, restart count 0
Mar 18 09:36:48.411: INFO: Container node-driver-registrar ready: true, restart count 0
Mar 18 09:36:48.411: INFO: csi-hostpathplugin-0 started at 2023-03-18 09:36:22 +0000 UTC (0+7 container statuses recorded)
Mar 18 09:36:48.411: INFO: Container csi-attacher ready: true, restart count 0
Mar 18 09:36:48.411: INFO: Container csi-provisioner ready: true, restart count 0
Mar 18 09:36:48.411: INFO: Container csi-resizer ready: true, restart count 0
Mar 18 09:36:48.411: INFO: Container csi-snapshotter ready: true, restart count 0
Mar 18 09:36:48.411: INFO: Container hostpath ready: true, restart count 0
Mar 18 09:36:48.411: INFO: Container liveness-probe ready: true, restart count 0
Mar 18 09:36:48.411: INFO: Container node-driver-registrar ready: true, restart count 0
Mar 18 09:36:48.411: INFO: test-container-pod started at 2023-03-18 09:34:45 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.411: INFO: Container webserver ready: true, restart count 0
Mar 18 09:36:48.411: INFO: netserver-1 started at 2023-03-18 09:36:28 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.411: INFO: Container webserver ready: true, restart count 0
Mar 18 09:36:48.411: INFO: webserver-deployment-67bd4bf6dc-wp7k8 started at 2023-03-18 09:36:36 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.411: INFO: Container httpd ready: true, restart count 0
Mar 18 09:36:48.411: INFO: webserver-deployment-67bd4bf6dc-pv5t4 started at 2023-03-18 09:36:36 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.411: INFO: Container httpd ready: true, restart count 0
Mar 18 09:36:48.411: INFO: netserver-1 started at 2023-03-18 09:34:35 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.411: INFO: Container webserver ready: true, restart count 0
Mar 18 09:36:48.411: INFO: frontend-5b6f6d589f-prwrp started at 2023-03-18 09:36:34 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.411: INFO: Container guestbook-frontend ready: true, restart count 0
Mar 18 09:36:48.411: INFO: webserver-deployment-67bd4bf6dc-nz9rt started at 2023-03-18 09:36:43 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.411: INFO: Container httpd ready: true, restart count 0
Mar 18 09:36:48.411: INFO: pod-subpath-test-dynamicpv-z9lf started at 2023-03-18 09:36:26 +0000 UTC (1+1 container statuses recorded)
Mar 18 09:36:48.411: INFO: Init container init-volume-dynamicpv-z9lf ready: true, restart count 0
Mar 18 09:36:48.411: INFO: Container test-container-subpath-dynamicpv-z9lf ready: true, restart count 0
Mar 18 09:36:48.411: INFO: webserver-deployment-67bd4bf6dc-dgtz6 started at 2023-03-18 09:36:44 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.411: INFO: Container httpd ready: true, restart count 0
Mar 18 09:36:48.411: INFO: metrics-server-v0.5.2-57d47cbf5-gtdjb started at 2023-03-18 09:22:55 +0000 UTC (0+2 container statuses recorded)
Mar 18 09:36:48.411: INFO: Container metrics-server ready: true, restart count 0
Mar 18 09:36:48.411: INFO: Container metrics-server-nanny ready: true, restart count 0
Mar 18 09:36:48.411: INFO: webserver-deployment-7b75d79cf5-kgvx2 started at 2023-03-18 09:36:43 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.411: INFO: Container httpd ready: false, restart count 0
Mar 18 09:36:48.411: INFO: inline-volume-tester-cpczh started at 2023-03-18 09:36:09 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.411: INFO: Container csi-volume-tester ready: true, restart count 0
Mar 18 09:36:48.411: INFO: webserver-deployment-67bd4bf6dc-h2zwn started at 2023-03-18 09:36:44 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.411: INFO: Container httpd ready: true, restart count 0
Mar 18 09:36:48.762: INFO: Latency metrics for node e2e-9e86028ad1-674b9-minion-group-l6p2
Mar 18 09:36:48.762: INFO: Logging node info for node e2e-9e86028ad1-674b9-minion-group-s3x0
Mar 18 09:36:48.808: INFO: Node Info: &Node{ObjectMeta:{e2e-9e86028ad1-674b9-minion-group-s3x0 4bd190bd-b287-42b8-bf6e-86b0dfbbe357 43263 0 2023-03-18 09:22:29 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-9e86028ad1-674b9-minion-group-s3x0 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:e2e-9e86028ad1-674b9-minion-group-s3x0 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-read-write-once-pod-7422":"e2e-9e86028ad1-674b9-minion-group-s3x0"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-18 09:22:29 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-18 09:22:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-03-18 09:36:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-03-18 09:36:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-03-18 09:36:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-013/us-west1-b/e2e-9e86028ad1-674b9-minion-group-s3x0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{103865303040 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 
DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7806021632 0} {<nil>} 7623068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{93478772582 0} {<nil>} 93478772582 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7543877632 0} {<nil>} 7367068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:21 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:21 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-03-18 09:36:21 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:True,LastHeartbeatTime:2023-03-18 09:36:21 +0000 UTC,LastTransitionTime:2023-03-18 09:31:19 +0000 UTC,Reason:DockerHung,Message:kernel: INFO: task docker:12345 blocked for more than 120 seconds.,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-03-18 09:36:21 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-03-18 09:36:21 +0000 UTC,LastTransitionTime:2023-03-18 09:22:47 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-18 
09:22:39 +0000 UTC,LastTransitionTime:2023-03-18 09:22:39 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:38 +0000 UTC,LastTransitionTime:2023-03-18 09:22:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:38 +0000 UTC,LastTransitionTime:2023-03-18 09:22:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-18 09:36:38 +0000 UTC,LastTransitionTime:2023-03-18 09:22:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-18 09:36:38 +0000 UTC,LastTransitionTime:2023-03-18 09:22:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.4,},NodeAddress{Type:ExternalIP,Address:35.197.75.196,},NodeAddress{Type:InternalDNS,Address:e2e-9e86028ad1-674b9-minion-group-s3x0.c.k8s-infra-e2e-boskos-013.internal,},NodeAddress{Type:Hostname,Address:e2e-9e86028ad1-674b9-minion-group-s3x0.c.k8s-infra-e2e-boskos-013.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f48035c3fd56450624fe69b9577c2359,SystemUUID:f48035c3-fd56-4506-24fe-69b9577c2359,BootID:f7e24489-9538-4f10-8ca6-393d7dcc2190,KernelVersion:5.15.0-1013-gcp,OSImage:Ubuntu 22.04 
LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.27.0-beta.0.24+d1921ebdb322e0,KubeProxyVersion:v1.27.0-beta.0.24+d1921ebdb322e0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-beta.0.24_d1921ebdb322e0],SizeBytes:72680802,},ContainerImage{Names:[registry.k8s.io/node-problem-detector/node-problem-detector@sha256:c4a75e50c3ee30daa78b7149de781f66236885850b1ea7b0c1a062af5019e019 registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.9],SizeBytes:56740357,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 
registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:e468dddcd275163a042ab297b2d8c2aca50d5e148d2d22f3b6ba119e2f31fa79 registry.k8s.io/sig-storage/csi-provisioner:v3.4.0],SizeBytes:27427836,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[gcr.io/k8s-authenticated-test/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 
gcr.io/k8s-authenticated-test/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:9b2d9abda017c28b12f84a344f57cd73fbdb6c2bd7dd5bdd5018246ad1093ba6 registry.k8s.io/sig-storage/hostpathplugin:v1.11.0],SizeBytes:18233005,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns:v1.10.1],SizeBytes:16190758,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/apparmor-loader@sha256:c2ba46d9cf4549528f80d4850630b712372715e0c556d35d5c3016144365d882 registry.k8s.io/e2e-test-images/apparmor-loader:1.4],SizeBytes:8695007,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef 
registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 18 09:36:48.808: INFO: Logging kubelet events for node e2e-9e86028ad1-674b9-minion-group-s3x0
Mar 18 09:36:48.857: INFO: Logging pods the kubelet thinks is on node e2e-9e86028ad1-674b9-minion-group-s3x0
Mar 18 09:36:48.983: INFO: explicit-root-uid started at 2023-03-18 09:36:07 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.983: INFO: Container explicit-root-uid ready: false, restart count 0
Mar 18 09:36:48.983: INFO:
npd-v0.8.9-xsl94 started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.983: INFO: Container node-problem-detector ready: true, restart count 0
Mar 18 09:36:48.983: INFO: csi-mockplugin-0 started at 2023-03-18 09:35:25 +0000 UTC (0+4 container statuses recorded)
Mar 18 09:36:48.983: INFO: Container busybox ready: true, restart count 0
Mar 18 09:36:48.983: INFO: Container csi-provisioner ready: true, restart count 0
Mar 18 09:36:48.983: INFO: Container driver-registrar ready: true, restart count 0
Mar 18 09:36:48.983: INFO: Container mock ready: true, restart count 0
Mar 18 09:36:48.983: INFO: coredns-8f5847b64-6lvkh started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.983: INFO: Container coredns ready: true, restart count 0
Mar 18 09:36:48.983: INFO: l7-default-backend-856d874f49-wj97r started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.983: INFO: Container default-http-backend ready: true, restart count 0
Mar 18 09:36:48.983: INFO: netserver-2 started at 2023-03-18 09:36:28 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.983: INFO: Container webserver ready: false, restart count 0
Mar 18 09:36:48.983: INFO: webserver-deployment-7b75d79cf5-c26fg started at 2023-03-18 09:36:44 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.983: INFO: Container httpd ready: false, restart count 0
Mar 18 09:36:48.983: INFO: webserver-deployment-67bd4bf6dc-626ww started at 2023-03-18 09:36:44 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.983: INFO: Container httpd ready: true, restart count 0
Mar 18 09:36:48.983: INFO: volume-snapshot-controller-0 started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.983: INFO: Container volume-snapshot-controller ready: true, restart count 0
Mar 18 09:36:48.983: INFO: agnhost-replica-dc6f7f69c-5886h started at 2023-03-18 09:36:35 +0000 UTC (0+1 container statuses
recorded)
Mar 18 09:36:48.983: INFO: Container replica ready: true, restart count 0
Mar 18 09:36:48.983: INFO: webserver-deployment-67bd4bf6dc-9w2zd started at 2023-03-18 09:36:44 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.983: INFO: Container httpd ready: true, restart count 0
Mar 18 09:36:48.983: INFO: kube-dns-autoscaler-7b444c59c9-bfphp started at 2023-03-18 09:27:27 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.983: INFO: Container autoscaler ready: true, restart count 0
Mar 18 09:36:48.983: INFO: csi-hostpathplugin-0 started at 2023-03-18 09:31:07 +0000 UTC (0+7 container statuses recorded)
Mar 18 09:36:48.983: INFO: Container csi-attacher ready: true, restart count 0
Mar 18 09:36:48.983: INFO: Container csi-provisioner ready: true, restart count 0
Mar 18 09:36:48.983: INFO: Container csi-resizer ready: true, restart count 0
Mar 18 09:36:48.983: INFO: Container csi-snapshotter ready: true, restart count 0
Mar 18 09:36:48.983: INFO: Container hostpath ready: true, restart count 0
Mar 18 09:36:48.983: INFO: Container liveness-probe ready: true, restart count 0
Mar 18 09:36:48.983: INFO: Container node-driver-registrar ready: true, restart count 0
Mar 18 09:36:48.983: INFO: metadata-proxy-v0.1-5k8j4 started at 2023-03-18 09:22:30 +0000 UTC (0+2 container statuses recorded)
Mar 18 09:36:48.983: INFO: Container metadata-proxy ready: true, restart count 0
Mar 18 09:36:48.983: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0
Mar 18 09:36:48.983: INFO: webserver-deployment-67bd4bf6dc-dtwlx started at 2023-03-18 09:36:36 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.983: INFO: Container httpd ready: true, restart count 0
Mar 18 09:36:48.983: INFO: webserver-deployment-7b75d79cf5-h74l8 started at 2023-03-18 09:36:41 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.983: INFO: Container httpd ready: false, restart count 0
Mar 18 09:36:48.983: INFO:
kube-proxy-e2e-9e86028ad1-674b9-minion-group-s3x0 started at 2023-03-18 09:22:30 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.983: INFO: Container kube-proxy ready: true, restart count 0
Mar 18 09:36:48.983: INFO: frontend-5b6f6d589f-rgqvr started at 2023-03-18 09:36:34 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.983: INFO: Container guestbook-frontend ready: true, restart count 0
Mar 18 09:36:48.983: INFO: dns-test-b2664382-6712-4e98-88a1-df79367c4597 started at 2023-03-18 09:36:41 +0000 UTC (0+3 container statuses recorded)
Mar 18 09:36:48.983: INFO: Container jessie-querier ready: true, restart count 0
Mar 18 09:36:48.983: INFO: Container querier ready: true, restart count 0
Mar 18 09:36:48.983: INFO: Container webserver ready: true, restart count 0
Mar 18 09:36:48.983: INFO: konnectivity-agent-hv8gl started at 2023-03-18 09:22:40 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.983: INFO: Container konnectivity-agent ready: true, restart count 0
Mar 18 09:36:48.983: INFO: webserver-deployment-67bd4bf6dc-xd6x9 started at 2023-03-18 09:36:36 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.983: INFO: Container httpd ready: true, restart count 0
Mar 18 09:36:48.983: INFO: agnhost-primary-779fbc64d9-q8w68 started at 2023-03-18 09:36:35 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.983: INFO: Container primary ready: true, restart count 0
Mar 18 09:36:48.983: INFO: webserver-deployment-7b75d79cf5-xr2zv started at 2023-03-18 09:36:44 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.983: INFO: Container httpd ready: false, restart count 0
Mar 18 09:36:48.983: INFO: exceed-active-deadline-7kv6n started at 2023-03-18 09:36:29 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.983: INFO: Container c ready: true, restart count 0
Mar 18 09:36:48.983: INFO: netserver-2 started at 2023-03-18 09:34:35 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.983: INFO: Container webserver
ready: true, restart count 0
Mar 18 09:36:48.983: INFO: webserver-deployment-67bd4bf6dc-7knhd started at 2023-03-18 09:36:43 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.983: INFO: Container httpd ready: true, restart count 0
Mar 18 09:36:48.983: INFO: webserver-deployment-67bd4bf6dc-ltkd2 started at 2023-03-18 09:36:36 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.983: INFO: Container httpd ready: true, restart count 0
Mar 18 09:36:48.983: INFO: exceed-active-deadline-f8gmg started at 2023-03-18 09:36:29 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.983: INFO: Container c ready: true, restart count 0
Mar 18 09:36:48.983: INFO: webserver-deployment-7b75d79cf5-rr7ld started at 2023-03-18 09:36:41 +0000 UTC (0+1 container statuses recorded)
Mar 18 09:36:48.983: INFO: Container httpd ready: false, restart count 0
Mar 18 09:36:49.348: INFO: Latency metrics for node e2e-9e86028ad1-674b9-minion-group-s3x0
STEP: Waiting for namespaces [read-write-once-pod-7422-9722] to vanish - test/e2e/framework/util.go:241 @ 03/18/23 09:36:49.404
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - test/e2e/storage/drivers/csi.go:289 @ 03/18/23 09:37:01.489 (24.574s)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:01.489
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:01.533 (44ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.StatefulSet: read-write-once-pod-7422-9722/csi-hostpathplugin | create.go:156 @ 03/18/23 09:37:01.533
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.StatefulSet:
read-write-once-pod-7422-9722/csi-hostpathplugin | create.go:156 @ 03/18/23 09:37:01.58 (47ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.RoleBinding: read-write-once-pod-7422-9722/csi-hostpathplugin-snapshotter-role | create.go:156 @ 03/18/23 09:37:01.58
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.RoleBinding: read-write-once-pod-7422-9722/csi-hostpathplugin-snapshotter-role | create.go:156 @ 03/18/23 09:37:01.623 (43ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.RoleBinding: read-write-once-pod-7422-9722/csi-hostpathplugin-resizer-role | create.go:156 @ 03/18/23 09:37:01.623
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.RoleBinding: read-write-once-pod-7422-9722/csi-hostpathplugin-resizer-role | create.go:156 @ 03/18/23 09:37:01.693 (70ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.RoleBinding: read-write-once-pod-7422-9722/csi-hostpathplugin-provisioner-role | create.go:156 @ 03/18/23 09:37:01.693
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.RoleBinding: read-write-once-pod-7422-9722/csi-hostpathplugin-provisioner-role | create.go:156 @ 03/18/23 09:37:01.738 (45ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.RoleBinding: read-write-once-pod-7422-9722/csi-hostpathplugin-health-monitor-controller-role | create.go:156 @ 03/18/23 09:37:01.738
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.RoleBinding: read-write-once-pod-7422-9722/csi-hostpathplugin-health-monitor-controller-role | create.go:156 @ 03/18/23 09:37:01.788 (50ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)]
read-write-once-pod - deleting *v1.RoleBinding: read-write-once-pod-7422-9722/csi-hostpathplugin-attacher-role | create.go:156 @ 03/18/23 09:37:01.788
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.RoleBinding: read-write-once-pod-7422-9722/csi-hostpathplugin-attacher-role | create.go:156 @ 03/18/23 09:37:01.862 (74ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ClusterRoleBinding: csi-hostpathplugin-snapshotter-cluster-role-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:01.862
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ClusterRoleBinding: csi-hostpathplugin-snapshotter-cluster-role-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:01.934 (72ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ClusterRoleBinding: csi-hostpathplugin-resizer-cluster-role-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:01.934
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ClusterRoleBinding: csi-hostpathplugin-resizer-cluster-role-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:02.008 (74ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ClusterRoleBinding: csi-hostpathplugin-provisioner-cluster-role-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:02.008
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ClusterRoleBinding: csi-hostpathplugin-provisioner-cluster-role-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:02.063 (55ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ClusterRoleBinding: csi-hostpathplugin-health-monitor-controller-cluster-role-read-write-once-pod-7422 |
create.go:156 @ 03/18/23 09:37:02.063
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ClusterRoleBinding: csi-hostpathplugin-health-monitor-controller-cluster-role-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:02.12 (57ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ClusterRoleBinding: csi-hostpathplugin-attacher-cluster-role-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:02.12
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ClusterRoleBinding: csi-hostpathplugin-attacher-cluster-role-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:02.168 (48ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ServiceAccount: read-write-once-pod-7422-9722/csi-hostpathplugin-sa | create.go:156 @ 03/18/23 09:37:02.168
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ServiceAccount: read-write-once-pod-7422-9722/csi-hostpathplugin-sa | create.go:156 @ 03/18/23 09:37:02.211 (43ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.CSIDriver: csi-hostpath-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:02.211
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.CSIDriver: csi-hostpath-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:02.274 (63ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.RoleBinding: read-write-once-pod-7422-9722/csi-resizer-role-cfg | create.go:156 @ 03/18/23 09:37:02.274
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.RoleBinding: read-write-once-pod-7422-9722/csi-resizer-role-cfg | create.go:156 @ 03/18/23 09:37:02.323 (49ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.Role: read-write-once-pod-7422-9722/external-resizer-cfg-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:02.323
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.Role: read-write-once-pod-7422-9722/external-resizer-cfg-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:02.367 (44ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ClusterRoleBinding: csi-resizer-role-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:02.368
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ClusterRoleBinding: csi-resizer-role-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:02.425 (58ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ClusterRole: external-resizer-runner-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:02.425
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ClusterRole: external-resizer-runner-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:02.476 (51ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ServiceAccount: read-write-once-pod-7422-9722/csi-resizer | create.go:156 @ 03/18/23 09:37:02.476
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ServiceAccount: read-write-once-pod-7422-9722/csi-resizer | create.go:156 @ 03/18/23 09:37:02.525 (49ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.RoleBinding: read-write-once-pod-7422-9722/csi-external-health-monitor-controller-role-cfg | create.go:156 @ 03/18/23 09:37:02.525
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV
(default fs)] read-write-once-pod - deleting *v1.RoleBinding: read-write-once-pod-7422-9722/csi-external-health-monitor-controller-role-cfg | create.go:156 @ 03/18/23 09:37:02.567 (42ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.Role: read-write-once-pod-7422-9722/external-health-monitor-controller-cfg-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:02.567
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.Role: read-write-once-pod-7422-9722/external-health-monitor-controller-cfg-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:02.61 (42ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ClusterRoleBinding: csi-external-health-monitor-controller-role-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:02.61
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ClusterRoleBinding: csi-external-health-monitor-controller-role-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:02.663 (53ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ClusterRole: external-health-monitor-controller-runner-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:02.663
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ClusterRole: external-health-monitor-controller-runner-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:02.72 (57ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ServiceAccount: read-write-once-pod-7422-9722/csi-external-health-monitor-controller | create.go:156 @ 03/18/23 09:37:02.72
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ServiceAccount:
read-write-once-pod-7422-9722/csi-external-health-monitor-controller | create.go:156 @ 03/18/23 09:37:02.764 (44ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.RoleBinding: read-write-once-pod-7422-9722/external-snapshotter-leaderelection | create.go:156 @ 03/18/23 09:37:02.764
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.RoleBinding: read-write-once-pod-7422-9722/external-snapshotter-leaderelection | create.go:156 @ 03/18/23 09:37:02.809 (45ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.Role: read-write-once-pod-7422-9722/external-snapshotter-leaderelection-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:02.809
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.Role: read-write-once-pod-7422-9722/external-snapshotter-leaderelection-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:02.862 (52ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ClusterRoleBinding: csi-snapshotter-role-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:02.862
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ClusterRoleBinding: csi-snapshotter-role-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:02.911 (49ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ClusterRole: external-snapshotter-runner-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:02.911
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ClusterRole: external-snapshotter-runner-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:02.956 (45ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod -
deleting *v1.ServiceAccount: read-write-once-pod-7422-9722/csi-snapshotter | create.go:156 @ 03/18/23 09:37:02.956
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ServiceAccount: read-write-once-pod-7422-9722/csi-snapshotter | create.go:156 @ 03/18/23 09:37:03.011 (55ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.RoleBinding: read-write-once-pod-7422-9722/csi-provisioner-role-cfg | create.go:156 @ 03/18/23 09:37:03.011
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.RoleBinding: read-write-once-pod-7422-9722/csi-provisioner-role-cfg | create.go:156 @ 03/18/23 09:37:03.053 (42ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.Role: read-write-once-pod-7422-9722/external-provisioner-cfg-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:03.053
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.Role: read-write-once-pod-7422-9722/external-provisioner-cfg-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:03.096 (43ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ClusterRoleBinding: csi-provisioner-role-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:03.096
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ClusterRoleBinding: csi-provisioner-role-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:03.142 (46ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ClusterRole: external-provisioner-runner-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:03.142
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ClusterRole:
external-provisioner-runner-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:03.185 (43ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ServiceAccount: read-write-once-pod-7422-9722/csi-provisioner | create.go:156 @ 03/18/23 09:37:03.185
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ServiceAccount: read-write-once-pod-7422-9722/csi-provisioner | create.go:156 @ 03/18/23 09:37:03.227 (42ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.RoleBinding: read-write-once-pod-7422-9722/csi-attacher-role-cfg | create.go:156 @ 03/18/23 09:37:03.227
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.RoleBinding: read-write-once-pod-7422-9722/csi-attacher-role-cfg | create.go:156 @ 03/18/23 09:37:03.271 (44ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.Role: read-write-once-pod-7422-9722/external-attacher-cfg-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:03.271
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.Role: read-write-once-pod-7422-9722/external-attacher-cfg-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:03.313 (42ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ClusterRoleBinding: csi-attacher-role-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:03.313
< Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ClusterRoleBinding: csi-attacher-role-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:03.356 (43ms)
> Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ClusterRole: external-attacher-runner-read-write-once-pod-7422 |
create.go:156 @ 03/18/23 09:37:03.356 < Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ClusterRole: external-attacher-runner-read-write-once-pod-7422 | create.go:156 @ 03/18/23 09:37:03.399 (43ms) > Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ServiceAccount: read-write-once-pod-7422-9722/csi-attacher | create.go:156 @ 03/18/23 09:37:03.399 < Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - deleting *v1.ServiceAccount: read-write-once-pod-7422-9722/csi-attacher | create.go:156 @ 03/18/23 09:37:03.442 (43ms) > Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - test/e2e/framework/metrics/init/init.go:35 @ 03/18/23 09:37:03.442 < Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - test/e2e/framework/metrics/init/init.go:35 @ 03/18/23 09:37:03.442 (0s) > Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - dump namespaces | framework.go:209 @ 03/18/23 09:37:03.442 STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/18/23 09:37:03.442 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/18/23 09:37:03.442 (0s) < Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - dump namespaces | framework.go:209 @ 03/18/23 09:37:03.442 (0s) > Enter [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - tear down framework | framework.go:206 @ 03/18/23 09:37:03.442 < Exit [DeferCleanup (Each)] [Testpattern: Dynamic PV (default fs)] read-write-once-pod - tear down framework | framework.go:206 @ 03/18/23 09:37:03.442 (0s) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/18/23 09:37:03.442 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/18/23 09:37:03.442 
(0s)
error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Driver:.gcepd\]|\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --minStartupPods=8 --report-dir=/workspace/_artifacts --disable-log-dump=true: exit status 1
from junit_runner.xml
Kubernetes e2e suite [It] [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST NOT fail validation for create of a custom resource that satisfies the x-kubernetes-validations rules
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource definition that contains a x-kubernetes-validations rule that refers to a property that do not exist
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource definition that contains an x-kubernetes-validations rule that contains a syntax error
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource definition that contains an x-kubernetes-validations rule that exceeds the estimated cost limit
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource that exceeds the runtime cost limit for x-kubernetes-validations rule execution
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail update of a custom resource that does not satisfy a x-kubernetes-validations transition rule
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail validation for create of a custom resource that does not satisfy the x-kubernetes-validations rules
Kubernetes e2e suite [It] [sig-api-machinery] Discovery Custom resource should have storage version hash
Kubernetes e2e suite [It] [sig-api-machinery] Discovery should accurately determine present and missing resources
Kubernetes e2e suite [It] [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] FieldValidation should create/apply a CR with unknown fields for CRD with no validation schema [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] FieldValidation should create/apply a valid CR for CRD with validation schema [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] FieldValidation should create/apply an invalid CR with extra properties for CRD with validation schema [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] FieldValidation should detect duplicates in a CR when preserving unknown fields [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] FieldValidation should detect unknown and duplicate fields of a typed object [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] FieldValidation should detect unknown metadata fields in both the root and embedded object of a CR [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] FieldValidation should detect unknown metadata fields of a typed object [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should support cascading deletion of custom resources
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should support orphan deletion of custom resources
Kubernetes e2e suite [It] [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
Kubernetes e2e suite [It] [sig-api-machinery] Generated clientset should create v1 cronJobs, delete cronJobs, watch cronJobs
Kubernetes e2e suite [It] [sig-api-machinery] OpenAPIV3 should contain OpenAPI V3 for Aggregated APIServer
Kubernetes e2e suite [It] [sig-api-machinery] OpenAPIV3 should publish OpenAPI V3 for CustomResourceDefinition
Kubernetes e2e suite [It] [sig-api-machinery] OpenAPIV3 should round trip OpenAPI V3 for all built-in group versions
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should apply changes to a resourcequota status [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should manage the lifecycle of a ResourceQuota [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with cross namespace pod affinity scope using scope-selectors.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Server request timeout default timeout should be used if the specified timeout in the request URL is 0s
Kubernetes e2e suite [It] [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL
Kubernetes e2e suite [It] [sig-api-machinery] Server request timeout the request should be served with a default timeout if the specified timeout in the request URL exceeds maximum allowed
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should create an applied object if it does not already exist
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should give up ownership of a field if forced applied by a controller
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should not remove a field if an owner unsets the field but other managers still have ownership of the field
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should remove a field if it is owned but removed in the apply request
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should work for CRDs
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should work for subresources
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return pod details
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json"
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json,application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf,application/json"
Kubernetes e2e suite [It] [sig-api-machinery] health handlers should contain necessary checks
Kubernetes e2e suite [It] [sig-api-machinery] server version should find the server version [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should be able to schedule after more than 100 missed schedule
Kubernetes e2e suite [It] [sig-apps] CronJob should delete failed finished jobs with limit of one job
Kubernetes e2e suite [It] [sig-apps] CronJob should delete successful finished jobs with limit of one successful job
Kubernetes e2e suite [It] [sig-apps] CronJob should not emit unexpected warnings
Kubernetes e2e suite [It] [sig-apps] CronJob should remove from active list jobs that have been deleted
Kubernetes e2e suite [It] [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should support CronJob API operations [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should support timezone
Kubernetes e2e suite [It] [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should delete old replica sets [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should support proportional scaling [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should support rollover [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment iterative rollouts should eventually progress
Kubernetes e2e suite [It] [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout
Kubernetes e2e suite [It] [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef
Kubernetes e2e suite [It] [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: enough pods, absolute => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: no PDB => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: too few pods, absolute => should not allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should observe that the PodDisruptionBudget status is not updated for unmanaged pods
Kubernetes e2e suite [It] [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job Using a pod failure policy to not count some failures towards the backoffLimit Ignore DisruptionTarget condition
Kubernetes e2e suite [It] [sig-apps] Job Using a pod failure policy to not count some failures towards the backoffLimit Ignore exit code 137
Kubernetes e2e suite [It] [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should allow to use the pod failure policy on exit code to fail the job early
Kubernetes e2e suite [It] [sig-apps] Job should allow to use the pod failure policy to not count the failure towards the backoffLimit
Kubernetes e2e suite [It] [sig-apps] Job should apply changes to a job status [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should create pods for an Indexed job with completion indexes and specified hostname [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should delete a job [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should delete pods when suspended
Kubernetes e2e suite [It] [sig-apps] Job should fail to exceed backoffLimit
Kubernetes e2e suite [It] [sig-apps] Job should fail when exceeds active deadline
Kubernetes e2e suite [It] [sig-apps] Job should manage the lifecycle of a job [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should not create pods when created in suspend state
Kubernetes e2e suite [It] [sig-apps] Job should remove pods when job is deleted
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion when tasks succeed
Kubernetes e2e suite [It] [sig-apps] ReplicaSet Replace and Patch tests [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should serve a basic image on each replica with a private image
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should get and update a ReplicationController scale [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should release no longer matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should serve a basic image on each replica with a private image
Kubernetes e2e suite [It] [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet AvailableReplicas should get updated accordingly when MinReadySeconds is enabled
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet MinReadySeconds should be honored when enabled
Kubernetes e2e suite [It] [sig-apps] TTLAfterFinished job should be deleted once it finishes after TTL seconds
Kubernetes e2e suite [It] [sig-architecture] Conformance Tests should have at least two untainted nodes [Conformance]
Kubernetes e2e suite [It] [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]
Kubernetes e2e suite [It] [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts no secret-based service account token should be auto-generated
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should mount projected service account token [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should update a ServiceAccount [Conformance]
Kubernetes e2e suite [It] [sig-auth] SubjectReview should support SubjectReview API operations [Conformance]
Kubernetes e2e suite [It] [sig-autoscaling] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl apply apply set/view last-applied
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl cluster-info dump should check if cluster-info dump succeeds
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl events should show event when pod is created
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl prune with applyset should apply and prune objects
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should create/apply a CR with unknown fields for CRD with no validation schema
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should create/apply a valid CR for CRD with validation schema
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should create/apply an invalid/valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should detect unknown metadata fields in both the root and embedded object of a CR
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should detect unknown metadata fields of a typed object
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod Kubectl run running a failing command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod Kubectl run running a successful command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should contain last line of the log
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a failing command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a successful command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes should handle in-cluster config
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes should support port-forward
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec using resource/name
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support inline execution and attach
Kubernetes e2e suite [It] [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client kubectl subresource flag GET on status subresource of built-in type (node) returns identical info as GET on the built-in type
Kubernetes e2e suite [It] [sig-cli] Kubectl client kubectl subresource flag should not be used in a bulk GET
Kubernetes e2e suite [It] [sig-cli] Kubectl client kubectl wait should ignore not found error with --for=delete
Kubernetes e2e suite [It] [sig-cli] Kubectl logs default container logs the second container is the default-container by annotation should log default container if not specified
Kubernetes e2e suite [It] [sig-cli] Kubectl logs logs should be able to retrieve and filter logs [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] Events API should delete a collection of events [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] Events should delete a collection of events [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] Events should manage the lifecycle of an event [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from API server.
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics slis from API server.
Kubernetes e2e suite [It] [sig-network] CVE-2021-29923 IPv4 Service Type ClusterIP with leading zeros should work interpreted as decimal
Kubernetes e2e suite [It] [sig-network] Connectivity Pod Lifecycle should be able to connect from a Pod to a terminating Pod
Kubernetes e2e suite [It] [sig-network] Connectivity Pod Lifecycle should be able to connect to other Pod from a terminating Pod
Kubernetes e2e suite [It] [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready
Kubernetes e2e suite [It] [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
Kubernetes e2e suite [It] [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service
Kubernetes e2e suite [It] [sig-network] Conntrack should drop INVALID conntrack entries [Privileged]
Kubernetes e2e suite [It] [sig-network] DNS HostNetwork should resolve DNS of partial qualified names for services on hostNetwork pods with dnsPolicy: ClusterFirstWithHostNet [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] DNS should provide /etc/hosts entries for the cluster [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for ExternalName services [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for pods for Hostname [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for services [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for the cluster [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for the cluster [Provider:GCE]
Kubernetes e2e suite [It] [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] DNS should support configurable pod DNS nameservers [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should support configurable pod resolv.conf
Kubernetes e2e suite [It] [sig-network] DNS should work with the pod containing more than 6 DNS search paths and longer than 256 search list characters
Kubernetes e2e suite [It] [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSlice should support a Service with multiple endpoint IPs specified in multiple EndpointSlices
Kubernetes e2e suite [It] [sig-network] EndpointSlice should support a Service with multiple ports specified in multiple EndpointSlices
Kubernetes e2e suite [It] [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSliceMirroring should mirror a custom Endpoint with multiple subsets and same IP address
Kubernetes e2e suite [It] [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]
Kubernetes e2e suite [It] [sig-network] Firewall rule control plane should not expose well-known ports
Kubernetes e2e suite [It] [sig-network] Firewall rule should have correct firewall rules for e2e cluster
Kubernetes e2e suite [It] [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Ingress API should support creating Ingress API operations [Conformance]
Kubernetes e2e suite [It] [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]
Kubernetes e2e suite [It] [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to preserve UDP traffic when server pod cycles for a LoadBalancer service on different nodes
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to preserve UDP traffic when server pod cycles for a LoadBalancer service on the same nodes
Kubernetes e2e suite [It] [sig-network] Netpol API should support creating NetworkPolicy API operations
Kubernetes e2e suite [It] [sig-network] Netpol API should support creating NetworkPolicy API with endport field
Kubernetes e2e suite [It] [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should be able to handle large requests: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should be able to handle large requests: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: http [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for endpoint-Service: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for multiple endpoint-Services with same selector
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for node-Service: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for node-Service: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for pod-Service: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for pod-Service: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for service endpoints using hostNetwork
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should support basic nodePort: udp functionality
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update endpoints: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update endpoints: udp
Kubernetes e2e suite [It] [sig-network] Networking should check kube-proxy urls
Kubernetes e2e suite [It] [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services
Kubernetes e2e suite [It] [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service Proxy [Conformance]
Kubernetes e2e suite [It] [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
Kubernetes e2e suite [It] [sig-network] Proxy version v1 should proxy logs on node using proxy subresource
Kubernetes e2e suite [It] [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource
Kubernetes e2e suite [It] [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
Kubernetes e2e suite [It] [sig-network] Service endpoints latency should not be very high [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should allow pods to hairpin back to themselves through services
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is true
Kubernetes e2e suite [It] [sig-network] Services should be able to create a functioning NodePort service [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to up and down services
Kubernetes e2e suite [It] [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols
Kubernetes e2e suite [It] [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
Kubernetes e2e suite [It] [sig-network] Services should be rejected for evicted pods (no endpoints exist)
Kubernetes e2e suite [It] [sig-network] Services should be rejected when no endpoints exist
Kubernetes e2e suite [It] [sig-network] Services should be updated after adding or deleting ports
Kubernetes e2e suite [It] [sig-network] Services should check NodePort out-of-range
Kubernetes e2e suite [It] [sig-network] Services should complete a service status lifecycle [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should create endpoints for unready pods
Kubernetes e2e suite [It] [sig-network] Services should delete a collection of services [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should find a service from listing all namespaces [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should implement service.kubernetes.io/headless
Kubernetes e2e suite [It] [sig-network] Services should implement service.kubernetes.io/service-proxy-name
Kubernetes e2e suite [It] [sig-network] Services should not be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is false
Kubernetes e2e suite [It] [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Services should prevent NodePort collisions
Kubernetes e2e suite [It] [sig-network] Services should provide secure master service [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should release NodePorts on delete
Kubernetes e2e suite [It] [sig-network] Services should serve a basic endpoint from pods [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should serve endpoints on same port and different protocol for internal traffic on Type LoadBalancer
Kubernetes e2e suite [It] [sig-network] Services should serve endpoints on same port and different protocols
Kubernetes e2e suite [It] [sig-network] Services should serve multiport endpoints from pods [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should test the lifecycle of an Endpoint [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should work after the service has been recreated
Kubernetes e2e suite [It] [sig-node] AppArmor load AppArmor profiles can disable an AppArmor profile, using unconfined
Kubernetes e2e suite [It] [sig-node] AppArmor load AppArmor profiles should enforce an AppArmor profile
Kubernetes e2e suite [It] [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should update ConfigMap successfully
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart https hook properly [MinimumKubeletVersion:1.23] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop https hook properly [MinimumKubeletVersion:1.23] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Ephemeral Containers [NodeConformance] will start an ephemeral container in an existing pod [Conformance]
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling an agnhost Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Lease lease API should be available [Conformance]
Kubernetes e2e suite [It] [sig-node] Mount propagation should propagate mounts within defined scopes
Kubernetes e2e suite [It] [sig-node] NodeLease NodeLease should have OwnerReferences set
Kubernetes e2e suite [It] [sig-node] NodeLease NodeLease the kubelet should create and update a lease in the kube-node-lease namespace
Kubernetes e2e suite [It] [sig-node] NodeLease NodeLease the kubelet should report node status infrequently
Kubernetes e2e suite [It] [sig-node] NodeProblemDetector should run without error
Kubernetes e2e suite [It] [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn't match pod's OS
Kubernetes e2e suite [It] [sig-node] PodTemplates should delete a collection of pod templates [Conformance]
Kubernetes e2e suite [It] [sig-node] PodTemplates should replace a pod template [Conformance]
Kubernetes e2e suite [It] [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods Extended Delete Grace Period should be submitted and removed
Kubernetes e2e suite [It] [sig-node] Pods Extended Pod Container Status should never report container start when an init container fails
Kubernetes e2e suite [It] [sig-node] Pods Extended Pod Container Status should never report success for a pending container
Kubernetes e2e suite [It] [sig-node] Pods Extended Pod Container lifecycle evicted pods should be terminal
Kubernetes e2e suite [It] [sig-node] Pods Extended Pod TerminationGracePeriodSeconds is negative pod with negative grace period
Kubernetes e2e suite [It] [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should delete a collection of pods [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should get a host IP [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should patch a pod status [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should support pod readiness gates [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process
Kubernetes e2e suite [It] [sig-node] PreStop should call prestop when killing a pod [Conformance]
Kubernetes e2e suite [It] [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a GRPC liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be ready immediately after startupProbe succeeds
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted by liveness probe after startup probe enables it
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted startup probe fails
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a GRPC liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a local redirect http liveness probe
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should mark readiness on pods to false and disable liveness probes while pod is in progress of terminating
Kubernetes e2e suite [It] [sig-node] Probing container should mark readiness on pods to false while pod is in progress of terminating when a pod has a readiness probe
Kubernetes e2e suite [It] [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling without taints
Kubernetes e2e suite [It] [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] SSH should SSH to all nodes and run commands
Kubernetes e2e suite [It] [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]
Kubernetes e2e suite [It] [sig-node] Secrets should patch a secret [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context when if the container's primary UID belongs to some groups in the image [LinuxOnly] should add pod.Spec.SecurityContext.SupplementalGroups to them [LinuxOnly] in resultant supplementary groups for the container processes
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]
Kubernetes e2e suite [It] [sig-node] crictl should be able to run crictl on the node
Kubernetes e2e suite [It] [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.
Kubernetes e2e suite [It] [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] LimitRange should list, patch and delete a LimitRange by collection [Conformance]
Kubernetes e2e suite [It] [sig-storage] CSI Mock fsgroup as mount option Delegate FSGroup to CSI driver [LinuxOnly] should not pass FSGroup to CSI driver if it is set in pod and driver supports VOLUME_MOUNT_GROUP
Kubernetes e2e suite [It] [sig-storage] CSI Mock fsgroup as mount option Delegate FSGroup to CSI driver [LinuxOnly] should pass FSGroup to CSI driver if it is set in pod and driver supports VOLUME_MOUNT_GROUP
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume attach CSI attach test using mock driver should not require VolumeAttach for drivers without attachment
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume attach CSI attach test using mock driver should preserve attachment policy when no CSIDriver present
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume attach CSI attach test using mock driver should require VolumeAttach for drivers with attachment
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume attach CSI attach test using mock driver should require VolumeAttach for ephemermal volume and drivers with attachment
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume expansion CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume expansion CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume expansion CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume expansion CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume expansion CSI Volume expansion should not have staging_path missing in node expand volume pod if attach=on, nodeExpansion=on
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume expansion CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume expansion CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume expansion CSI online volume expansion with secret should expand volume without restarting pod if attach=on, nodeExpansion=on, csiNodeExpandSecret=on
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume fsgroup policies CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume fsgroup policies CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume fsgroup policies CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume service account token CSIServiceAccountToken token should be plumbed down when csiServiceAccountTokenEnabled=true
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume service account token CSIServiceAccountToken token should not be plumbed down when CSIDriver is not deployed
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume service account token CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume storage capacity CSIStorageCapacity CSIStorageCapacity disabled
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume storage capacity CSIStorageCapacity CSIStorageCapacity unused
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume storage capacity CSIStorageCapacity CSIStorageCapacity used, have capacity
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume storage capacity CSIStorageCapacity CSIStorageCapacity used, insufficient capacity
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume storage capacity CSIStorageCapacity CSIStorageCapacity used, no capacity
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume storage capacity storage capacity exhausted, immediate binding
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume storage capacity storage capacity exhausted, late binding, no topology
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume storage capacity storage capacity exhausted, late binding, with topology
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume storage capacity storage capacity unlimited
Kubernetes e2e suite [It] [sig-storage] CSI Mock workload info CSI workload information using mock driver contain ephemeral=true when using inline volume
Kubernetes e2e suite [It] [sig-storage] CSI Mock workload info CSI workload information using mock driver should be passed when podInfoOnMount=true
Kubernetes e2e suite [It] [sig-storage] CSI Mock workload info CSI workload information using mock driver should not be passed when CSIDriver does not exist
Kubernetes e2e suite [It] [sig-storage] CSI Mock workload info CSI workload information using mock driver should not be passed when podInfoOnMount=false
Kubernetes e2e suite [It] [sig-storage] CSI Mock workload info CSI workload information using mock driver should not be passed when podInfoOnMount=nil
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] read-write-once-pod should preempt lower priority pods using ReadWriteOncePod volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] CSIInlineVolumes should support CSIVolumeSource in Pod API [Conformance]
Kubernetes e2e suite [It] [sig-storage] CSIInlineVolumes should support ephemeral VolumeLifecycleMode in CSIDriver API [Conformance]
Kubernetes e2e suite [It] [sig-storage] CSIStorageCapacity should support CSIStorageCapacities API operations [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup
Kubernetes e2e suite [It] [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
Kubernetes e2e suite [It] [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap
Kubernetes e2e suite [It] [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected
Kubernetes e2e suite [It] [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret
Kubernetes e2e suite [It] [sig-storage] Flexvolumes should be mountable when non-attachable
Kubernetes e2e suite [It] [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-storage] HostPath should support r/w [NodeConformance]
Kubernetes e2e suite [It] [sig-storage] HostPath should support subPath [NodeConformance]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] PV Protection Verify "immediate" deletion of a PV that is not bound to a PVC
Kubernetes e2e suite [It] [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted.
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-expansion loopback local block volume should support online expansion on node
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]
Kubernetes e2e suite [It] [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance]
Kubernetes e2e suite [It] [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance]
Kubernetes e2e suite [It] [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]
Kubernetes e2e suite [It] [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance]
Kubernetes e2e suite [It] [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance]
Kubernetes e2e suite [It] [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified
Kubernetes e2e suite [It] [sig-storage] Volumes ConfigMap should be mountable
Kubernetes e2e suite [It] [sig-storage] Volumes NFSv3 should be mountable for NFSv3
Kubernetes e2e suite [It] [sig-storage] Volumes NFSv4 should be mountable for NFSv4
Kubernetes e2e suite [ReportAfterSuite] Ginkgo JSON report
Kubernetes e2e suite [ReportAfterSuite] JUnit XML report
Kubernetes e2e suite [ReportAfterSuite] Kubernetes e2e suite report
Kubernetes e2e suite [ReportBeforeSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
kubetest Build
kubetest Check APIReachability
kubetest Deferred TearDown
kubetest DumpClusterLogs
kubetest Extract
kubetest GetDeployer
kubetest IsUp
kubetest Prepare
kubetest Stage
kubetest TearDown
kubetest TearDown Previous
kubetest Timeout
kubetest Up
kubetest kubectl version
kubetest list nodes
kubetest test setup
Kubernetes e2e suite [It] [sig-api-machinery] API priority and fairness should ensure that requests can't be drowned out (fairness)
Kubernetes e2e suite [It] [sig-api-machinery] API priority and fairness should ensure that requests can't be drowned out (priority)
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [Flaky] kubectl explain works for CR with the same resource name as built-in object.
Kubernetes e2e suite [It] [sig-api-machinery] Etcd failure [Disruptive] should recover from SIGKILL
Kubernetes e2e suite [It] [sig-api-machinery] Etcd failure [Disruptive] should recover from network partition with master
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should always delete fast (ALL of 100 namespaces in 150 seconds) [Feature:ComprehensiveNamespaceDraining]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should apply a finalizer to a Namespace [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should apply an update to a Namespace [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should apply changes to a namespace status [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds)
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's multiple priority class scope (quota set to pod count: 2) against 2 pods with same priority classes.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (cpu, memory quota set) against a pod with same priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with different priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with same priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpExists).
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpNotIn).
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with same priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with best effort scope using scope-selectors.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with terminating scopes through scope selectors.
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for API chunking should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent [Slow]
Kubernetes e2e suite [It] [sig-api-machinery] StorageVersion resources [Feature:StorageVersionAPI] storage version with non-existing id should be GC'ed
Kubernetes e2e suite [It] [sig-api-machinery] kube-apiserver identity [Feature:APIServerIdentity] kube-apiserver identity should persist after restart [Disruptive]
Kubernetes e2e suite [It] [sig-apps] ControllerRevision [Serial] should manage the lifecycle of a ControllerRevision [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should not update pod when spec was updated and update strategy is OnDelete
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should run and stop complex daemon with node affinity
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should surge pods onto nodes when spec was updated and update strategy is RollingUpdate
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Kube-proxy should recover after being killed accidentally
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Kubelet should not restart containers across restart
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer => should not allow an eviction [Serial]
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction [Serial]
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion with CPU requests [Serial]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Automatically recreate PVC for pending pod when PVC is missing PVC should be recreated when pod is pending due to missing PVC [Disruptive][Serial]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications with PVCs
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working CockroachDB cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working mysql cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working redis cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working zookeeper cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy should delete PVCs after adopting pod (WhenDeleted)
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy should delete PVCs after adopting pod (WhenScaled)
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy should delete PVCs with a OnScaledown policy
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy should delete PVCs with a WhenDeleted policy
Kubernetes e2e suite [It] [sig-apps] StatefulSet Scaling StatefulSetStartOrdinal [Feature:StatefulSetStartOrdinal] Decreasing .start.ordinal
Kubernetes e2e suite [It] [sig-apps] StatefulSet Scaling StatefulSetStartOrdinal [Feature:StatefulSetStartOrdinal] Increasing .start.ordinal
Kubernetes e2e suite [It] [sig-apps] StatefulSet Scaling StatefulSetStartOrdinal [Feature:StatefulSetStartOrdinal] Removing .start.ordinal
Kubernetes e2e suite [It] [sig-apps] StatefulSet Scaling StatefulSetStartOrdinal [Feature:StatefulSetStartOrdinal] Setting .start.ordinal
Kubernetes e2e suite [It] [sig-apps] stateful Upgrade [Feature:StatefulUpgrade] stateful upgrade should maintain a functioning cluster
Kubernetes e2e suite [It] [sig-auth] SelfSubjectReview [Feature:APISelfSubjectReview] should support SelfSubjectReview API operations
Kubernetes e2e suite [It] [sig-auth] SelfSubjectReview [Feature:APISelfSubjectReview] testing SSR in different API groups authentication/v1alpha1
Kubernetes e2e suite [It] [sig-auth] SelfSubjectReview [Feature:APISelfSubjectReview] testing SSR in different API groups authentication/v1beta1
Kubernetes e2e suite [It] [sig-auth] ServiceAccount admission controller migration [Feature:BoundServiceAccountTokenVolume] master upgrade should maintain a functioning cluster
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow]
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthenticator] The kubelet can delegate ServiceAccount tokens to the API server
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthenticator] The kubelet's main port 10250 should reject requests with no credentials
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to create another node
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to delete another node
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent configmap should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent secret should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting a secret for a workload the node has access to should succeed
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting an existing configmap should exit with the Forbidden error
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting an existing secret should exit with the Forbidden error
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] CA ignores unschedulable pods while scheduling schedulable pods [Feature:ClusterAutoscalerScalability6]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down empty nodes [Feature:ClusterAutoscalerScalability3]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down underutilized nodes [Feature:ClusterAutoscalerScalability4]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up at all [Feature:ClusterAutoscalerScalability1]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up twice [Feature:ClusterAutoscalerScalability2]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] shouldn't scale down with underutilized nodes due to host port conflicts [Feature:ClusterAutoscalerScalability5]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group up from 0[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should not scale GPU pool up if pod does not require GPUs [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale down GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 0 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Shouldn't perform scale up operation and should list unhealthy status if most of the cluster is broken[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining multiple pods one by one as dictated by pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down when rescheduling a pod is required and pdb allows for it[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed when there is non autoscaled pool[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should disable node pool autoscaling [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and one node is broken [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and there is another node pool that is not autoscaled [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting EmptyDir volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to host port conflict [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to pod anti-affinity [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should scale up correct target pool [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should scale up when non expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't be able to scale down when rescheduling a pod is required, but pdb doesn't allow drain[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale down when non expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is preempted [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't trigger additional scale-ups during processing scale-up [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:ClusterSizeAutoscalingScaleUp] [Slow] Autoscaling Autoscaling a service from 1 pod and 3 nodes to 8 pods and >=4 nodes takes less than 15 minutes
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) CustomResourceDefinition Should scale with a CRD targetRef
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light Should scale from 1 pod to 2 pods
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light [Slow] Should scale from 2 pods to 1 pod
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Pod Resource) Should scale from 5 pods to 3 pods and then from 3 pods to 1 pod using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and then from 3 pods to 1 pod
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should not scale up on a busy sidecar with an idle application
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods on a busy application with an idle sidecar container
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods and verify decision stability
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and then from 3 pods to 1 pod and verify decision stability
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with autoscaling disabled shouldn't scale down
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with autoscaling disabled shouldn't scale up
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with both scale up and down controls configured should keep recommendation within the range over two stabilization windows
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with both scale up and down controls configured should keep recommendation within the range with stabilization window and pod limit rate
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with long upscale stabilization window should scale up only after the stabilization period
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by number of Pods rate should scale down no more than given number of Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by number of Pods rate should scale up no more than given number of Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by percentage should scale down no more than given percentage of current Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by percentage should scale up no more than given percentage of current Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with short downscale stabilization window should scale down soon after the stabilization period
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Object from Stackdriver should scale down
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Object from Stackdriver should scale down to 0
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Pod from Stackdriver should scale down
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Pod from Stackdriver should scale down with Prometheus
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Pod from Stackdriver should scale up with two metrics
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with External Metric from Stackdriver should scale down with target average value
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with External Metric from Stackdriver should scale down with target value
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with External Metric from Stackdriver should scale up with two metrics
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with multiple metrics of different types should not scale down when one metric is missing (Container Resource and External Metrics)
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with multiple metrics of different types should not scale down when one metric is missing (Pod and Object Metrics)
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with multiple metrics of different types should scale up when one metric is missing (Pod and External metrics)
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with multiple metrics of different types should scale up when one metric is missing (Resource and Object metrics)
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl taint [Serial] should remove all the taints with the same key off a node
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl taint [Serial] should update the taint on a node
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod Kubectl run [Slow] running a failing command with --leave-stdin-open
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod Kubectl run [Slow] running a failing command without --restart=Never
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod Kubectl run [Slow] running a failing command without --restart=Never, but with --rm
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Addon update should propagate add-on file changes [Slow]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Downgrade [Feature:Downgrade] cluster downgrade should maintain a functioning cluster [Feature:ClusterDowngrade]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] GKE node pools [Feature:GKENodePool] should create a cluster with multiple node pools [Feature:GKENodePool]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas different zones [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas multizone workers [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas same zone [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Nodes [Disruptive] Resize [Slow] should be able to add nodes
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Nodes [Disruptive] Resize [Slow] should be able to delete nodes
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not be able to proxy to cadvisor port 4194 using proxy subresource
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not be able to proxy to the readonly kubelet port 10255 using proxy subresource
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not have port 10255 open on its all public IP addresses
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not have port 4194 open on its all public IP addresses
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering unclean reboot and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by switching off the network interface and ensure they function upon switch on
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by triggering kernel panic and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Recreate [Feature:Recreate] recreate nodes and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] master upgrade should maintain a functioning cluster [Feature:MasterUpgrade]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] [Disruptive]NodeLease NodeLease deletion node lease should be deleted when corresponding node is deleted
Kubernetes e2e suite [It] [sig-cloud-provider] [Feature:CloudProvider][Disruptive] Nodes should be deleted on API server if it doesn't exist in the cloud provider
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the signed bootstrap tokens from clusterInfo ConfigMap when bootstrap token is deleted
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the token secret when the secret expired
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should not delete the token secret when the secret is not expired
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should resign the bootstrap tokens when the clusterInfo ConfigMap updated [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should sign the new added bootstrap tokens
Kubernetes e2e suite [It] [sig-instrumentation] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should have accelerator metrics [Feature:StackdriverAcceleratorMonitoring]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should have cluster metrics [Feature:StackdriverMonitoring]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for external metrics [Feature:StackdriverExternalMetrics]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for new resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for old resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Stackdriver Metadata Agent [Feature:StackdriverMetadataAgent]
Kubernetes e2e suite [It] [sig-network] ClusterDns [Feature:Example] should create pod that uses dns
Kubernetes e2e suite [It] [sig-network] DNS configMap nameserver Change stubDomain should be able to change stubDomain configuration [Slow][Serial]
Kubernetes e2e suite [It] [sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [It] [sig-network] DNS configMap nameserver Forward external name lookup should forward externalname lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [It] [sig-network] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should allow IngressClass to have Namespace-scoped parameters [Serial]
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should choose the one with the later CreationTimestamp, if equal the one with the lower name when two ingressClasses are marked as default[Serial]
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should not set default value if no default IngressClass [Serial]
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should set default value on new IngressClass [Serial]
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should work for type=NodePort
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should work from pods
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to create LoadBalancer Service without NodePort and change it [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to switch session affinity for LoadBalancer service with ESIPP off [Slow] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to switch session affinity for LoadBalancer service with ESIPP on [Slow] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should handle load balancer cleanup finalizer for service [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should have session affinity work for LoadBalancer service with ESIPP off [Slow] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should have session affinity work for LoadBalancer service with ESIPP on [Slow] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should not have connectivity disruption during rolling update with externalTrafficPolicy=Cluster [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should not have connectivity disruption during rolling update with externalTrafficPolicy=Local [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should only allow access from service loadbalancer source ranges [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should reconcile LB health check interval [Slow][Serial][Disruptive]
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:Ingress] should conform to Ingress spec
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] rolling update backend pods should not cause service disruption
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should be able to create a ClusterIP service
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should be able to switch between IG and NEG modes
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should conform to Ingress spec
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should create NEGs for all ports with the Ingress annotation, and NEGs for the standalone annotation otherwise
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should sync endpoints for both Ingress-referenced NEG and standalone NEG
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should sync endpoints to NEG
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 Scalability GCE [Slow] [Serial] [Feature:IngressScale] Creating and updating ingresses should happen promptly with small/medium/large amount of ingresses
Kubernetes e2e suite [It] [sig-network] Netpol API should support creating NetworkPolicy with Status subresource [Feature:NetworkPolicyStatus]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny egress from all pods in a namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny egress from pods based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny ingress from pods on other namespaces [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce ingress policy allowing any port traffic to a server on a specific protocol [Feature:NetworkPolicy] [Feature:UDP]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on Multiple PodSelectors and NamespaceSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions using default ns label [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on any PodSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow ingress traffic for a target [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow ingress traffic from pods in all namespaces [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic based on NamespaceSelector with MatchLabels using default ns label [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should not allow access by TCP when a policy specifies only UDP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should not mistakenly treat 'protocol: SCTP' as 'protocol: TCP', even if the plugin doesn't support SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should properly isolate pods that are selected by a policy allowing SCTP, even if the plugin doesn't support SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support denying of egress traffic on the client side (even if the server explicitly allows this traffic) [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should work with Ingress, Egress specified together [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly] NetworkPolicy between server and client using SCTP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly] NetworkPolicy between server and client using SCTP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly] NetworkPolicy between server and client using SCTP should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly] NetworkPolicy between server and client using SCTP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly] NetworkPolicy between server and client using SCTP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly] NetworkPolicy between server and client using SCTP should support a 'default-deny' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should not allow access by TCP when a policy specifies only SCTP [Feature:NetworkPolicy] [Serial]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should work with Ingress,Egress specified together [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for endpoint-Service: sctp [Feature:SCTPConnectivity]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for node-Service: sctp [Feature:SCTPConnectivity]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for pod-Service: sctp [Feature:SCTPConnectivity]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update nodePort: http [Slow]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]
Kubernetes e2e suite [It] [sig-network] Networking IPerf2 [Feature:Networking-Performance] should run iperf2
Kubernetes e2e suite [It] [sig-network] Networking should allow creating a Pod with an SCTP HostPort [LinuxOnly] [Serial]
Kubernetes e2e suite [It] [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4]
Kubernetes e2e suite [It] [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv6][Experimental][LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Networking should provider Internet connection for containers using DNS [Feature:Networking-DNS]
Kubernetes e2e suite [It] [sig-network] Networking should recreate its iptables rules if they are deleted [Disruptive]
Kubernetes e2e suite [It] [sig-network] NoSNAT [Feature:NoSNAT] [Slow] Should be able to send traffic between Pods without SNAT
Kubernetes e2e suite [It] [sig-network] Services GCE [Slow] should be able to create and tear down a standard-tier load balancer [Slow]
Kubernetes e2e suite [It] [sig-network] Services should allow creating a basic SCTP service with pod and endpoints [LinuxOnly] [Serial]
Kubernetes e2e suite [It] [sig-network] Services should fail health check node port if there are only terminating endpoints [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to local terminating endpoints when there are no ready endpoints with externalTrafficPolicy=Local [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to local terminating endpoints when there are no ready endpoints with internalTrafficPolicy=Local [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to terminating endpoints when there are no ready endpoints with externallTrafficPolicy=Cluster [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to terminating endpoints when there are no ready endpoints with internalTrafficPolicy=Cluster [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should respect internalTrafficPolicy=Local Pod (hostNetwork: true) to Pod [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [It] [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [It] [sig-network] Services should respect internalTrafficPolicy=Local Pod to Pod [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [It] [sig-network] Services should work after restarting apiserver [Disruptive]
Kubernetes e2e suite [It] [sig-network] Services should work after restarting kube-proxy [Disruptive]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should be able to handle large requests: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should be able to handle large requests: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for client IP based session affinity: http [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for client IP based session affinity: udp [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for endpoint-Service: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for endpoint-Service: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for node-Service: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for node-Service: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: sctp [Feature:SCTPConnectivity]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for service endpoints using hostNetwork
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should update endpoints: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should update endpoints: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should be able to reach pod on ipv4 and ipv6 ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create a single stack service with cluster ip from primary service range
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create pod, add ipv6 and ipv4 ip to pod ips
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv4 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv4,v6 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv6 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv6,v4 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should have ipv4 and ipv6 internal node ip
Kubernetes e2e suite [It] [sig-network] [Feature:PerformanceDNS][Serial] Should answer DNS query for maximum number of services per cluster
Kubernetes e2e suite [It] [sig-network] [Feature:Topology Hints] should distribute endpoints evenly
Kubernetes e2e suite [It] [sig-network] kube-proxy migration [Feature:KubeProxyDaemonSetMigration] Downgrade kube-proxy from a DaemonSet to static pods should maintain a functioning cluster [Feature:KubeProxyDaemonSetDowngrade]
Kubernetes e2e suite [It] [sig-network] kube-proxy migration [Feature:KubeProxyDaemonSetMigration] Upgrade kube-proxy from static pods to a DaemonSet should maintain a functioning cluster [Feature:KubeProxyDaemonSetUpgrade]
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports external claim referenced by multiple containers of multiple pods
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports external claim referenced by multiple pods
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports init containers
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports inline claim referenced by multiple containers
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports simple pod referencing external resource claim
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports simple pod referencing inline resource claim
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports external claim referenced by multiple containers of multiple pods
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports external claim referenced by multiple pods
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports init containers
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports inline claim referenced by multiple containers
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports simple pod referencing external resource claim
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports simple pod referencing inline resource claim
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] driver supports claim and class parameters
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] kubelet must not run a pod if a claim is not reserved for it
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] kubelet must retry NodePrepareResource
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] kubelet must unprepare resources for force-deleted pod
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] kubelet registers plugin
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] multiple drivers work
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] multiple nodes reallocation works
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] multiple nodes with network-attached resources schedules onto different nodes
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] multiple nodes with node-local resources with delayed allocation uses all resources
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] multiple nodes with node-local resources with immediate allocation uses all resources
Kubernetes e2e suite [It] [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages] Downward API tests for hugepages should provide container's limits.hugepages-<pagesize> and requests.hugepages-<pagesize> as env vars
Kubernetes e2e suite [It] [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages] Downward API tests for hugepages should provide default limits.hugepages-<pagesize> from node allocatable
Kubernetes e2e suite [It] [sig-node] Kubelet [Serial] [Slow] experimental resource usage tracking [Feature:ExperimentalResourceUsageTracking] resource tracking for 100 pods per node
Kubernetes e2e suite [It] [sig-node] Kubelet [Serial] [Slow] regular resource usage tracking [Feature:RegularResourceUsageTracking] resource tracking for 0 pods per node
Kubernetes e2e suite [It] [sig-node] Kubelet [Serial] [Slow] regular resource usage tracking [Feature:RegularResourceUsageTracking] resource tracking for 100 pods per node
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] only evicts pods without tolerations from tainted nodes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] doesn't evict pod with tolerations from tainted nodes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] evicts pods from tainted nodes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] pods evicted from tainted nodes have pod disruption condition
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive]