PR | jonathanrainer: Adds capability to provision directories on the EFS dynamically
Result | ABORTED
Tests | 1 failed / 36 succeeded
Started |
Elapsed | 22m11s
Revision | d1dd6a245af8b799b37eceebe945f52dcf6b4666
Refs | 732
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=EFS\sCSI\sSuite\s\[efs\-csi\]\sEFS\sCSI\s\[Driver\:\sefs\.csi\.aws\.com\]\sshould\screate\sa\sdirectory\swith\sthe\scorrect\spermissions\swhen\sin\sdirectory\sprovisioning\smode$'
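The --ginkgo.focus argument above is a plain regular expression matched against the full spec name. A minimal standalone sketch (my own, using only Go's standard regexp package, not part of the suite) that confirms the expression selects exactly the failing spec reported below:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Focus expression copied from the repro command above.
        focus := `EFS\sCSI\sSuite\s\[efs\-csi\]\sEFS\sCSI\s\[Driver\:\sefs\.csi\.aws\.com\]\sshould\screate\sa\sdirectory\swith\sthe\scorrect\spermissions\swhen\sin\sdirectory\sprovisioning\smode$`

        // Full spec name, assembled from the suite/container/It texts in the log below.
        spec := "EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] should create a directory with the correct permissions when in directory provisioning mode"

        ok, err := regexp.MatchString(focus, spec)
        if err != nil {
            panic(err)
        }
        fmt.Println("focus matches failing spec:", ok) // prints: true
    }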
/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:398
Checking File Permissions of mounted folder
Expected
    <string>: 755
to equal
    <string>: 777
/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:426
from junit_01.xml
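The "Expected ... to equal ..." text above is the standard gomega failure format: the suite reads the mode bits of the provisioned directory inside the test pod (the stat -c "%a" exec appears in the log below) and compares them, as strings, with the permissions it asked for. A minimal standalone sketch of that kind of check, assuming a local path instead of the in-pod exec; names here are illustrative, not the suite's actual code:

    package e2e_sketch

    import (
        "os/exec"
        "strings"
        "testing"

        "github.com/onsi/gomega"
    )

    // TestDirectoryPerms mirrors the shape of the check at test/e2e/e2e.go:426:
    // stat the provisioned directory and expect its mode bits to equal the
    // requested permissions. The path below is a stand-in for the in-pod mount.
    func TestDirectoryPerms(t *testing.T) {
        g := gomega.NewWithT(t)

        out, err := exec.Command("stat", "-c", "%a", "/mnt/volume1").Output()
        g.Expect(err).NotTo(gomega.HaveOccurred())

        perms := strings.TrimSpace(string(out))
        // A mismatch here prints exactly the style of message seen above:
        //   Expected <string>: 755 to equal <string>: 777
        g.Expect(perms).To(gomega.Equal("777"))
    }

In this run the exec returned 755 ("Perms Output: 755" in the log below), so the comparison against 777 failed.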
[BeforeEach] [efs-csi] EFS CSI
  /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:220
[BeforeEach] [efs-csi] EFS CSI
  /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 4 15:07:39.875: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-20151.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename efs
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a directory with the correct permissions when in directory provisioning mode
  /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:398
Feb 4 15:07:40.284: INFO: Created StorageClass efs-3612p6srt
Feb 4 15:07:40.355: INFO: Waiting up to timeout=1m0s for PersistentVolumeClaims [directory-pvc-1] to have phase Bound
Feb 4 15:07:40.425: INFO: PersistentVolumeClaim directory-pvc-1 found but phase is Pending instead of Bound.
Feb 4 15:07:45.493: INFO: PersistentVolumeClaim directory-pvc-1 found and phase=Bound (5.137947856s)
Feb 4 15:07:45.561: INFO: Created PVC efs-3612p6srt, bound to PV directory-pvc-1 by dynamic provisioning %!(EXTRA string=pvc-77bee6fc-f575-466e-8046-5b5ab2803a4f)
Feb 4 15:08:03.906: INFO: ExecWithOptions {Command:[/bin/sh -c stat -c "%a" /mnt/volume1/dynamic_provisioning/pvc-77bee6fc-f575-466e-8046-5b5ab2803a4f] Namespace:efs-3612 PodName:pvc-tester-tk59g ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 4 15:08:03.906: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-20151.k8s.local.kops.kubeconfig
Feb 4 15:08:04.384: INFO: Perms Output: 755
Feb 4 15:08:04.601: INFO: Deleted StorageClass efs-3612p6srt
[AfterEach] [efs-csi] EFS CSI
  /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "efs-3612".
STEP: Found 8 events.
Feb 4 15:08:04.669: INFO: At 2023-02-04 15:07:40 +0000 UTC - event for directory-pvc-1: {efs.csi.aws.com_efs-csi-controller-597d4d5b58-vcsqc_f3b5a985-85b9-4f4a-989c-7e9d4a3f7bbe } Provisioning: External provisioner is provisioning volume for claim "efs-3612/directory-pvc-1"
Feb 4 15:08:04.669: INFO: At 2023-02-04 15:07:40 +0000 UTC - event for directory-pvc-1: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "efs.csi.aws.com" or manually created by system administrator
Feb 4 15:08:04.669: INFO: At 2023-02-04 15:07:40 +0000 UTC - event for directory-pvc-1: {efs.csi.aws.com_efs-csi-controller-597d4d5b58-vcsqc_f3b5a985-85b9-4f4a-989c-7e9d4a3f7bbe } ProvisioningSucceeded: Successfully provisioned volume pvc-77bee6fc-f575-466e-8046-5b5ab2803a4f
Feb 4 15:08:04.670: INFO: At 2023-02-04 15:07:45 +0000 UTC - event for pvc-tester-tk59g: {default-scheduler } FailedScheduling: 0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.
Feb 4 15:08:04.670: INFO: At 2023-02-04 15:08:02 +0000 UTC - event for pvc-tester-tk59g: {default-scheduler } Scheduled: Successfully assigned efs-3612/pvc-tester-tk59g to ip-172-20-37-220.us-west-2.compute.internal Feb 4 15:08:04.670: INFO: At 2023-02-04 15:08:03 +0000 UTC - event for pvc-tester-tk59g: {kubelet ip-172-20-37-220.us-west-2.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine Feb 4 15:08:04.670: INFO: At 2023-02-04 15:08:03 +0000 UTC - event for pvc-tester-tk59g: {kubelet ip-172-20-37-220.us-west-2.compute.internal} Created: Created container write-pod Feb 4 15:08:04.670: INFO: At 2023-02-04 15:08:03 +0000 UTC - event for pvc-tester-tk59g: {kubelet ip-172-20-37-220.us-west-2.compute.internal} Started: Started container write-pod Feb 4 15:08:04.737: INFO: POD NODE PHASE GRACE CONDITIONS Feb 4 15:08:04.738: INFO: pvc-tester-tk59g ip-172-20-37-220.us-west-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-02-04 15:08:02 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-02-04 15:08:03 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-02-04 15:08:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-02-04 15:08:02 +0000 UTC }] Feb 4 15:08:04.738: INFO: Feb 4 15:08:04.807: INFO: Logging node info for node ip-172-20-107-163.us-west-2.compute.internal Feb 4 15:08:04.876: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-107-163.us-west-2.compute.internal e58df9f8-2d25-41ba-8246-8b08bc94f1f2 2925 0 2023-02-04 15:01:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-west-2 failure-domain.beta.kubernetes.io/zone:us-west-2c kops.k8s.io/instancegroup:nodes-us-west-2c kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-107-163.us-west-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:c5.large topology.kubernetes.io/region:us-west-2 topology.kubernetes.io/zone:us-west-2c] map[csi.volume.kubernetes.io/nodeid:{"efs.csi.aws.com":"i-07157f6995784a387"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-02-04 15:01:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-02-04 15:01:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} } {kubelet Update v1 2023-02-04 15:02:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:attachable-volumes-aws-ebs":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:attachable-volumes-aws-ebs":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}} }]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///us-west-2c/i-07157f6995784a387,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3862945792 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3758088192 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-04 15:01:09 +0000 UTC,LastTransitionTime:2023-02-04 15:01:09 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-04 15:07:16 +0000 UTC,LastTransitionTime:2023-02-04 15:01:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-04 15:07:16 +0000 UTC,LastTransitionTime:2023-02-04 15:01:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-04 15:07:16 +0000 UTC,LastTransitionTime:2023-02-04 15:01:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-04 15:07:16 +0000 UTC,LastTransitionTime:2023-02-04 15:01:14 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.107.163,},NodeAddress{Type:ExternalIP,Address:34.216.178.202,},NodeAddress{Type:Hostname,Address:ip-172-20-107-163.us-west-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-107-163.us-west-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-34-216-178-202.us-west-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec226b9a85116c64b7112937c2abc23b,SystemUUID:ec226b9a-8511-6c64-b711-2937c2abc23b,BootID:e520d3b2-ea52-4bfe-bd00-1e845362707a,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 20.04.5 LTS,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.20.8,KubeProxyVersion:v1.20.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[607362164682.dkr.ecr.us-west-2.amazonaws.com/aws-efs-csi-driver@sha256:57c405141918c4bb49903e87d8eebc972039849e0b665faf23d41ff234d984f0 607362164682.dkr.ecr.us-west-2.amazonaws.com/aws-efs-csi-driver:20151],SizeBytes:501598206,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:0c867c82a0a8ce6d093595f9d2e4b10517d6c9c26323075de9d82d9f7d056bfc k8s.gcr.io/kube-proxy:v1.20.8],SizeBytes:52056682,},ContainerImage{Names:[public.ecr.aws/eks-distro/kubernetes-csi/external-provisioner@sha256:968bbebc038892e1685f187feffbeccc13e327ab1672fb3ab332cca835085cd1 public.ecr.aws/eks-distro/kubernetes-csi/external-provisioner:v3.3.0-eks-1-23-8],SizeBytes:13992487,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:1.8.3],SizeBytes:12893350,},ContainerImage{Names:[public.ecr.aws/eks-distro/kubernetes-csi/node-driver-registrar@sha256:ac2e6e8847edb7eb98b67fc6a9aaa3e270bb4abdf9e17e2fbc4cf9811fb85ec3 public.ecr.aws/eks-distro/kubernetes-csi/node-driver-registrar:v2.6.1-eks-1-23-8],SizeBytes:6552778,},ContainerImage{Names:[public.ecr.aws/eks-distro/kubernetes-csi/livenessprobe@sha256:9977e7ff7147156f7cc67910d28f6085d2faba4b622a098d9e2ce696897172c8 public.ecr.aws/eks-distro/kubernetes-csi/livenessprobe:v2.8.0-eks-1-23-8],SizeBytes:6005898,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 4 15:08:04.877: INFO: Logging kubelet events for node ip-172-20-107-163.us-west-2.compute.internal Feb 4 15:08:04.953: INFO: Logging pods the kubelet thinks is on node ip-172-20-107-163.us-west-2.compute.internal Feb 4 15:08:05.053: INFO: efs-csi-controller-597d4d5b58-rch5s started at 2023-02-04 15:01:47 +0000 UTC (0+3 container statuses recorded) Feb 4 15:08:05.053: INFO: Container csi-provisioner ready: true, restart count 0 Feb 4 15:08:05.053: INFO: Container 
efs-plugin ready: true, restart count 0 Feb 4 15:08:05.053: INFO: Container liveness-probe ready: true, restart count 0 Feb 4 15:08:05.053: INFO: kube-proxy-ip-172-20-107-163.us-west-2.compute.internal started at 2023-02-04 15:00:44 +0000 UTC (0+1 container statuses recorded) Feb 4 15:08:05.053: INFO: Container kube-proxy ready: true, restart count 0 Feb 4 15:08:05.053: INFO: coredns-8f5559c9b-gbddh started at 2023-02-04 15:01:15 +0000 UTC (0+1 container statuses recorded) Feb 4 15:08:05.053: INFO: Container coredns ready: true, restart count 0 Feb 4 15:08:05.053: INFO: efs-csi-node-xrs9s started at 2023-02-04 15:01:47 +0000 UTC (0+3 container statuses recorded) Feb 4 15:08:05.053: INFO: Container csi-driver-registrar ready: true, restart count 0 Feb 4 15:08:05.053: INFO: Container efs-plugin ready: true, restart count 0 Feb 4 15:08:05.053: INFO: Container liveness-probe ready: true, restart count 0 Feb 4 15:08:05.431: INFO: Latency metrics for node ip-172-20-107-163.us-west-2.compute.internal Feb 4 15:08:05.431: INFO: Logging node info for node ip-172-20-37-220.us-west-2.compute.internal Feb 4 15:08:05.500: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-37-220.us-west-2.compute.internal de34cadc-4c77-4186-b7a1-a8a4a3cf0284 2744 0 2023-02-04 15:01:08 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-west-2 failure-domain.beta.kubernetes.io/zone:us-west-2a kops.k8s.io/instancegroup:nodes-us-west-2a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-37-220.us-west-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:c5.large topology.kubernetes.io/region:us-west-2 topology.kubernetes.io/zone:us-west-2a] map[csi.volume.kubernetes.io/nodeid:{"efs.csi.aws.com":"i-095c38922a72cb81d"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-02-04 15:01:08 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-02-04 15:01:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} } {kubelet Update v1 2023-02-04 15:02:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:attachable-volumes-aws-ebs":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:attachable-volumes-aws-ebs":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}} }]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///us-west-2a/i-095c38922a72cb81d,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3892297728 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3787440128 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-04 15:01:19 +0000 UTC,LastTransitionTime:2023-02-04 15:01:19 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-04 15:06:49 +0000 UTC,LastTransitionTime:2023-02-04 15:01:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-04 15:06:49 +0000 UTC,LastTransitionTime:2023-02-04 15:01:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-04 15:06:49 +0000 UTC,LastTransitionTime:2023-02-04 15:01:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-04 15:06:49 +0000 UTC,LastTransitionTime:2023-02-04 15:01:21 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.37.220,},NodeAddress{Type:ExternalIP,Address:35.89.135.77,},NodeAddress{Type:Hostname,Address:ip-172-20-37-220.us-west-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-37-220.us-west-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-35-89-135-77.us-west-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2f2931f4cf8d9d1284614ef9be4c0e,SystemUUID:ec2f2931-f4cf-8d9d-1284-614ef9be4c0e,BootID:eb20e8e6-a2ad-4b1a-9d0d-2f88765a628e,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 20.04.5 LTS,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.20.8,KubeProxyVersion:v1.20.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[607362164682.dkr.ecr.us-west-2.amazonaws.com/aws-efs-csi-driver@sha256:57c405141918c4bb49903e87d8eebc972039849e0b665faf23d41ff234d984f0 607362164682.dkr.ecr.us-west-2.amazonaws.com/aws-efs-csi-driver:20151],SizeBytes:501598206,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:0c867c82a0a8ce6d093595f9d2e4b10517d6c9c26323075de9d82d9f7d056bfc k8s.gcr.io/kube-proxy:v1.20.8],SizeBytes:52056682,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:1.8.3],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[public.ecr.aws/eks-distro/kubernetes-csi/node-driver-registrar@sha256:ac2e6e8847edb7eb98b67fc6a9aaa3e270bb4abdf9e17e2fbc4cf9811fb85ec3 public.ecr.aws/eks-distro/kubernetes-csi/node-driver-registrar:v2.6.1-eks-1-23-8],SizeBytes:6552778,},ContainerImage{Names:[public.ecr.aws/eks-distro/kubernetes-csi/livenessprobe@sha256:9977e7ff7147156f7cc67910d28f6085d2faba4b622a098d9e2ce696897172c8 public.ecr.aws/eks-distro/kubernetes-csi/livenessprobe:v2.8.0-eks-1-23-8],SizeBytes:6005898,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f 
k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 4 15:08:05.500: INFO: Logging kubelet events for node ip-172-20-37-220.us-west-2.compute.internal Feb 4 15:08:05.572: INFO: Logging pods the kubelet thinks is on node ip-172-20-37-220.us-west-2.compute.internal Feb 4 15:08:05.643: INFO: efs-csi-node-lmjsh started at 2023-02-04 15:01:47 +0000 UTC (0+3 container statuses recorded) Feb 4 15:08:05.643: INFO: Container csi-driver-registrar ready: true, restart count 0 Feb 4 15:08:05.643: INFO: Container efs-plugin ready: true, restart count 0 Feb 4 15:08:05.643: INFO: Container liveness-probe ready: true, restart count 0 Feb 4 15:08:05.643: INFO: pvc-tester-tk59g started at 2023-02-04 15:08:02 +0000 UTC (0+1 container statuses recorded) Feb 4 15:08:05.643: INFO: Container write-pod ready: true, restart count 0 Feb 4 15:08:05.643: INFO: kube-proxy-ip-172-20-37-220.us-west-2.compute.internal started at 2023-02-04 15:00:42 +0000 UTC (0+1 container statuses recorded) Feb 4 15:08:05.643: INFO: Container kube-proxy ready: true, restart count 0 Feb 4 15:08:05.643: INFO: coredns-8f5559c9b-n9nsj started at 2023-02-04 15:01:31 +0000 UTC (0+1 container statuses recorded) Feb 4 15:08:05.643: INFO: Container coredns ready: true, restart count 0 Feb 4 15:08:05.903: INFO: Latency metrics for node ip-172-20-37-220.us-west-2.compute.internal Feb 4 15:08:05.903: INFO: Logging node info for node ip-172-20-51-143.us-west-2.compute.internal Feb 4 15:08:05.973: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-51-143.us-west-2.compute.internal b0c7f42b-6018-44f9-836a-9d939276c301 3104 0 2023-02-04 14:59:53 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-west-2 failure-domain.beta.kubernetes.io/zone:us-west-2a kops.k8s.io/instancegroup:master-us-west-2a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-51-143.us-west-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:t3.medium topology.kubernetes.io/region:us-west-2 topology.kubernetes.io/zone:us-west-2a] map[csi.volume.kubernetes.io/nodeid:{"efs.csi.aws.com":"i-0b5736d615f6eaf5f"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{protokube Update v1 2023-02-04 14:59:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2023-02-04 15:00:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} } {kops-controller Update v1 2023-02-04 15:00:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{}}}} } {kubelet Update v1 2023-02-04 15:02:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:attachable-volumes-aws-ebs":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:attachable-volumes-aws-ebs":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}} }]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///us-west-2a/i-0b5736d615f6eaf5f,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{66404147200 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051677184 0} {<nil>} 3956716Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{59763732382 0} {<nil>} 59763732382 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946819584 0} {<nil>} 3854316Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-04 15:00:19 +0000 UTC,LastTransitionTime:2023-02-04 15:00:19 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-04 15:07:46 +0000 UTC,LastTransitionTime:2023-02-04 14:59:48 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-04 15:07:46 +0000 UTC,LastTransitionTime:2023-02-04 14:59:48 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-04 15:07:46 +0000 UTC,LastTransitionTime:2023-02-04 14:59:48 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-04 15:07:46 +0000 UTC,LastTransitionTime:2023-02-04 15:00:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.51.143,},NodeAddress{Type:ExternalIP,Address:34.217.214.117,},NodeAddress{Type:Hostname,Address:ip-172-20-51-143.us-west-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-51-143.us-west-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-34-217-214-117.us-west-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2fb6dba783b504049c17a56aab69eb,SystemUUID:ec2fb6db-a783-b504-049c-17a56aab69eb,BootID:b554468f-bc91-4752-81cb-7dcc93014acd,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 20.04.5 LTS,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.20.8,KubeProxyVersion:v1.20.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[607362164682.dkr.ecr.us-west-2.amazonaws.com/aws-efs-csi-driver@sha256:57c405141918c4bb49903e87d8eebc972039849e0b665faf23d41ff234d984f0 607362164682.dkr.ecr.us-west-2.amazonaws.com/aws-efs-csi-driver:20151],SizeBytes:501598206,},ContainerImage{Names:[k8s.gcr.io/etcdadm/etcd-manager@sha256:ebb73d3d4a99da609f9e01c556cd9f9aa7a0aecba8f5bc5588d7c45eb38e3a7e k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430],SizeBytes:171082409,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:0c867c82a0a8ce6d093595f9d2e4b10517d6c9c26323075de9d82d9f7d056bfc k8s.gcr.io/kube-proxy:v1.20.8],SizeBytes:52056682,},ContainerImage{Names:[k8s.gcr.io/kops/dns-controller@sha256:f3724709f264bd47d2251f4ec5b16ec5482e8a2d65da006cd3165eb4da0bd3d1 k8s.gcr.io/kops/dns-controller:1.21.0],SizeBytes:40960953,},ContainerImage{Names:[k8s.gcr.io/kops/kops-controller@sha256:05b7d6511df0084aed954a009099402d2a7e7227adf0f410c4583204e6f76429 k8s.gcr.io/kops/kops-controller:1.21.0],SizeBytes:40618104,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:531ffd92bd954f5c27002ce11502771146e7c867c09a6e1755631953ff584df4 k8s.gcr.io/kube-apiserver:v1.20.8],SizeBytes:30462643,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:13509f0b36bbd4d2d142cf4c35d77f32ec3728975d96cf833b60b484effa8c43 k8s.gcr.io/kube-controller-manager:v1.20.8],SizeBytes:29554511,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:6fc53cbc3f035ba2b0ac9127e9dc5aa7a95b73f041f7cc14c36cdfc095b13080 k8s.gcr.io/kube-scheduler:v1.20.8],SizeBytes:14243733,},ContainerImage{Names:[k8s.gcr.io/kops/kube-apiserver-healthcheck@sha256:d772e49a0ae9dea319f26aaa40d58a123860b19da6378d29898eb30208359241 k8s.gcr.io/kops/kube-apiserver-healthcheck:1.21.0],SizeBytes:11771024,},ContainerImage{Names:[public.ecr.aws/eks-distro/kubernetes-csi/node-driver-registrar@sha256:ac2e6e8847edb7eb98b67fc6a9aaa3e270bb4abdf9e17e2fbc4cf9811fb85ec3 
public.ecr.aws/eks-distro/kubernetes-csi/node-driver-registrar:v2.6.1-eks-1-23-8],SizeBytes:6552778,},ContainerImage{Names:[public.ecr.aws/eks-distro/kubernetes-csi/livenessprobe@sha256:9977e7ff7147156f7cc67910d28f6085d2faba4b622a098d9e2ce696897172c8 public.ecr.aws/eks-distro/kubernetes-csi/livenessprobe:v2.8.0-eks-1-23-8],SizeBytes:6005898,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 4 15:08:05.973: INFO: Logging kubelet events for node ip-172-20-51-143.us-west-2.compute.internal Feb 4 15:08:06.048: INFO: Logging pods the kubelet thinks is on node ip-172-20-51-143.us-west-2.compute.internal Feb 4 15:08:06.155: INFO: etcd-manager-main-ip-172-20-51-143.us-west-2.compute.internal started at 2023-02-04 14:59:04 +0000 UTC (0+1 container statuses recorded) Feb 4 15:08:06.155: INFO: Container etcd-manager ready: true, restart count 0 Feb 4 15:08:06.155: INFO: kube-apiserver-ip-172-20-51-143.us-west-2.compute.internal started at 2023-02-04 14:59:04 +0000 UTC (0+2 container statuses recorded) Feb 4 15:08:06.155: INFO: Container healthcheck ready: true, restart count 0 Feb 4 15:08:06.155: INFO: Container kube-apiserver ready: true, restart count 1 Feb 4 15:08:06.155: INFO: kube-proxy-ip-172-20-51-143.us-west-2.compute.internal started at 2023-02-04 14:59:04 +0000 UTC (0+1 container statuses recorded) Feb 4 15:08:06.155: INFO: Container kube-proxy ready: true, restart count 0 Feb 4 15:08:06.155: INFO: kube-scheduler-ip-172-20-51-143.us-west-2.compute.internal started at 2023-02-04 14:59:04 +0000 UTC (0+1 container statuses recorded) Feb 4 15:08:06.155: INFO: Container kube-scheduler ready: true, restart count 0 Feb 4 15:08:06.155: INFO: etcd-manager-events-ip-172-20-51-143.us-west-2.compute.internal started at 2023-02-04 14:59:04 +0000 UTC (0+1 container statuses recorded) Feb 4 15:08:06.155: INFO: Container etcd-manager ready: true, restart count 0 Feb 4 15:08:06.155: INFO: kops-controller-g2j5q started at 2023-02-04 15:00:18 +0000 UTC (0+1 container statuses recorded) Feb 4 15:08:06.155: INFO: Container kops-controller ready: true, restart count 0 Feb 4 15:08:06.155: INFO: efs-csi-node-xf6f5 started at 2023-02-04 15:01:47 +0000 UTC (0+3 container statuses recorded) Feb 4 15:08:06.155: INFO: Container csi-driver-registrar ready: true, restart count 0 Feb 4 15:08:06.155: INFO: Container efs-plugin ready: true, restart count 0 Feb 4 15:08:06.155: INFO: Container liveness-probe ready: true, restart count 0 Feb 4 15:08:06.155: INFO: kube-controller-manager-ip-172-20-51-143.us-west-2.compute.internal started at 2023-02-04 14:59:04 +0000 UTC (0+1 container statuses recorded) Feb 4 15:08:06.155: INFO: Container kube-controller-manager ready: true, restart count 0 Feb 4 15:08:06.155: INFO: dns-controller-5d59c585d8-skd8j started at 2023-02-04 15:00:18 +0000 UTC (0+1 container statuses recorded) Feb 4 15:08:06.155: INFO: Container dns-controller ready: true, restart count 0 Feb 4 15:08:06.507: INFO: Latency metrics for node ip-172-20-51-143.us-west-2.compute.internal Feb 4 15:08:06.507: INFO: Logging node info for node ip-172-20-67-253.us-west-2.compute.internal Feb 4 15:08:06.575: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-67-253.us-west-2.compute.internal 3f0e3f90-4911-41a5-ba7d-97849c0cd813 2146 0 2023-02-04 15:01:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large 
beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-west-2 failure-domain.beta.kubernetes.io/zone:us-west-2b kops.k8s.io/instancegroup:nodes-us-west-2b kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-67-253.us-west-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:c5.large topology.kubernetes.io/region:us-west-2 topology.kubernetes.io/zone:us-west-2b] map[csi.volume.kubernetes.io/nodeid:{"efs.csi.aws.com":"i-04ca571056ad388f3"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-02-04 15:01:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-02-04 15:01:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} } {kubelet Update v1 2023-02-04 15:02:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:attachable-volumes-aws-ebs":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:attachable-volumes-aws-ebs":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}} 
}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///us-west-2b/i-04ca571056ad388f3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3862945792 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3758088192 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-04 15:01:09 +0000 UTC,LastTransitionTime:2023-02-04 15:01:09 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-04 15:05:45 +0000 UTC,LastTransitionTime:2023-02-04 15:01:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-04 15:05:45 +0000 UTC,LastTransitionTime:2023-02-04 15:01:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-04 15:05:45 +0000 UTC,LastTransitionTime:2023-02-04 15:01:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-04 15:05:45 +0000 UTC,LastTransitionTime:2023-02-04 15:01:14 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.67.253,},NodeAddress{Type:ExternalIP,Address:54.184.237.132,},NodeAddress{Type:Hostname,Address:ip-172-20-67-253.us-west-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-67-253.us-west-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-184-237-132.us-west-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2b55a9f9fcac735c36755548381ade,SystemUUID:ec2b55a9-f9fc-ac73-5c36-755548381ade,BootID:4e6827d7-98d6-4d57-8950-adc3cb206d50,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 20.04.5 LTS,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.20.8,KubeProxyVersion:v1.20.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[607362164682.dkr.ecr.us-west-2.amazonaws.com/aws-efs-csi-driver@sha256:57c405141918c4bb49903e87d8eebc972039849e0b665faf23d41ff234d984f0 607362164682.dkr.ecr.us-west-2.amazonaws.com/aws-efs-csi-driver:20151],SizeBytes:501598206,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:0c867c82a0a8ce6d093595f9d2e4b10517d6c9c26323075de9d82d9f7d056bfc k8s.gcr.io/kube-proxy:v1.20.8],SizeBytes:52056682,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler@sha256:67640771ad9fc56f109d5b01e020f0c858e7c890bb0eb15ba0ebd325df3285e7 k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.3],SizeBytes:15191740,},ContainerImage{Names:[public.ecr.aws/eks-distro/kubernetes-csi/external-provisioner@sha256:968bbebc038892e1685f187feffbeccc13e327ab1672fb3ab332cca835085cd1 public.ecr.aws/eks-distro/kubernetes-csi/external-provisioner:v3.3.0-eks-1-23-8],SizeBytes:13992487,},ContainerImage{Names:[public.ecr.aws/eks-distro/kubernetes-csi/node-driver-registrar@sha256:ac2e6e8847edb7eb98b67fc6a9aaa3e270bb4abdf9e17e2fbc4cf9811fb85ec3 public.ecr.aws/eks-distro/kubernetes-csi/node-driver-registrar:v2.6.1-eks-1-23-8],SizeBytes:6552778,},ContainerImage{Names:[public.ecr.aws/eks-distro/kubernetes-csi/livenessprobe@sha256:9977e7ff7147156f7cc67910d28f6085d2faba4b622a098d9e2ce696897172c8 public.ecr.aws/eks-distro/kubernetes-csi/livenessprobe:v2.8.0-eks-1-23-8],SizeBytes:6005898,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 4 15:08:06.576: INFO: Logging kubelet events for node ip-172-20-67-253.us-west-2.compute.internal Feb 4 15:08:06.645: INFO: Logging pods the kubelet thinks is on node ip-172-20-67-253.us-west-2.compute.internal Feb 4 15:08:06.718: INFO: efs-csi-node-j86gj started at 2023-02-04 15:01:47 +0000 UTC (0+3 container statuses recorded) Feb 4 15:08:06.718: INFO: Container csi-driver-registrar ready: true, restart count 0 Feb 4 15:08:06.718: INFO: Container efs-plugin ready: true, restart count 0 Feb 4 15:08:06.718: INFO: Container liveness-probe ready: true, restart count 0 Feb 4 15:08:06.718: INFO: efs-csi-controller-597d4d5b58-vcsqc started at 2023-02-04 15:01:47 +0000 UTC (0+3 container 
statuses recorded) Feb 4 15:08:06.718: INFO: Container csi-provisioner ready: true, restart count 0 Feb 4 15:08:06.718: INFO: Container efs-plugin ready: true, restart count 0 Feb 4 15:08:06.718: INFO: Container liveness-probe ready: true, restart count 0 Feb 4 15:08:06.718: INFO: pod-a6bec740-a9bd-4647-8b99-879b23b32217 started at 2023-02-04 15:07:26 +0000 UTC (0+1 container statuses recorded) Feb 4 15:08:06.718: INFO: Container write-pod ready: false, restart count 0 Feb 4 15:08:06.718: INFO: kube-proxy-ip-172-20-67-253.us-west-2.compute.internal started at 2023-02-04 15:00:44 +0000 UTC (0+1 container statuses recorded) Feb 4 15:08:06.718: INFO: Container kube-proxy ready: true, restart count 0 Feb 4 15:08:06.718: INFO: coredns-autoscaler-6f594f4c58-hdzn8 started at 2023-02-04 15:01:15 +0000 UTC (0+1 container statuses recorded) Feb 4 15:08:06.718: INFO: Container autoscaler ready: true, restart count 0 Feb 4 15:08:07.007: INFO: Latency metrics for node ip-172-20-67-253.us-west-2.compute.internal Feb 4 15:08:07.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "efs-3612" for this suite.
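For context on where the expected 777 comes from: in directory provisioning mode the permissions of the directory created for each PVC are driven by the StorageClass parameters, and the suite builds a StorageClass much like the efs-3612p6srt one logged above. A rough client-go sketch of creating such a class against this cluster follows; the parameter keys and values are assumptions for illustration only, and the authoritative set is whatever the PR under test (refs: 732 above) and the driver documentation define.

    package main

    import (
        "context"
        "fmt"

        storagev1 "k8s.io/api/storage/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path taken from the log above.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-20151.k8s.local.kops.kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        sc := &storagev1.StorageClass{
            ObjectMeta:  metav1.ObjectMeta{GenerateName: "efs-dir-"},
            Provisioner: "efs.csi.aws.com", // provisioner name from the events above
            Parameters: map[string]string{
                // Hypothetical keys/values; substitute the ones the PR under test defines.
                "fileSystemId":   "fs-0123456789abcdef0",
                "directoryPerms": "777", // the mode the failing assertion expected to find
                "basePath":       "/dynamic_provisioning",
            },
        }

        created, err := cs.StorageV1().StorageClasses().Create(context.TODO(), sc, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("created StorageClass", created.Name)
    }

The remaining specs recorded for this run are listed below.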
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail in binding dynamic provisioned PV to PVC [Slow][LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow][LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volumes should store data
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to create pod by failing to mount volume [Slow]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] should continue reading/writing without hanging after the driver pod is restarted
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] should delete a directory provisioned in directory provisioning mode
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] should mount different paths on same volume on same node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] should mount with option tls when encryptInTransit set true
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] should mount with option tls when encryptInTransit unset
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] should mount without option tls when encryptInTransit set false
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volumes should store data
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ext3)] volumes should store data
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] volumes should store data
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow][LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Inline-volume (default fs)] volumes should store data
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Inline-volume (ext3)] volumes should store data
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Inline-volume (ext4)] volumes should store data
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow][LinuxOnly]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
EFS CSI Suite [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
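The spec names listed above are plain strings assembled from nested Ginkgo containers, which is why command-line focus/skip regexes can select on the bracketed tags ([Slow], [Disruptive], [Feature:...]) directly. A minimal sketch of that wiring, assuming Ginkgo v1-style dot imports; the container and spec strings are copied from the list above and the empty body is a placeholder, not this suite's code:

package e2esketch

import (
	. "github.com/onsi/ginkgo"
)

// The full spec name shown in the report is the concatenation of the suite name
// and these nested description strings, so a regex such as `\[Slow\]` matches the
// literal tag text inside the name.
var _ = Describe("[efs-csi] EFS CSI", func() {
	Describe("[Driver: efs.csi.aws.com]", func() {
		It("should mount with option tls when encryptInTransit set true", func() {
			// test body elided; a real spec would create the PV/PVC and exec `mount` in a pod
		})
	})
})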
... skipping 330 lines ... #0 2.605 --> Processing Dependency: libperl.so()(64bit) for package: 4:perl-5.16.3-299.amzn2.0.1.x86_64 #0 2.605 ---> Package perl-File-Temp.noarch 0:0.23.01-3.amzn2 will be installed #0 2.607 ---> Package perl-Getopt-Long.noarch 0:2.40-3.amzn2 will be installed #0 2.608 --> Processing Dependency: perl(Pod::Usage) >= 1.14 for package: perl-Getopt-Long-2.40-3.amzn2.noarch #0 2.609 --> Processing Dependency: perl(Text::ParseWords) for package: perl-Getopt-Long-2.40-3.amzn2.noarch #0 2.610 ---> Package perl-Git.noarch 0:2.39.1-1.amzn2.0.1 will be installed #0 2.611 --> Processing Dependency: perl(Error) for package: perl-Git-2.39.1-1.amzn2.0.1.noarch #0 2.612 ---> Package perl-PathTools.x86_64 0:3.40-5.amzn2.0.2 will be installed #0 2.614 ---> Package perl-TermReadKey.x86_64 0:2.30-20.amzn2.0.2 will be installed #0 2.615 ---> Package perl-Thread-Queue.noarch 0:3.02-2.amzn2 will be installed #0 2.616 ---> Package perl-threads.x86_64 0:1.87-4.amzn2.0.2 will be installed #0 2.618 ---> Package pkgconfig.x86_64 1:0.27.1-4.amzn2.0.2 will be installed #0 2.621 ---> Package system-rpm-config.noarch 0:9.1.0-76.amzn2.0.14 will be installed ... skipping 15 lines ... #0 2.700 --> Processing Dependency: openssh = 7.4p1-22.amzn2.0.1 for package: openssh-clients-7.4p1-22.amzn2.0.1.x86_64 #0 2.703 --> Processing Dependency: fipscheck-lib(x86-64) >= 1.3.0 for package: openssh-clients-7.4p1-22.amzn2.0.1.x86_64 #0 2.705 --> Processing Dependency: libfipscheck.so.1()(64bit) for package: openssh-clients-7.4p1-22.amzn2.0.1.x86_64 #0 2.705 --> Processing Dependency: libedit.so.0()(64bit) for package: openssh-clients-7.4p1-22.amzn2.0.1.x86_64 #0 2.707 ---> Package pcre2.x86_64 0:10.23-11.amzn2.0.1 will be installed #0 2.709 ---> Package perl-Carp.noarch 0:1.26-244.amzn2 will be installed #0 2.710 ---> Package perl-Error.noarch 1:0.17020-2.amzn2 will be installed #0 2.711 ---> Package perl-Exporter.noarch 0:5.68-3.amzn2 will be installed #0 2.712 ---> Package perl-File-Path.noarch 0:2.09-2.amzn2 will be installed #0 2.713 ---> Package perl-Filter.x86_64 0:1.49-3.amzn2.0.2 will be installed #0 2.715 ---> Package perl-Pod-Simple.noarch 1:3.28-4.amzn2 will be installed #0 2.720 --> Processing Dependency: perl(Pod::Escapes) >= 1.04 for package: 1:perl-Pod-Simple-3.28-4.amzn2.noarch #0 2.722 --> Processing Dependency: perl(Encode) for package: 1:perl-Pod-Simple-3.28-4.amzn2.noarch ... skipping 161 lines ... #0 3.185 pam x86_64 1.1.8-23.amzn2.0.1 amzn2-core 715 k #0 3.185 patch x86_64 2.7.1-12.amzn2.0.2 amzn2-core 110 k #0 3.185 pcre2 x86_64 10.23-11.amzn2.0.1 amzn2-core 207 k #0 3.185 perl x86_64 4:5.16.3-299.amzn2.0.1 amzn2-core 8.0 M #0 3.185 perl-Carp noarch 1.26-244.amzn2 amzn2-core 19 k #0 3.185 perl-Encode x86_64 2.51-7.amzn2.0.2 amzn2-core 1.5 M #0 3.185 perl-Error noarch 1:0.17020-2.amzn2 amzn2-core 32 k #0 3.185 perl-Exporter noarch 5.68-3.amzn2 amzn2-core 29 k #0 3.185 perl-File-Path noarch 2.09-2.amzn2 amzn2-core 27 k #0 3.185 perl-File-Temp noarch 0.23.01-3.amzn2 amzn2-core 56 k #0 3.185 perl-Filter x86_64 1.49-3.amzn2.0.2 amzn2-core 76 k #0 3.185 perl-Getopt-Long noarch 2.40-3.amzn2 amzn2-core 56 k #0 3.185 perl-Git noarch 2.39.1-1.amzn2.0.1 amzn2-core 41 k ... skipping 81 lines ... 
#13 9.211 Installing : 1:perl-Pod-Simple-3.28-4.amzn2.noarch 34/87 #13 9.260 Installing : perl-Getopt-Long-2.40-3.amzn2.noarch 35/87 #13 9.413 Installing : 4:perl-libs-5.16.3-299.amzn2.0.1.x86_64 36/87 #13 11.08 Installing : 4:perl-5.16.3-299.amzn2.0.1.x86_64 37/87 #13 11.15 Installing : perl-Thread-Queue-3.02-2.amzn2.noarch 38/87 #13 11.20 Installing : perl-TermReadKey-2.30-20.amzn2.0.2.x86_64 39/87 #13 11.24 Installing : 1:perl-Error-0.17020-2.amzn2.noarch 40/87 #13 11.27 Installing : fipscheck-lib-1.4.1-6.amzn2.0.2.x86_64 41/87 #13 11.31 Installing : fipscheck-1.4.1-6.amzn2.0.2.x86_64 42/87 #13 11.36 Installing : dwz-0.11-3.amzn2.0.3.x86_64 43/87 #13 11.40 Installing : 1:pkgconfig-0.27.1-4.amzn2.0.2.x86_64 44/87 #13 11.44 Installing : kmod-libs-25-3.amzn2.0.2.x86_64 45/87 #13 11.53 Installing : unzip-6.0-57.amzn2.0.1.x86_64 46/87 ... skipping 32 lines ... #13 15.97 Installing : 7:device-mapper-libs-1.02.170-6.amzn2.5.x86_64 71/87 #13 16.07 Installing : cryptsetup-libs-1.7.4-4.amzn2.x86_64 72/87 #13 16.17 Installing : elfutils-libs-0.176-2.amzn2.x86_64 73/87 #13 16.30 Installing : systemd-libs-219-78.amzn2.0.21.x86_64 74/87 #13 16.37 Installing : 1:dbus-libs-1.10.24-7.amzn2.0.2.x86_64 75/87 #13 17.91 Installing : systemd-219-78.amzn2.0.21.x86_64 76/87 #13 18.51 Failed to get D-Bus connection: Operation not permitted #13 18.65 Installing : 1:dbus-1.10.24-7.amzn2.0.2.x86_64 77/87 #13 18.69 Installing : elfutils-default-yama-scope-0.176-2.amzn2.noarch 78/87 #13 18.79 Installing : elfutils-0.176-2.amzn2.x86_64 79/87 #13 18.96 Installing : openssh-7.4p1-22.amzn2.0.1.x86_64 80/87 #13 19.12 Installing : openssh-clients-7.4p1-22.amzn2.0.1.x86_64 81/87 #13 20.91 Installing : git-core-2.39.1-1.amzn2.0.1.x86_64 82/87 ... skipping 57 lines ... #13 23.01 Verifying : util-linux-2.30.2-2.amzn2.0.11.x86_64 53/87 #13 23.02 Verifying : pam-1.1.8-23.amzn2.0.1.x86_64 54/87 #13 23.04 Verifying : xz-5.2.2-1.amzn2.0.3.x86_64 55/87 #13 23.05 Verifying : ustr-1.0.4-16.amzn2.0.3.x86_64 56/87 #13 23.07 Verifying : less-458-9.amzn2.0.2.x86_64 57/87 #13 23.08 Verifying : 1:perl-Pod-Escapes-1.04-299.amzn2.0.1.noarch 58/87 #13 23.10 Verifying : 1:perl-Error-0.17020-2.amzn2.noarch 59/87 #13 23.11 Verifying : perl-Pod-Usage-1.63-3.amzn2.noarch 60/87 #13 23.12 Verifying : 1:perl-parent-0.225-244.amzn2.0.1.noarch 61/87 #13 23.14 Verifying : perl-Pod-Perldoc-3.20-4.amzn2.noarch 62/87 #13 23.15 Verifying : 2:tar-1.26-35.amzn2.x86_64 63/87 #13 23.16 Verifying : zip-3.0-11.amzn2.0.2.x86_64 64/87 #13 23.18 Verifying : 1:dbus-libs-1.10.24-7.amzn2.0.2.x86_64 65/87 ... skipping 65 lines ... #13 23.57 pam.x86_64 0:1.1.8-23.amzn2.0.1 #13 23.57 patch.x86_64 0:2.7.1-12.amzn2.0.2 #13 23.57 pcre2.x86_64 0:10.23-11.amzn2.0.1 #13 23.57 perl.x86_64 4:5.16.3-299.amzn2.0.1 #13 23.57 perl-Carp.noarch 0:1.26-244.amzn2 #13 23.57 perl-Encode.x86_64 0:2.51-7.amzn2.0.2 #13 23.57 perl-Error.noarch 1:0.17020-2.amzn2 #13 23.57 perl-Exporter.noarch 0:5.68-3.amzn2 #13 23.57 perl-File-Path.noarch 0:2.09-2.amzn2 #13 23.57 perl-File-Temp.noarch 0:0.23.01-3.amzn2 #13 23.57 perl-Filter.x86_64 0:1.49-3.amzn2.0.2 #13 23.57 perl-Getopt-Long.noarch 0:2.40-3.amzn2 #13 23.57 perl-Git.noarch 0:2.39.1-1.amzn2.0.1 ... skipping 514 lines ... 
#19 exporting to image #19 exporting layers #19 exporting layers 2.9s done #19 writing image sha256:9b20a07b2ece60d015b990892d877f7ddd0752c77b6f286877523f5fa7504472 done #19 naming to 607362164682.dkr.ecr.us-west-2.amazonaws.com/aws-efs-csi-driver:20151-linux-amd64-amazon done #19 DONE 2.9s WARNING: failed to get git remote url: fatal: No remote configured to list refs from. touch .image-20151-linux-amd64-amazon make[1]: Leaving directory '/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver' The push refers to repository [607362164682.dkr.ecr.us-west-2.amazonaws.com/aws-efs-csi-driver] c9ede89d7aea: Preparing c1a5acef8786: Preparing f0936a55bfc4: Preparing ... skipping 118 lines ... ## Validating cluster test-cluster-20151.k8s.local # Using cluster from kubectl context: test-cluster-20151.k8s.local Validating cluster test-cluster-20151.k8s.local W0204 14:57:22.056688 5550 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-20151-k8-tpalq9-2129878425.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-20151-k8-tpalq9-2129878425.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host W0204 14:57:32.096740 5550 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-20151-k8-tpalq9-2129878425.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-20151-k8-tpalq9-2129878425.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host W0204 14:57:42.116522 5550 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-20151-k8-tpalq9-2129878425.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-20151-k8-tpalq9-2129878425.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host W0204 14:57:52.154306 5550 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-20151-k8-tpalq9-2129878425.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-20151-k8-tpalq9-2129878425.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host W0204 14:58:06.833886 5550 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-20151-k8-tpalq9-2129878425.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-20151-k8-tpalq9-2129878425.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host W0204 14:58:28.550295 5550 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the server ("") has prevented the request from succeeding (get nodes) W0204 14:58:50.226692 5550 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the server ("") has prevented the request from succeeding (get nodes) W0204 14:59:11.934346 5550 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the server ("") has prevented the request from succeeding (get nodes) W0204 14:59:33.627891 5550 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the server ("") has prevented the request from succeeding (get nodes) W0204 14:59:55.308670 5550 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the 
server ("") has prevented the request from succeeding (get nodes) INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-west-2a Master t3.medium 1 1 us-west-2a nodes-us-west-2a Node c5.large 1 1 us-west-2a nodes-us-west-2b Node c5.large 1 1 us-west-2b nodes-us-west-2c Node c5.large 1 1 us-west-2c ... skipping 6 lines ... KIND NAME MESSAGE Machine i-04ca571056ad388f3 machine "i-04ca571056ad388f3" has not yet joined cluster Machine i-07157f6995784a387 machine "i-07157f6995784a387" has not yet joined cluster Machine i-095c38922a72cb81d machine "i-095c38922a72cb81d" has not yet joined cluster Node ip-172-20-51-143.us-west-2.compute.internal node "ip-172-20-51-143.us-west-2.compute.internal" of role "master" is not ready Validation Failed W0204 15:00:08.008831 5550 validate_cluster.go:221] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-west-2a Master t3.medium 1 1 us-west-2a nodes-us-west-2a Node c5.large 1 1 us-west-2a nodes-us-west-2b Node c5.large 1 1 us-west-2b ... skipping 11 lines ... Node ip-172-20-51-143.us-west-2.compute.internal node "ip-172-20-51-143.us-west-2.compute.internal" of role "master" is not ready Pod kube-system/coredns-8f5559c9b-gbddh system-cluster-critical pod "coredns-8f5559c9b-gbddh" is pending Pod kube-system/coredns-autoscaler-6f594f4c58-hdzn8 system-cluster-critical pod "coredns-autoscaler-6f594f4c58-hdzn8" is pending Pod kube-system/dns-controller-5d59c585d8-skd8j system-cluster-critical pod "dns-controller-5d59c585d8-skd8j" is pending Pod kube-system/kops-controller-g2j5q system-node-critical pod "kops-controller-g2j5q" is pending Validation Failed W0204 15:00:20.401821 5550 validate_cluster.go:221] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-west-2a Master t3.medium 1 1 us-west-2a nodes-us-west-2a Node c5.large 1 1 us-west-2a nodes-us-west-2b Node c5.large 1 1 us-west-2b ... skipping 12 lines ... Node ip-172-20-51-143.us-west-2.compute.internal master "ip-172-20-51-143.us-west-2.compute.internal" is missing kube-controller-manager pod Node ip-172-20-51-143.us-west-2.compute.internal master "ip-172-20-51-143.us-west-2.compute.internal" is missing kube-scheduler pod Pod kube-system/coredns-8f5559c9b-gbddh system-cluster-critical pod "coredns-8f5559c9b-gbddh" is pending Pod kube-system/coredns-autoscaler-6f594f4c58-hdzn8 system-cluster-critical pod "coredns-autoscaler-6f594f4c58-hdzn8" is pending Pod kube-system/kube-scheduler-ip-172-20-51-143.us-west-2.compute.internal system-cluster-critical pod "kube-scheduler-ip-172-20-51-143.us-west-2.compute.internal" is pending Validation Failed W0204 15:00:32.515789 5550 validate_cluster.go:221] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-west-2a Master t3.medium 1 1 us-west-2a nodes-us-west-2a Node c5.large 1 1 us-west-2a nodes-us-west-2b Node c5.large 1 1 us-west-2b ... skipping 13 lines ... 
Pod kube-system/coredns-8f5559c9b-gbddh system-cluster-critical pod "coredns-8f5559c9b-gbddh" is pending Pod kube-system/coredns-autoscaler-6f594f4c58-hdzn8 system-cluster-critical pod "coredns-autoscaler-6f594f4c58-hdzn8" is pending Pod kube-system/etcd-manager-events-ip-172-20-51-143.us-west-2.compute.internal system-cluster-critical pod "etcd-manager-events-ip-172-20-51-143.us-west-2.compute.internal" is pending Pod kube-system/kube-controller-manager-ip-172-20-51-143.us-west-2.compute.internal system-cluster-critical pod "kube-controller-manager-ip-172-20-51-143.us-west-2.compute.internal" is pending Pod kube-system/kube-proxy-ip-172-20-51-143.us-west-2.compute.internal system-node-critical pod "kube-proxy-ip-172-20-51-143.us-west-2.compute.internal" is pending Validation Failed W0204 15:00:44.550603 5550 validate_cluster.go:221] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-west-2a Master t3.medium 1 1 us-west-2a nodes-us-west-2a Node c5.large 1 1 us-west-2a nodes-us-west-2b Node c5.large 1 1 us-west-2b ... skipping 8 lines ... Machine i-04ca571056ad388f3 machine "i-04ca571056ad388f3" has not yet joined cluster Machine i-07157f6995784a387 machine "i-07157f6995784a387" has not yet joined cluster Machine i-095c38922a72cb81d machine "i-095c38922a72cb81d" has not yet joined cluster Pod kube-system/coredns-8f5559c9b-gbddh system-cluster-critical pod "coredns-8f5559c9b-gbddh" is pending Pod kube-system/coredns-autoscaler-6f594f4c58-hdzn8 system-cluster-critical pod "coredns-autoscaler-6f594f4c58-hdzn8" is pending Validation Failed W0204 15:00:56.634287 5550 validate_cluster.go:221] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-west-2a Master t3.medium 1 1 us-west-2a nodes-us-west-2a Node c5.large 1 1 us-west-2a nodes-us-west-2b Node c5.large 1 1 us-west-2b ... skipping 10 lines ... Machine i-095c38922a72cb81d machine "i-095c38922a72cb81d" has not yet joined cluster Node ip-172-20-107-163.us-west-2.compute.internal node "ip-172-20-107-163.us-west-2.compute.internal" of role "node" is not ready Node ip-172-20-67-253.us-west-2.compute.internal node "ip-172-20-67-253.us-west-2.compute.internal" of role "node" is not ready Pod kube-system/coredns-8f5559c9b-gbddh system-cluster-critical pod "coredns-8f5559c9b-gbddh" is pending Pod kube-system/coredns-autoscaler-6f594f4c58-hdzn8 system-cluster-critical pod "coredns-autoscaler-6f594f4c58-hdzn8" is pending Validation Failed W0204 15:01:08.796939 5550 validate_cluster.go:221] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-west-2a Master t3.medium 1 1 us-west-2a nodes-us-west-2a Node c5.large 1 1 us-west-2a nodes-us-west-2b Node c5.large 1 1 us-west-2b ... skipping 8 lines ... VALIDATION ERRORS KIND NAME MESSAGE Node ip-172-20-37-220.us-west-2.compute.internal node "ip-172-20-37-220.us-west-2.compute.internal" of role "node" is not ready Pod kube-system/coredns-autoscaler-6f594f4c58-hdzn8 system-cluster-critical pod "coredns-autoscaler-6f594f4c58-hdzn8" is pending Validation Failed W0204 15:01:21.103905 5550 validate_cluster.go:221] (will retry): cluster not yet healthy INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-us-west-2a Master t3.medium 1 1 us-west-2a nodes-us-west-2a Node c5.large 1 1 us-west-2a nodes-us-west-2b Node c5.large 1 1 us-west-2b ... skipping 7 lines ... 
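The repeated "Validation Failed ... (will retry): cluster not yet healthy" warnings in this stretch of the log are kops' validate loop re-checking the cluster until every instance group has joined and every node and system pod is healthy. A rough sketch of the same polling idea, assuming a configured client-go clientset; this is illustrative only and is not kops' validate_cluster.go:

package clustervalidation

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNodesReady polls until at least `want` nodes report the Ready condition.
func waitForNodesReady(cs kubernetes.Interface, want int) error {
	return wait.PollImmediate(10*time.Second, 15*time.Minute, func() (bool, error) {
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			// Transient API/DNS errors (like the "no such host" lookups above) are
			// treated as "not ready yet" and retried rather than failing the wait.
			return false, nil
		}
		ready := 0
		for _, n := range nodes.Items {
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					ready++
					break
				}
			}
		}
		return ready >= want, nil
	})
}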
ip-172-20-67-253.us-west-2.compute.internal node True
VALIDATION ERRORS
KIND NAME MESSAGE
Pod kube-system/coredns-8f5559c9b-n9nsj system-cluster-critical pod "coredns-8f5559c9b-n9nsj" is pending
Validation Failed
W0204 15:01:33.275232 5550 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-west-2a Master t3.medium 1 1 us-west-2a
nodes-us-west-2a Node c5.large 1 1 us-west-2a
nodes-us-west-2b Node c5.large 1 1 us-west-2b
... skipping 312 lines ...
Feb 4 15:04:54.427: INFO: PersistentVolumeClaim pvc-p6jtr found but phase is Pending instead of Bound.
Feb 4 15:04:56.493: INFO: PersistentVolumeClaim pvc-p6jtr found and phase=Bound (2.132636734s)
Feb 4 15:04:56.493: INFO: Waiting up to 3m0s for PersistentVolume efs.csi.aws.com-78mqs to have phase Bound
Feb 4 15:04:56.559: INFO: PersistentVolume efs.csi.aws.com-78mqs found and phase=Bound (65.974099ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-xcjx
STEP: Creating a pod to test exec-volume-test
Feb 4 15:04:56.762: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-xcjx" in namespace "volume-8511" to be "Succeeded or Failed"
Feb 4 15:04:56.830: INFO: Pod "exec-volume-test-preprovisionedpv-xcjx": Phase="Pending", Reason="", readiness=false. Elapsed: 68.20591ms
Feb 4 15:04:58.897: INFO: Pod "exec-volume-test-preprovisionedpv-xcjx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135562364s
Feb 4 15:05:00.964: INFO: Pod "exec-volume-test-preprovisionedpv-xcjx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.201861943s
Feb 4 15:05:03.034: INFO: Pod "exec-volume-test-preprovisionedpv-xcjx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.271894684s
Feb 4 15:05:05.101: INFO: Pod "exec-volume-test-preprovisionedpv-xcjx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.339086846s
Feb 4 15:05:07.168: INFO: Pod "exec-volume-test-preprovisionedpv-xcjx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.406492007s
Feb 4 15:05:09.235: INFO: Pod "exec-volume-test-preprovisionedpv-xcjx": Phase="Pending", Reason="", readiness=false. Elapsed: 12.473450727s
Feb 4 15:05:11.302: INFO: Pod "exec-volume-test-preprovisionedpv-xcjx": Phase="Pending", Reason="", readiness=false. Elapsed: 14.540357298s
Feb 4 15:05:13.370: INFO: Pod "exec-volume-test-preprovisionedpv-xcjx": Phase="Pending", Reason="", readiness=false. Elapsed: 16.608013204s
Feb 4 15:05:15.441: INFO: Pod "exec-volume-test-preprovisionedpv-xcjx": Phase="Pending", Reason="", readiness=false. Elapsed: 18.679429652s
Feb 4 15:05:17.512: INFO: Pod "exec-volume-test-preprovisionedpv-xcjx": Phase="Pending", Reason="", readiness=false. Elapsed: 20.750024832s
Feb 4 15:05:19.579: INFO: Pod "exec-volume-test-preprovisionedpv-xcjx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.81718063s
STEP: Saw pod success
Feb 4 15:05:19.579: INFO: Pod "exec-volume-test-preprovisionedpv-xcjx" satisfied condition "Succeeded or Failed"
Feb 4 15:05:19.645: INFO: Trying to get logs from node ip-172-20-37-220.us-west-2.compute.internal pod exec-volume-test-preprovisionedpv-xcjx container exec-container-preprovisionedpv-xcjx: <nil>
STEP: delete the pod
Feb 4 15:05:19.784: INFO: Waiting for pod exec-volume-test-preprovisionedpv-xcjx to disappear
Feb 4 15:05:19.851: INFO: Pod exec-volume-test-preprovisionedpv-xcjx no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-xcjx
Feb 4 15:05:19.851: INFO: Deleting pod "exec-volume-test-preprovisionedpv-xcjx" in namespace "volume-8511"
... skipping 51 lines ...
Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:244
------------------------------
[efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath
should fail if subpath file is outside the volume [Slow][LinuxOnly]
/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256
[BeforeEach] [efs-csi] EFS CSI
/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:220
[BeforeEach] [efs-csi] EFS CSI
/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185
... skipping 6 lines ...
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 4 15:04:56.746: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-20151.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if subpath file is outside the volume [Slow][LinuxOnly]
/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256
Feb 4 15:04:57.305: INFO: Creating resource for dynamic PV
Feb 4 15:04:57.305: INFO: Using claimSize:1Mi, test suite supported size:{ 1Mi}, driver(efs.csi.aws.com) supported size:{ 1Mi}
STEP: creating a StorageClass
STEP: creating a claim
Feb 4 15:04:57.376: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Feb 4 15:04:57.448: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [efs.csi.aws.comc2z2w] to have phase Bound
Feb 4 15:04:57.517: INFO: PersistentVolumeClaim efs.csi.aws.comc2z2w found but phase is Pending instead of Bound.
Feb 4 15:04:59.587: INFO: PersistentVolumeClaim efs.csi.aws.comc2z2w found and phase=Bound (2.139094844s)
STEP: Creating pod pod-subpath-test-dynamicpv-jjqn
STEP: Checking for subpath error in container status
Feb 4 15:05:21.944: INFO: Deleting pod "pod-subpath-test-dynamicpv-jjqn" in namespace "provisioning-1528"
Feb 4 15:05:22.016: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-jjqn" to be fully deleted
STEP: Deleting pod
Feb 4 15:05:26.155: INFO: Deleting pod "pod-subpath-test-dynamicpv-jjqn" in namespace "provisioning-1528"
STEP: Deleting pvc
Feb 4 15:05:26.223: INFO: Deleting PersistentVolumeClaim "efs.csi.aws.comc2z2w"
... skipping 14 lines ...
[efs-csi] EFS CSI
/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:219
[Driver: efs.csi.aws.com]
/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:227
[Testpattern: Dynamic PV (default fs)] subPath
/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should fail if subpath file is outside the volume [Slow][LinuxOnly]
/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256
------------------------------
[efs-csi] EFS CSI [Driver: efs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath
should support non-existent path
/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
... skipping 24 lines ...
Feb 4 15:04:51.749: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [efs.csi.aws.comrhwfs] to have phase Bound
Feb 4 15:04:51.816: INFO: PersistentVolumeClaim efs.csi.aws.comrhwfs found but phase is Pending instead of Bound.
Feb 4 15:04:53.883: INFO: PersistentVolumeClaim efs.csi.aws.comrhwfs found but phase is Pending instead of Bound.
Feb 4 15:04:55.949: INFO: PersistentVolumeClaim efs.csi.aws.comrhwfs found and phase=Bound (4.20034881s)
STEP: Creating pod pod-subpath-test-dynamicpv-vvqx
STEP: Creating a pod to test subpath
Feb 4 15:04:56.150: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-vvqx" in namespace "provisioning-1318" to be "Succeeded or Failed"
Feb 4 15:04:56.277: INFO: Pod "pod-subpath-test-dynamicpv-vvqx": Phase="Pending", Reason="", readiness=false. Elapsed: 126.708711ms
Feb 4 15:04:58.344: INFO: Pod "pod-subpath-test-dynamicpv-vvqx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193594862s
Feb 4 15:05:00.411: INFO: Pod "pod-subpath-test-dynamicpv-vvqx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.260507632s
Feb 4 15:05:02.477: INFO: Pod "pod-subpath-test-dynamicpv-vvqx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.327127534s
Feb 4 15:05:04.544: INFO: Pod "pod-subpath-test-dynamicpv-vvqx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.394050445s
Feb 4 15:05:06.611: INFO: Pod "pod-subpath-test-dynamicpv-vvqx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.460919166s
... skipping 2 lines ...
Feb 4 15:05:12.814: INFO: Pod "pod-subpath-test-dynamicpv-vvqx": Phase="Pending", Reason="", readiness=false. Elapsed: 16.664194286s
Feb 4 15:05:14.882: INFO: Pod "pod-subpath-test-dynamicpv-vvqx": Phase="Pending", Reason="", readiness=false. Elapsed: 18.731380709s
Feb 4 15:05:16.949: INFO: Pod "pod-subpath-test-dynamicpv-vvqx": Phase="Pending", Reason="", readiness=false. Elapsed: 20.798577308s
Feb 4 15:05:19.017: INFO: Pod "pod-subpath-test-dynamicpv-vvqx": Phase="Pending", Reason="", readiness=false. Elapsed: 22.866883177s
Feb 4 15:05:21.084: INFO: Pod "pod-subpath-test-dynamicpv-vvqx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.934279588s
STEP: Saw pod success
Feb 4 15:05:21.085: INFO: Pod "pod-subpath-test-dynamicpv-vvqx" satisfied condition "Succeeded or Failed"
Feb 4 15:05:21.150: INFO: Trying to get logs from node ip-172-20-37-220.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-vvqx container test-container-volume-dynamicpv-vvqx: <nil>
STEP: delete the pod
Feb 4 15:05:21.294: INFO: Waiting for pod pod-subpath-test-dynamicpv-vvqx to disappear
Feb 4 15:05:21.360: INFO: Pod pod-subpath-test-dynamicpv-vvqx no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-vvqx
Feb 4 15:05:21.360: INFO: Deleting pod "pod-subpath-test-dynamicpv-vvqx" in namespace "provisioning-1318"
... skipping 48 lines ...
[efs-csi] EFS CSI
/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:219
[Driver: efs.csi.aws.com]
/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:227
[Testpattern: Inline-volume (default fs)] subPath
/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach]
/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267
Driver efs.csi.aws.com doesn't support InlineVolume -- skipping
/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 195 lines ...
[efs-csi] EFS CSI
/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:219
[Driver: efs.csi.aws.com]
/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:227
[Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach]
/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267
Driver efs.csi.aws.com doesn't support ntfs -- skipping
/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
... skipping 59 lines ...
Feb 4 15:05:21.739: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Feb 4 15:05:21.810: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [efs.csi.aws.comtp5zq] to have phase Bound
Feb 4 15:05:21.876: INFO: PersistentVolumeClaim efs.csi.aws.comtp5zq found but phase is Pending instead of Bound.
Feb 4 15:05:23.943: INFO: PersistentVolumeClaim efs.csi.aws.comtp5zq found and phase=Bound (2.133549226s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-sr72 [1mSTEP[0m: Creating a pod to test subpath Feb 4 15:05:24.146: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-sr72" in namespace "provisioning-8287" to be "Succeeded or Failed" Feb 4 15:05:24.212: INFO: Pod "pod-subpath-test-dynamicpv-sr72": Phase="Pending", Reason="", readiness=false. Elapsed: 66.264801ms Feb 4 15:05:26.279: INFO: Pod "pod-subpath-test-dynamicpv-sr72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1331851s Feb 4 15:05:28.346: INFO: Pod "pod-subpath-test-dynamicpv-sr72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.199736587s [1mSTEP[0m: Saw pod success Feb 4 15:05:28.346: INFO: Pod "pod-subpath-test-dynamicpv-sr72" satisfied condition "Succeeded or Failed" Feb 4 15:05:28.411: INFO: Trying to get logs from node ip-172-20-37-220.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-sr72 container test-container-subpath-dynamicpv-sr72: <nil> [1mSTEP[0m: delete the pod Feb 4 15:05:28.551: INFO: Waiting for pod pod-subpath-test-dynamicpv-sr72 to disappear Feb 4 15:05:28.616: INFO: Pod pod-subpath-test-dynamicpv-sr72 no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-sr72 Feb 4 15:05:28.616: INFO: Deleting pod "pod-subpath-test-dynamicpv-sr72" in namespace "provisioning-8287" ... skipping 147 lines ... [1mSTEP[0m: Building a namespace api object, basename efs [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should mount with option tls when encryptInTransit set true /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:367 [1mSTEP[0m: Creating efs pvc & pv [1mSTEP[0m: Creating pod to mount pvc "efs-3237" and run "mount && mount | grep /mnt/volume1 | grep 127.0.0.1" Feb 4 15:04:58.941: INFO: Waiting up to 5m0s for pod "pvc-tester-nclcs" in namespace "efs-3237" to be "Succeeded or Failed" Feb 4 15:04:59.010: INFO: Pod "pvc-tester-nclcs": Phase="Pending", Reason="", readiness=false. Elapsed: 68.889256ms Feb 4 15:05:01.079: INFO: Pod "pvc-tester-nclcs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137669679s Feb 4 15:05:03.149: INFO: Pod "pvc-tester-nclcs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.207602505s Feb 4 15:05:05.219: INFO: Pod "pvc-tester-nclcs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.277885559s Feb 4 15:05:07.288: INFO: Pod "pvc-tester-nclcs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.3466512s Feb 4 15:05:09.358: INFO: Pod "pvc-tester-nclcs": Phase="Pending", Reason="", readiness=false. Elapsed: 10.41662543s ... skipping 8 lines ... Feb 4 15:05:27.998: INFO: Pod "pvc-tester-nclcs": Phase="Pending", Reason="", readiness=false. Elapsed: 29.056652368s Feb 4 15:05:30.067: INFO: Pod "pvc-tester-nclcs": Phase="Pending", Reason="", readiness=false. Elapsed: 31.125599932s Feb 4 15:05:32.138: INFO: Pod "pvc-tester-nclcs": Phase="Pending", Reason="", readiness=false. Elapsed: 33.1960824s Feb 4 15:05:34.207: INFO: Pod "pvc-tester-nclcs": Phase="Pending", Reason="", readiness=false. Elapsed: 35.265088077s Feb 4 15:05:36.277: INFO: Pod "pvc-tester-nclcs": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 37.335782022s [1mSTEP[0m: Saw pod success Feb 4 15:05:36.277: INFO: Pod "pvc-tester-nclcs" satisfied condition "Succeeded or Failed" Feb 4 15:05:36.348: INFO: pod "pvc-tester-nclcs" logs: overlay on / type overlay (rw,relatime,lowerdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/53/fs,upperdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/60/fs,workdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/60/work) proc on /proc type proc (rw,nosuid,nodev,noexec,relatime) tmpfs on /dev type tmpfs (rw,nosuid,size=65536k,mode=755,inode64) devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666) mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime) ... skipping 252 lines ... Feb 4 15:05:29.607: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Feb 4 15:05:29.679: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [efs.csi.aws.comm6n7l] to have phase Bound Feb 4 15:05:29.748: INFO: PersistentVolumeClaim efs.csi.aws.comm6n7l found but phase is Pending instead of Bound. Feb 4 15:05:31.817: INFO: PersistentVolumeClaim efs.csi.aws.comm6n7l found and phase=Bound (2.137810137s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-lhwz [1mSTEP[0m: Creating a pod to test subpath Feb 4 15:05:32.035: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-lhwz" in namespace "provisioning-7189" to be "Succeeded or Failed" Feb 4 15:05:32.106: INFO: Pod "pod-subpath-test-dynamicpv-lhwz": Phase="Pending", Reason="", readiness=false. Elapsed: 70.326348ms Feb 4 15:05:34.174: INFO: Pod "pod-subpath-test-dynamicpv-lhwz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138975894s Feb 4 15:05:36.244: INFO: Pod "pod-subpath-test-dynamicpv-lhwz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208693792s Feb 4 15:05:38.313: INFO: Pod "pod-subpath-test-dynamicpv-lhwz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.277353976s [1mSTEP[0m: Saw pod success Feb 4 15:05:38.313: INFO: Pod "pod-subpath-test-dynamicpv-lhwz" satisfied condition "Succeeded or Failed" Feb 4 15:05:38.381: INFO: Trying to get logs from node ip-172-20-37-220.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-lhwz container test-container-subpath-dynamicpv-lhwz: <nil> [1mSTEP[0m: delete the pod Feb 4 15:05:38.527: INFO: Waiting for pod pod-subpath-test-dynamicpv-lhwz to disappear Feb 4 15:05:38.594: INFO: Pod pod-subpath-test-dynamicpv-lhwz no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-lhwz Feb 4 15:05:38.594: INFO: Deleting pod "pod-subpath-test-dynamicpv-lhwz" in namespace "provisioning-7189" ... skipping 796 lines ... 
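The encryptInTransit=true case above passes when the test pod's mount table shows the EFS volume served from 127.0.0.1, because efs-utils routes NFS traffic through a local stunnel listener whenever the tls mount option is set. A minimal way to repeat that check by hand, assuming a long-running pod in this run's namespace still has the PVC mounted at /mnt/volume1 (the exec only works while the pod is running):

  # With tls, the NFS server address in the mount table is the local stunnel proxy
  kubectl exec -n efs-3237 pvc-tester-nclcs -- sh -c 'mount | grep /mnt/volume1'
  # Expected shape of the output: 127.0.0.1:/ on /mnt/volume1 type nfs4 (rw,relatime,...)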
[efs-csi] EFS CSI [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:219[0m [Driver: efs.csi.aws.com] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:227[0m [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278[0m [36mDriver efs.csi.aws.com doesn't support ntfs -- skipping[0m /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121 [90m------------------------------[0m ... skipping 111 lines ... [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Feb 4 15:06:27.842: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-20151.k8s.local.kops.kubeconfig [1mSTEP[0m: Building a namespace api object, basename volumemode [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should fail to use a volume in a pod with mismatched mode [Slow] /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297 Feb 4 15:06:28.175: INFO: Driver "efs.csi.aws.com" does not provide raw block - skipping [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 15:06:28.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "volumemode-8010" for this suite. ... skipping 7 lines ... [efs-csi] EFS CSI [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:219[0m [Driver: efs.csi.aws.com] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:227[0m [Testpattern: Pre-provisioned PV (block volmode)] volumeMode [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to use a volume in a pod with mismatched mode [Slow] [It][0m [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297[0m [36mDriver "efs.csi.aws.com" does not provide raw block - skipping[0m /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:113 [90m------------------------------[0m ... skipping 86 lines ... 
[efs-csi] EFS CSI [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:219[0m [Driver: efs.csi.aws.com] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:227[0m [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256[0m [36mDriver efs.csi.aws.com doesn't support ntfs -- skipping[0m /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121 [90m------------------------------[0m ... skipping 53 lines ... [efs-csi] EFS CSI [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:219[0m [Driver: efs.csi.aws.com] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:227[0m [Testpattern: Inline-volume (default fs)] subPath [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256[0m [36mDriver efs.csi.aws.com doesn't support InlineVolume -- skipping[0m /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m ... skipping 76 lines ... [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Feb 4 15:06:30.185: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-20151.k8s.local.kops.kubeconfig [1mSTEP[0m: Building a namespace api object, basename volumemode [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should fail to use a volume in a pod with mismatched mode [Slow] /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297 Feb 4 15:06:30.514: INFO: Driver "efs.csi.aws.com" does not provide raw block - skipping [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 15:06:30.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "volumemode-6052" for this suite. ... skipping 7 lines ... 
[efs-csi] EFS CSI [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:219[0m [Driver: efs.csi.aws.com] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:227[0m [Testpattern: Dynamic PV (block volmode)] volumeMode [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to use a volume in a pod with mismatched mode [Slow] [It][0m [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297[0m [36mDriver "efs.csi.aws.com" does not provide raw block - skipping[0m /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:113 [90m------------------------------[0m ... skipping 762 lines ... Feb 4 15:08:08.099: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Feb 4 15:08:08.177: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [efs.csi.aws.comzl6kb] to have phase Bound Feb 4 15:08:08.246: INFO: PersistentVolumeClaim efs.csi.aws.comzl6kb found but phase is Pending instead of Bound. Feb 4 15:08:10.316: INFO: PersistentVolumeClaim efs.csi.aws.comzl6kb found and phase=Bound (2.138295777s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-226f [1mSTEP[0m: Creating a pod to test subpath Feb 4 15:08:10.541: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-226f" in namespace "provisioning-8844" to be "Succeeded or Failed" Feb 4 15:08:10.611: INFO: Pod "pod-subpath-test-dynamicpv-226f": Phase="Pending", Reason="", readiness=false. Elapsed: 70.150054ms Feb 4 15:08:12.696: INFO: Pod "pod-subpath-test-dynamicpv-226f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155347327s Feb 4 15:08:14.765: INFO: Pod "pod-subpath-test-dynamicpv-226f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.224422969s [1mSTEP[0m: Saw pod success Feb 4 15:08:14.765: INFO: Pod "pod-subpath-test-dynamicpv-226f" satisfied condition "Succeeded or Failed" Feb 4 15:08:14.833: INFO: Trying to get logs from node ip-172-20-37-220.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-226f container test-container-subpath-dynamicpv-226f: <nil> [1mSTEP[0m: delete the pod Feb 4 15:08:14.977: INFO: Waiting for pod pod-subpath-test-dynamicpv-226f to disappear Feb 4 15:08:15.045: INFO: Pod pod-subpath-test-dynamicpv-226f no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-226f Feb 4 15:08:15.045: INFO: Deleting pod "pod-subpath-test-dynamicpv-226f" in namespace "provisioning-8844" [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-226f [1mSTEP[0m: Creating a pod to test subpath Feb 4 15:08:15.186: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-226f" in namespace "provisioning-8844" to be "Succeeded or Failed" Feb 4 15:08:15.255: INFO: Pod "pod-subpath-test-dynamicpv-226f": Phase="Pending", Reason="", readiness=false. Elapsed: 68.759982ms Feb 4 15:08:17.326: INFO: Pod "pod-subpath-test-dynamicpv-226f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.139697975s [1mSTEP[0m: Saw pod success Feb 4 15:08:17.326: INFO: Pod "pod-subpath-test-dynamicpv-226f" satisfied condition "Succeeded or Failed" Feb 4 15:08:17.395: INFO: Trying to get logs from node ip-172-20-37-220.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-226f container test-container-subpath-dynamicpv-226f: <nil> [1mSTEP[0m: delete the pod Feb 4 15:08:17.540: INFO: Waiting for pod pod-subpath-test-dynamicpv-226f to disappear Feb 4 15:08:17.616: INFO: Pod pod-subpath-test-dynamicpv-226f no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-226f Feb 4 15:08:17.616: INFO: Deleting pod "pod-subpath-test-dynamicpv-226f" in namespace "provisioning-8844" ... skipping 322 lines ... [1mSTEP[0m: creating a claim Feb 4 15:08:24.290: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Feb 4 15:08:24.360: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [efs.csi.aws.comc9xb4] to have phase Bound Feb 4 15:08:24.429: INFO: PersistentVolumeClaim efs.csi.aws.comc9xb4 found but phase is Pending instead of Bound. Feb 4 15:08:26.499: INFO: PersistentVolumeClaim efs.csi.aws.comc9xb4 found and phase=Bound (2.1389976s) [1mSTEP[0m: Creating pod to format volume volume-prep-provisioning-7578 Feb 4 15:08:26.711: INFO: Waiting up to 5m0s for pod "volume-prep-provisioning-7578" in namespace "provisioning-7578" to be "Succeeded or Failed" Feb 4 15:08:26.779: INFO: Pod "volume-prep-provisioning-7578": Phase="Pending", Reason="", readiness=false. Elapsed: 67.925369ms Feb 4 15:08:28.847: INFO: Pod "volume-prep-provisioning-7578": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136286329s Feb 4 15:08:30.916: INFO: Pod "volume-prep-provisioning-7578": Phase="Pending", Reason="", readiness=false. Elapsed: 4.204727329s Feb 4 15:08:32.986: INFO: Pod "volume-prep-provisioning-7578": Phase="Pending", Reason="", readiness=false. Elapsed: 6.274544794s Feb 4 15:08:35.055: INFO: Pod "volume-prep-provisioning-7578": Phase="Pending", Reason="", readiness=false. Elapsed: 8.343638709s Feb 4 15:08:37.125: INFO: Pod "volume-prep-provisioning-7578": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.413907566s [1mSTEP[0m: Saw pod success Feb 4 15:08:37.125: INFO: Pod "volume-prep-provisioning-7578" satisfied condition "Succeeded or Failed" Feb 4 15:08:37.125: INFO: Deleting pod "volume-prep-provisioning-7578" in namespace "provisioning-7578" Feb 4 15:08:37.199: INFO: Wait up to 5m0s for pod "volume-prep-provisioning-7578" to be fully deleted [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-cgkb [1mSTEP[0m: Checking for subpath error in container status Feb 4 15:08:39.474: INFO: Deleting pod "pod-subpath-test-dynamicpv-cgkb" in namespace "provisioning-7578" Feb 4 15:08:39.551: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-cgkb" to be fully deleted [1mSTEP[0m: Deleting pod Feb 4 15:08:43.695: INFO: Deleting pod "pod-subpath-test-dynamicpv-cgkb" in namespace "provisioning-7578" [1mSTEP[0m: Deleting pvc Feb 4 15:08:43.762: INFO: Deleting PersistentVolumeClaim "efs.csi.aws.comc9xb4" ... skipping 112 lines ... 
[efs-csi] EFS CSI [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:219[0m [Driver: efs.csi.aws.com] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:227[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267[0m [36mDriver supports dynamic provisioning, skipping PreprovisionedPV pattern[0m /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:244 [90m------------------------------[0m ... skipping 53 lines ... [efs-csi] EFS CSI [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:219[0m [Driver: efs.csi.aws.com] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:227[0m [Testpattern: Inline-volume (default fs)] subPath [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278[0m [36mDriver efs.csi.aws.com doesn't support InlineVolume -- skipping[0m /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m ... skipping 10 lines ... [1mSTEP[0m: Building a namespace api object, basename efs [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should mount without option tls when encryptInTransit set false /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:372 [1mSTEP[0m: Creating efs pvc & pv [1mSTEP[0m: Creating pod to mount pvc "efs-9771" and run "mount && mount | grep /mnt/volume1 | grep fs-041c0971c1ac6d6b8" Feb 4 15:08:51.297: INFO: Waiting up to 5m0s for pod "pvc-tester-b5468" in namespace "efs-9771" to be "Succeeded or Failed" Feb 4 15:08:51.362: INFO: Pod "pvc-tester-b5468": Phase="Pending", Reason="", readiness=false. Elapsed: 64.839904ms Feb 4 15:08:53.428: INFO: Pod "pvc-tester-b5468": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130373426s Feb 4 15:08:55.493: INFO: Pod "pvc-tester-b5468": Phase="Pending", Reason="", readiness=false. Elapsed: 4.195600822s Feb 4 15:08:57.559: INFO: Pod "pvc-tester-b5468": Phase="Pending", Reason="", readiness=false. Elapsed: 6.261690476s Feb 4 15:08:59.626: INFO: Pod "pvc-tester-b5468": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.329111768s [1mSTEP[0m: Saw pod success Feb 4 15:08:59.626: INFO: Pod "pvc-tester-b5468" satisfied condition "Succeeded or Failed" Feb 4 15:08:59.693: INFO: pod "pvc-tester-b5468" logs: overlay on / type overlay (rw,relatime,lowerdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/53/fs,upperdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/114/fs,workdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/114/work) proc on /proc type proc (rw,nosuid,nodev,noexec,relatime) tmpfs on /dev type tmpfs (rw,nosuid,size=65536k,mode=755,inode64) devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666) mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime) ... skipping 69 lines ... [efs-csi] EFS CSI [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:219[0m [Driver: efs.csi.aws.com] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:227[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256[0m [36mDriver supports dynamic provisioning, skipping PreprovisionedPV pattern[0m /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:244 [90m------------------------------[0m ... skipping 28 lines ... [36mDriver efs.csi.aws.com doesn't support xfs -- skipping[0m /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121 [90m------------------------------[0m [0m[efs-csi] EFS CSI[0m [90m[Driver: efs.csi.aws.com][0m [0m[Testpattern: Dynamic PV (default fs)] subPath[0m [1mshould fail if subpath directory is outside the volume [Slow][LinuxOnly][0m [37m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240[0m [BeforeEach] [efs-csi] EFS CSI /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:220 [BeforeEach] [efs-csi] EFS CSI /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185 ... skipping 6 lines ... 
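The encryptInTransit=false case greps for the file system ID instead of 127.0.0.1: without the tls option efs-utils performs a plain NFS mount against the mount target, so the file system's own address shows up in the mount table. A rough sketch of the two invocations on a node with amazon-efs-utils installed, reusing the file system and access point IDs that appear in the driver log at the end of this output (treat them as examples, not a recipe):

  # Encrypted in transit: stunnel is started and NFS talks to 127.0.0.1
  sudo mount -t efs -o tls,accesspoint=fsap-0682297c29b3d9a8a fs-041c0971c1ac6d6b8:/ /mnt/efs
  # Plain mount: no stunnel, mount output shows fs-041c0971c1ac6d6b8.efs.us-west-2.amazonaws.com
  sudo mount -t efs fs-041c0971c1ac6d6b8:/ /mnt/efs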
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Feb 4 15:09:01.164: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-20151.k8s.local.kops.kubeconfig [1mSTEP[0m: Building a namespace api object, basename provisioning [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should fail if subpath directory is outside the volume [Slow][LinuxOnly] /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240 Feb 4 15:09:01.706: INFO: Creating resource for dynamic PV Feb 4 15:09:01.706: INFO: Using claimSize:1Mi, test suite supported size:{ 1Mi}, driver(efs.csi.aws.com) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass [1mSTEP[0m: creating a claim Feb 4 15:09:01.777: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Feb 4 15:09:01.864: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [efs.csi.aws.com8h5g7] to have phase Bound Feb 4 15:09:01.937: INFO: PersistentVolumeClaim efs.csi.aws.com8h5g7 found but phase is Pending instead of Bound. Feb 4 15:09:04.012: INFO: PersistentVolumeClaim efs.csi.aws.com8h5g7 found and phase=Bound (2.147276246s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-5hpn [1mSTEP[0m: Checking for subpath error in container status Feb 4 15:09:08.346: INFO: Deleting pod "pod-subpath-test-dynamicpv-5hpn" in namespace "provisioning-9146" Feb 4 15:09:08.417: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-5hpn" to be fully deleted [1mSTEP[0m: Deleting pod Feb 4 15:09:12.560: INFO: Deleting pod "pod-subpath-test-dynamicpv-5hpn" in namespace "provisioning-9146" [1mSTEP[0m: Deleting pvc Feb 4 15:09:12.625: INFO: Deleting PersistentVolumeClaim "efs.csi.aws.com8h5g7" ... skipping 15 lines ... [efs-csi] EFS CSI [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:219[0m [Driver: efs.csi.aws.com] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:227[0m [Testpattern: Dynamic PV (default fs)] subPath [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should fail if subpath directory is outside the volume [Slow][LinuxOnly] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240[0m [90m------------------------------[0m [BeforeEach] [efs-csi] EFS CSI /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:220 [BeforeEach] [efs-csi] EFS CSI /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185 ... skipping 92 lines ... 
[efs-csi] EFS CSI [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:219[0m [Driver: efs.csi.aws.com] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:227[0m [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240[0m [36mDriver efs.csi.aws.com doesn't support ntfs -- skipping[0m /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121 [90m------------------------------[0m ... skipping 350 lines ... [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Feb 4 15:09:25.419: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-20151.k8s.local.kops.kubeconfig [1mSTEP[0m: Building a namespace api object, basename volumemode [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should fail to use a volume in a pod with mismatched mode [Slow] /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297 Feb 4 15:09:25.742: INFO: Driver "efs.csi.aws.com" does not provide raw block - skipping [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 15:09:25.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "volumemode-4089" for this suite. ... skipping 7 lines ... [efs-csi] EFS CSI [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:219[0m [Driver: efs.csi.aws.com] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:227[0m [Testpattern: Dynamic PV (filesystem volmode)] volumeMode [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to use a volume in a pod with mismatched mode [Slow] [It][0m [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297[0m [36mDriver "efs.csi.aws.com" does not provide raw block - skipping[0m /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:113 [90m------------------------------[0m ... skipping 454 lines ... 
[36mDriver supports dynamic provisioning, skipping PreprovisionedPV pattern[0m /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:244 [90m------------------------------[0m [0m[efs-csi] EFS CSI[0m [90m[Driver: efs.csi.aws.com][0m [0m[Testpattern: Pre-provisioned PV (block volmode)] volumeMode[0m [1mshould fail to create pod by failing to mount volume [Slow][0m [37m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:197[0m [BeforeEach] [efs-csi] EFS CSI /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:220 [BeforeEach] [efs-csi] EFS CSI /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185 ... skipping 6 lines ... [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Feb 4 15:07:10.996: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-20151.k8s.local.kops.kubeconfig [1mSTEP[0m: Building a namespace api object, basename volumemode [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should fail to create pod by failing to mount volume [Slow] /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:197 [1mSTEP[0m: Creating sc [1mSTEP[0m: Creating pv and pvc Feb 4 15:07:11.747: INFO: Waiting for PV pvmrzwj to bind to PVC pvc-qk4q8 Feb 4 15:07:11.747: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-qk4q8] to have phase Bound Feb 4 15:07:11.813: INFO: PersistentVolumeClaim pvc-qk4q8 found but phase is Pending instead of Bound. ... skipping 27 lines ... [efs-csi] EFS CSI [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:219[0m [Driver: efs.csi.aws.com] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:227[0m [Testpattern: Pre-provisioned PV (block volmode)] volumeMode [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should fail to create pod by failing to mount volume [Slow] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:197[0m [90m------------------------------[0m [BeforeEach] [efs-csi] EFS CSI /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:220 [BeforeEach] [efs-csi] EFS CSI /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185 ... skipping 24 lines ... 
[36mDriver efs.csi.aws.com doesn't support ntfs -- skipping[0m /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121 [90m------------------------------[0m [0m[efs-csi] EFS CSI[0m [90m[Driver: efs.csi.aws.com][0m [0m[Testpattern: Dynamic PV (default fs)] subPath[0m [1mshould fail if subpath with backstepping is outside the volume [Slow][LinuxOnly][0m [37m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278[0m [BeforeEach] [efs-csi] EFS CSI /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:220 [BeforeEach] [efs-csi] EFS CSI /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185 ... skipping 6 lines ... [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Feb 4 15:09:26.790: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-20151.k8s.local.kops.kubeconfig [1mSTEP[0m: Building a namespace api object, basename provisioning [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278 Feb 4 15:09:27.305: INFO: Creating resource for dynamic PV Feb 4 15:09:27.305: INFO: Using claimSize:1Mi, test suite supported size:{ 1Mi}, driver(efs.csi.aws.com) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass [1mSTEP[0m: creating a claim Feb 4 15:09:27.378: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Feb 4 15:09:27.445: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [efs.csi.aws.comq2stc] to have phase Bound Feb 4 15:09:27.511: INFO: PersistentVolumeClaim efs.csi.aws.comq2stc found but phase is Pending instead of Bound. Feb 4 15:09:29.576: INFO: PersistentVolumeClaim efs.csi.aws.comq2stc found and phase=Bound (2.130341368s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-cqsn [1mSTEP[0m: Checking for subpath error in container status Feb 4 15:09:33.915: INFO: Deleting pod "pod-subpath-test-dynamicpv-cqsn" in namespace "provisioning-1994" Feb 4 15:09:33.985: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-cqsn" to be fully deleted [1mSTEP[0m: Deleting pod Feb 4 15:09:42.116: INFO: Deleting pod "pod-subpath-test-dynamicpv-cqsn" in namespace "provisioning-1994" [1mSTEP[0m: Deleting pvc Feb 4 15:09:42.181: INFO: Deleting PersistentVolumeClaim "efs.csi.aws.comq2stc" ... skipping 14 lines ... 
[efs-csi] EFS CSI [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:219[0m [Driver: efs.csi.aws.com] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:227[0m [Testpattern: Dynamic PV (default fs)] subPath [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278[0m [90m------------------------------[0m [BeforeEach] [efs-csi] EFS CSI /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:220 [BeforeEach] [efs-csi] EFS CSI /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185 ... skipping 121 lines ... Feb 4 15:09:36.835: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Feb 4 15:09:36.903: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [efs.csi.aws.com94hzg] to have phase Bound Feb 4 15:09:36.970: INFO: PersistentVolumeClaim efs.csi.aws.com94hzg found but phase is Pending instead of Bound. Feb 4 15:09:39.037: INFO: PersistentVolumeClaim efs.csi.aws.com94hzg found and phase=Bound (2.133695436s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-njfs [1mSTEP[0m: Creating a pod to test subpath Feb 4 15:09:39.237: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-njfs" in namespace "provisioning-1038" to be "Succeeded or Failed" Feb 4 15:09:39.303: INFO: Pod "pod-subpath-test-dynamicpv-njfs": Phase="Pending", Reason="", readiness=false. Elapsed: 66.097086ms Feb 4 15:09:41.369: INFO: Pod "pod-subpath-test-dynamicpv-njfs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132378282s Feb 4 15:09:43.436: INFO: Pod "pod-subpath-test-dynamicpv-njfs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.199789459s [1mSTEP[0m: Saw pod success Feb 4 15:09:43.437: INFO: Pod "pod-subpath-test-dynamicpv-njfs" satisfied condition "Succeeded or Failed" Feb 4 15:09:43.502: INFO: Trying to get logs from node ip-172-20-37-220.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-njfs container test-container-volume-dynamicpv-njfs: <nil> [1mSTEP[0m: delete the pod Feb 4 15:09:43.648: INFO: Waiting for pod pod-subpath-test-dynamicpv-njfs to disappear Feb 4 15:09:43.714: INFO: Pod pod-subpath-test-dynamicpv-njfs no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-njfs Feb 4 15:09:43.714: INFO: Deleting pod "pod-subpath-test-dynamicpv-njfs" in namespace "provisioning-1038" ... skipping 47 lines ... 
[efs-csi] EFS CSI [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:219[0m [Driver: efs.csi.aws.com] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:227[0m [Testpattern: Inline-volume (default fs)] subPath [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240[0m [36mDriver efs.csi.aws.com doesn't support InlineVolume -- skipping[0m /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m ... skipping 201 lines ... Feb 4 15:09:46.179: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Feb 4 15:09:46.247: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [efs.csi.aws.com9lszm] to have phase Bound Feb 4 15:09:46.313: INFO: PersistentVolumeClaim efs.csi.aws.com9lszm found but phase is Pending instead of Bound. Feb 4 15:09:48.379: INFO: PersistentVolumeClaim efs.csi.aws.com9lszm found and phase=Bound (2.132354855s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-bdkn [1mSTEP[0m: Creating a pod to test subpath Feb 4 15:09:48.588: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-bdkn" in namespace "provisioning-8187" to be "Succeeded or Failed" Feb 4 15:09:48.654: INFO: Pod "pod-subpath-test-dynamicpv-bdkn": Phase="Pending", Reason="", readiness=false. Elapsed: 66.359205ms Feb 4 15:09:50.721: INFO: Pod "pod-subpath-test-dynamicpv-bdkn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.133233487s [1mSTEP[0m: Saw pod success Feb 4 15:09:50.721: INFO: Pod "pod-subpath-test-dynamicpv-bdkn" satisfied condition "Succeeded or Failed" Feb 4 15:09:50.787: INFO: Trying to get logs from node ip-172-20-37-220.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-bdkn container test-container-subpath-dynamicpv-bdkn: <nil> [1mSTEP[0m: delete the pod Feb 4 15:09:50.927: INFO: Waiting for pod pod-subpath-test-dynamicpv-bdkn to disappear Feb 4 15:09:50.992: INFO: Pod pod-subpath-test-dynamicpv-bdkn no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-bdkn Feb 4 15:09:50.992: INFO: Deleting pod "pod-subpath-test-dynamicpv-bdkn" in namespace "provisioning-8187" ... skipping 163 lines ... Feb 4 15:09:59.694: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Feb 4 15:09:59.762: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [efs.csi.aws.comx94s8] to have phase Bound Feb 4 15:09:59.829: INFO: PersistentVolumeClaim efs.csi.aws.comx94s8 found but phase is Pending instead of Bound. Feb 4 15:10:01.911: INFO: PersistentVolumeClaim efs.csi.aws.comx94s8 found and phase=Bound (2.149364099s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-2l6h [1mSTEP[0m: Creating a pod to test atomic-volume-subpath Feb 4 15:10:02.150: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-2l6h" in namespace "provisioning-4394" to be "Succeeded or Failed" Feb 4 15:10:02.239: INFO: Pod "pod-subpath-test-dynamicpv-2l6h": Phase="Pending", Reason="", readiness=false. Elapsed: 89.149247ms Feb 4 15:10:04.305: INFO: Pod "pod-subpath-test-dynamicpv-2l6h": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.155567436s Feb 4 15:10:06.373: INFO: Pod "pod-subpath-test-dynamicpv-2l6h": Phase="Running", Reason="", readiness=true. Elapsed: 4.223218732s Feb 4 15:10:08.450: INFO: Pod "pod-subpath-test-dynamicpv-2l6h": Phase="Running", Reason="", readiness=true. Elapsed: 6.300613344s Feb 4 15:10:10.518: INFO: Pod "pod-subpath-test-dynamicpv-2l6h": Phase="Running", Reason="", readiness=true. Elapsed: 8.368093363s Feb 4 15:10:12.585: INFO: Pod "pod-subpath-test-dynamicpv-2l6h": Phase="Running", Reason="", readiness=true. Elapsed: 10.434794331s Feb 4 15:10:14.652: INFO: Pod "pod-subpath-test-dynamicpv-2l6h": Phase="Running", Reason="", readiness=true. Elapsed: 12.50235099s Feb 4 15:10:16.719: INFO: Pod "pod-subpath-test-dynamicpv-2l6h": Phase="Running", Reason="", readiness=true. Elapsed: 14.569421436s Feb 4 15:10:18.787: INFO: Pod "pod-subpath-test-dynamicpv-2l6h": Phase="Running", Reason="", readiness=true. Elapsed: 16.637015818s Feb 4 15:10:20.854: INFO: Pod "pod-subpath-test-dynamicpv-2l6h": Phase="Running", Reason="", readiness=true. Elapsed: 18.704258936s Feb 4 15:10:22.921: INFO: Pod "pod-subpath-test-dynamicpv-2l6h": Phase="Running", Reason="", readiness=true. Elapsed: 20.771413788s Feb 4 15:10:24.988: INFO: Pod "pod-subpath-test-dynamicpv-2l6h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.838203071s [1mSTEP[0m: Saw pod success Feb 4 15:10:24.988: INFO: Pod "pod-subpath-test-dynamicpv-2l6h" satisfied condition "Succeeded or Failed" Feb 4 15:10:25.067: INFO: Trying to get logs from node ip-172-20-37-220.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-2l6h container test-container-subpath-dynamicpv-2l6h: <nil> [1mSTEP[0m: delete the pod Feb 4 15:10:25.264: INFO: Waiting for pod pod-subpath-test-dynamicpv-2l6h to disappear Feb 4 15:10:25.329: INFO: Pod pod-subpath-test-dynamicpv-2l6h no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-2l6h Feb 4 15:10:25.329: INFO: Deleting pod "pod-subpath-test-dynamicpv-2l6h" in namespace "provisioning-4394" ... skipping 104 lines ... [1mSTEP[0m: Building a namespace api object, basename efs [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should mount different paths on same volume on same node /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:234 [1mSTEP[0m: Creating efs pvc & pv with no subpath [1mSTEP[0m: Creating pod to make subpaths /a and /b Feb 4 15:09:32.162: INFO: Waiting up to 5m0s for pod "pvc-tester-xpdks" in namespace "efs-2460" to be "Succeeded or Failed" Feb 4 15:09:32.237: INFO: Pod "pvc-tester-xpdks": Phase="Pending", Reason="", readiness=false. Elapsed: 74.315561ms Feb 4 15:09:34.305: INFO: Pod "pvc-tester-xpdks": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143048735s Feb 4 15:09:36.376: INFO: Pod "pvc-tester-xpdks": Phase="Pending", Reason="", readiness=false. Elapsed: 4.213364588s Feb 4 15:09:38.445: INFO: Pod "pvc-tester-xpdks": Phase="Pending", Reason="", readiness=false. Elapsed: 6.282845669s Feb 4 15:09:40.514: INFO: Pod "pvc-tester-xpdks": Phase="Pending", Reason="", readiness=false. Elapsed: 8.35176323s Feb 4 15:09:42.583: INFO: Pod "pvc-tester-xpdks": Phase="Pending", Reason="", readiness=false. Elapsed: 10.421200816s ... skipping 3 lines ... Feb 4 15:09:50.862: INFO: Pod "pvc-tester-xpdks": Phase="Pending", Reason="", readiness=false. Elapsed: 18.700064029s Feb 4 15:09:52.931: INFO: Pod "pvc-tester-xpdks": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.769073251s Feb 4 15:09:55.000: INFO: Pod "pvc-tester-xpdks": Phase="Pending", Reason="", readiness=false. Elapsed: 22.838182827s Feb 4 15:09:57.070: INFO: Pod "pvc-tester-xpdks": Phase="Pending", Reason="", readiness=false. Elapsed: 24.907680494s Feb 4 15:09:59.140: INFO: Pod "pvc-tester-xpdks": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.977473365s [1mSTEP[0m: Saw pod success Feb 4 15:09:59.140: INFO: Pod "pvc-tester-xpdks" satisfied condition "Succeeded or Failed" [1mSTEP[0m: Creating efs pvc & pv with subpath /a [1mSTEP[0m: Creating efs pvc & pv with subpath /b [1mSTEP[0m: Creating pod to mount subpaths /a and /b [AfterEach] [efs-csi] EFS CSI /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 15:10:31.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready ... skipping 150 lines ... [36mFilesystem volume case should be covered by block volume case -- skipping[0m /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:208 [90m------------------------------[0m [0m[efs-csi] EFS CSI[0m [90m[Driver: efs.csi.aws.com][0m [0m[Testpattern: Dynamic PV (block volmode)] volumeMode[0m [1mshould fail in binding dynamic provisioned PV to PVC [Slow][LinuxOnly][0m [37m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:258[0m [BeforeEach] [efs-csi] EFS CSI /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:220 [BeforeEach] [efs-csi] EFS CSI /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185 ... skipping 6 lines ... [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Feb 4 15:05:37.396: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-20151.k8s.local.kops.kubeconfig [1mSTEP[0m: Building a namespace api object, basename volumemode [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should fail in binding dynamic provisioned PV to PVC [Slow][LinuxOnly] /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:258 [1mSTEP[0m: Creating sc [1mSTEP[0m: Creating pv and pvc Feb 4 15:10:38.297: INFO: Warning: did not get event about provisioing failed [1mSTEP[0m: Deleting pvc Feb 4 15:10:38.434: INFO: Deleting PersistentVolumeClaim "pvc-pvcpl" [1mSTEP[0m: Deleting sc [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 15:10:38.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready ... skipping 8 lines ... 
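The "should mount different paths on same volume on same node" case earlier in this block first runs a prep pod that creates /a and /b at the root of the file system, then mounts each subdirectory through its own PV. The same idea, sketched by hand with efs-utils on a node (the /a and /b names come from the test; the local mount points are arbitrary):

  # One root mount to create the subdirectories
  sudo mkdir -p /mnt/efs-root /mnt/efs-a /mnt/efs-b
  sudo mount -t efs fs-041c0971c1ac6d6b8:/ /mnt/efs-root
  sudo mkdir -p /mnt/efs-root/a /mnt/efs-root/b
  # Each subdirectory can then be mounted separately, which is what the per-PV mounts do
  sudo mount -t efs fs-041c0971c1ac6d6b8:/a /mnt/efs-a
  sudo mount -t efs fs-041c0971c1ac6d6b8:/b /mnt/efs-b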
[efs-csi] EFS CSI [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:219[0m [Driver: efs.csi.aws.com] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:227[0m [Testpattern: Dynamic PV (block volmode)] volumeMode [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should fail in binding dynamic provisioned PV to PVC [Slow][LinuxOnly] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:258[0m [90m------------------------------[0m [BeforeEach] [efs-csi] EFS CSI /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:220 [BeforeEach] [efs-csi] EFS CSI /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185 ... skipping 325 lines ... [1mSTEP[0m: Building a namespace api object, basename efs [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should mount with option tls when encryptInTransit unset /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:363 [1mSTEP[0m: Creating efs pvc & pv [1mSTEP[0m: Creating pod to mount pvc "efs-9703" and run "mount && mount | grep /mnt/volume1 | grep 127.0.0.1" Feb 4 15:10:41.369: INFO: Waiting up to 5m0s for pod "pvc-tester-48g62" in namespace "efs-9703" to be "Succeeded or Failed" Feb 4 15:10:41.438: INFO: Pod "pvc-tester-48g62": Phase="Pending", Reason="", readiness=false. Elapsed: 68.731731ms Feb 4 15:10:43.508: INFO: Pod "pvc-tester-48g62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139078166s Feb 4 15:10:45.578: INFO: Pod "pvc-tester-48g62": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208807213s Feb 4 15:10:47.647: INFO: Pod "pvc-tester-48g62": Phase="Pending", Reason="", readiness=false. Elapsed: 6.277544021s Feb 4 15:10:49.716: INFO: Pod "pvc-tester-48g62": Phase="Pending", Reason="", readiness=false. Elapsed: 8.346419742s Feb 4 15:10:51.784: INFO: Pod "pvc-tester-48g62": Phase="Pending", Reason="", readiness=false. Elapsed: 10.41524531s ... skipping 8 lines ... Feb 4 15:11:10.410: INFO: Pod "pvc-tester-48g62": Phase="Pending", Reason="", readiness=false. Elapsed: 29.041099928s Feb 4 15:11:12.480: INFO: Pod "pvc-tester-48g62": Phase="Pending", Reason="", readiness=false. Elapsed: 31.111062532s Feb 4 15:11:14.552: INFO: Pod "pvc-tester-48g62": Phase="Pending", Reason="", readiness=false. Elapsed: 33.18323345s Feb 4 15:11:16.622: INFO: Pod "pvc-tester-48g62": Phase="Pending", Reason="", readiness=false. Elapsed: 35.252528578s Feb 4 15:11:18.692: INFO: Pod "pvc-tester-48g62": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 37.322688327s [1mSTEP[0m: Saw pod success Feb 4 15:11:18.692: INFO: Pod "pvc-tester-48g62" satisfied condition "Succeeded or Failed" Feb 4 15:11:18.762: INFO: pod "pvc-tester-48g62" logs: overlay on / type overlay (rw,relatime,lowerdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/53/fs,upperdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/146/fs,workdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/146/work) proc on /proc type proc (rw,nosuid,nodev,noexec,relatime) tmpfs on /dev type tmpfs (rw,nosuid,size=65536k,mode=755,inode64) devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666) mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime) ... skipping 168 lines ... [efs-csi] EFS CSI [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:219[0m [Driver: efs.csi.aws.com] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:227[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278[0m [36mDriver supports dynamic provisioning, skipping PreprovisionedPV pattern[0m /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:244 [90m------------------------------[0m ... skipping 249 lines ... [efs-csi] EFS CSI [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:219[0m [Driver: efs.csi.aws.com] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:227[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240[0m [36mDriver supports dynamic provisioning, skipping PreprovisionedPV pattern[0m /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:244 [90m------------------------------[0m ... skipping 28 lines ... [36mDriver efs.csi.aws.com doesn't support InlineVolume -- skipping[0m /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m [0m[efs-csi] EFS CSI[0m [90m[Driver: efs.csi.aws.com][0m [0m[Testpattern: Dynamic PV (default fs)] subPath[0m [1mshould fail if non-existent subpath is outside the volume [Slow][LinuxOnly][0m [37m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267[0m [BeforeEach] [efs-csi] EFS CSI /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:220 [BeforeEach] [efs-csi] EFS CSI /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185 ... skipping 6 lines ... 
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Feb 4 15:11:22.739: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-20151.k8s.local.kops.kubeconfig [1mSTEP[0m: Building a namespace api object, basename provisioning [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267 Feb 4 15:11:23.284: INFO: Creating resource for dynamic PV Feb 4 15:11:23.284: INFO: Using claimSize:1Mi, test suite supported size:{ 1Mi}, driver(efs.csi.aws.com) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass [1mSTEP[0m: creating a claim Feb 4 15:11:23.352: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Feb 4 15:11:23.423: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [efs.csi.aws.comqjggl] to have phase Bound Feb 4 15:11:23.493: INFO: PersistentVolumeClaim efs.csi.aws.comqjggl found but phase is Pending instead of Bound. Feb 4 15:11:25.565: INFO: PersistentVolumeClaim efs.csi.aws.comqjggl found and phase=Bound (2.141401178s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-xwr4 [1mSTEP[0m: Checking for subpath error in container status Feb 4 15:11:29.910: INFO: Deleting pod "pod-subpath-test-dynamicpv-xwr4" in namespace "provisioning-9757" Feb 4 15:11:29.981: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-xwr4" to be fully deleted [1mSTEP[0m: Deleting pod Feb 4 15:11:36.119: INFO: Deleting pod "pod-subpath-test-dynamicpv-xwr4" in namespace "provisioning-9757" [1mSTEP[0m: Deleting pvc Feb 4 15:11:36.188: INFO: Deleting PersistentVolumeClaim "efs.csi.aws.comqjggl" ... skipping 14 lines ... [efs-csi] EFS CSI [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:219[0m [Driver: efs.csi.aws.com] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:227[0m [Testpattern: Dynamic PV (default fs)] subPath [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267[0m [90m------------------------------[0m [BeforeEach] [efs-csi] EFS CSI /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:220 [BeforeEach] [efs-csi] EFS CSI /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185 ... skipping 39 lines ... 
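The negative subPath cases above never expect the pod to run; the framework only waits for a subpath error to appear in the container status ("Checking for subpath error in container status") and then tears the pod down. To see that message while reproducing, the waiting-state message is enough (names below are the ones from this run and the pod is short-lived, so this is a sketch rather than something to paste verbatim):

  kubectl -n provisioning-9757 get pod pod-subpath-test-dynamicpv-xwr4 \
    -o jsonpath='{.status.containerStatuses[*].state.waiting.message}'
  # or simply read the Events section:
  kubectl -n provisioning-9757 describe pod pod-subpath-test-dynamicpv-xwr4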
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 4 15:11:37.518: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-20151.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename volumemode
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297
Feb 4 15:11:37.861: INFO: Driver "efs.csi.aws.com" does not provide raw block - skipping
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 4 15:11:37.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volumemode-9271" for this suite.
... skipping 7 lines ...
[efs-csi] EFS CSI
/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:219
  [Driver: efs.csi.aws.com]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:227
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to use a volume in a pod with mismatched mode [Slow] [It]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297

      Driver "efs.csi.aws.com" does not provide raw block - skipping
      /home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:113
------------------------------
... skipping 237 lines ...
Feb 4 15:11:34.639: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Feb 4 15:11:34.717: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [efs.csi.aws.com7cd7p] to have phase Bound
Feb 4 15:11:34.786: INFO: PersistentVolumeClaim efs.csi.aws.com7cd7p found but phase is Pending instead of Bound.
Feb 4 15:11:36.855: INFO: PersistentVolumeClaim efs.csi.aws.com7cd7p found and phase=Bound (2.137693436s)
STEP: Creating pod exec-volume-test-dynamicpv-lclk
STEP: Creating a pod to test exec-volume-test
Feb 4 15:11:37.085: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-lclk" in namespace "volume-5235" to be "Succeeded or Failed"
Feb 4 15:11:37.152: INFO: Pod "exec-volume-test-dynamicpv-lclk": Phase="Pending", Reason="", readiness=false. Elapsed: 67.357265ms
Feb 4 15:11:39.221: INFO: Pod "exec-volume-test-dynamicpv-lclk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.135776894s
STEP: Saw pod success
Feb 4 15:11:39.221: INFO: Pod "exec-volume-test-dynamicpv-lclk" satisfied condition "Succeeded or Failed"
Feb 4 15:11:39.289: INFO: Trying to get logs from node ip-172-20-37-220.us-west-2.compute.internal pod exec-volume-test-dynamicpv-lclk container exec-container-dynamicpv-lclk: <nil>
STEP: delete the pod
Feb 4 15:11:39.432: INFO: Waiting for pod exec-volume-test-dynamicpv-lclk to disappear
Feb 4 15:11:39.500: INFO: Pod exec-volume-test-dynamicpv-lclk no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-lclk
Feb 4 15:11:39.500: INFO: Deleting pod "exec-volume-test-dynamicpv-lclk" in namespace "volume-5235"
... skipping 480 lines ...
Feb 4 15:11:42.307: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Feb 4 15:11:42.374: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [efs.csi.aws.com287nj] to have phase Bound
Feb 4 15:11:42.440: INFO: PersistentVolumeClaim efs.csi.aws.com287nj found but phase is Pending instead of Bound.
Feb 4 15:11:44.510: INFO: PersistentVolumeClaim efs.csi.aws.com287nj found and phase=Bound (2.135898848s)
STEP: Creating pod pod-subpath-test-dynamicpv-ftfl
STEP: Creating a pod to test multi_subpath
Feb 4 15:11:44.713: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-ftfl" in namespace "provisioning-248" to be "Succeeded or Failed"
Feb 4 15:11:44.779: INFO: Pod "pod-subpath-test-dynamicpv-ftfl": Phase="Pending", Reason="", readiness=false. Elapsed: 65.238524ms
Feb 4 15:11:46.845: INFO: Pod "pod-subpath-test-dynamicpv-ftfl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131211602s
Feb 4 15:11:48.910: INFO: Pod "pod-subpath-test-dynamicpv-ftfl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.196962133s
STEP: Saw pod success
Feb 4 15:11:48.911: INFO: Pod "pod-subpath-test-dynamicpv-ftfl" satisfied condition "Succeeded or Failed"
Feb 4 15:11:48.976: INFO: Trying to get logs from node ip-172-20-37-220.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-ftfl container test-container-subpath-dynamicpv-ftfl: <nil>
STEP: delete the pod
Feb 4 15:11:49.118: INFO: Waiting for pod pod-subpath-test-dynamicpv-ftfl to disappear
Feb 4 15:11:49.183: INFO: Pod pod-subpath-test-dynamicpv-ftfl no longer exists
STEP: Deleting pod
Feb 4 15:11:49.183: INFO: Deleting pod "pod-subpath-test-dynamicpv-ftfl" in namespace "provisioning-248"
... skipping 109 lines ...
STEP: Deleted EFS filesystem "fs-041c0971c1ac6d6b8"

Summarizing 1 Failure:

[Fail] [efs-csi] EFS CSI [Driver: efs.csi.aws.com] [It] should create a directory with the correct permissions when in directory provisioning mode
/home/prow/go/src/github.com/kubernetes-sigs/aws-efs-csi-driver/test/e2e/e2e.go:426

Ran 37 of 190 Specs in 565.753 seconds
FAIL! -- 36 Passed | 1 Failed | 0 Pending | 153 Skipped

Ginkgo ran 1 suite in 10m34.76149469s
Test Suite Failed
+ TEST_PASSED=1
+ set -e
+ set +x
###
## TEST_PASSED: 1
#
... skipping 814 lines ...
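On the one failure summarized above, the dynamically provisioned directory comes back as 755 where the test expects 777. A frequent cause of exactly this mismatch is that a directory created with os.MkdirAll inherits the creating process's umask, so a requested 0777 is silently masked down to 0755 unless the mode is re-applied afterwards. The following self-contained Go snippet demonstrates the general pitfall only; it is not the driver's provisioning code, and the path is a placeholder:

    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    func main() {
        // With the common default umask of 022, MkdirAll(..., 0777)
        // actually creates the directory with mode 0755.
        syscall.Umask(0022)

        dir := "/tmp/efs-perms-demo" // placeholder path for the demo
        if err := os.MkdirAll(dir, 0777); err != nil {
            panic(err)
        }
        fi, _ := os.Stat(dir)
        fmt.Printf("after MkdirAll: %o\n", fi.Mode().Perm()) // 755

        // Chmod is not subject to the umask, so an explicit call
        // is what yields the exact requested mode.
        if err := os.Chmod(dir, 0777); err != nil {
            panic(err)
        }
        fi, _ = os.Stat(dir)
        fmt.Printf("after Chmod:    %o\n", fi.Mode().Perm()) // 777
    }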
I0204 15:02:07.728931 1 node.go:306] NodeGetInfo: called with args
I0204 15:05:00.048049 1 node.go:290] NodeGetCapabilities: called with args
I0204 15:05:00.053819 1 node.go:290] NodeGetCapabilities: called with args
I0204 15:05:00.055388 1 node.go:52] NodePublishVolume: called with args volume_id:"fs-041c0971c1ac6d6b8::fsap-0682297c29b3d9a8a" target_path:"/var/lib/kubelet/pods/ae0ae89f-e524-46a3-bf24-d4535ff56f00/volumes/kubernetes.io~csi/pvc-d640335f-e9f8-426b-95cb-71e354d2ec1d/mount" volume_capability:<mount:<> access_mode:<mode:SINGLE_NODE_WRITER > > volume_context:<key:"storage.kubernetes.io/csiProvisionerIdentity" value:"1675522928947-8081-efs.csi.aws.com" >
I0204 15:05:00.055468 1 node.go:178] NodePublishVolume: creating dir /var/lib/kubelet/pods/ae0ae89f-e524-46a3-bf24-d4535ff56f00/volumes/kubernetes.io~csi/pvc-d640335f-e9f8-426b-95cb-71e354d2ec1d/mount
I0204 15:05:00.055539 1 node.go:183] NodePublishVolume: mounting fs-041c0971c1ac6d6b8:/ at /var/lib/kubelet/pods/ae0ae89f-e524-46a3-bf24-d4535ff56f00/volumes/kubernetes.io~csi/pvc-d640335f-e9f8-426b-95cb-71e354d2ec1d/mount with options [accesspoint=fsap-0682297c29b3d9a8a tls]
E0204 15:05:00.520867 1 mount_linux.go:184] Mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t efs -o accesspoint=fsap-0682297c29b3d9a8a,tls fs-041c0971c1ac6d6b8:/ /var/lib/kubelet/pods/ae0ae89f-e524-46a3-bf24-d4535ff56f00/volumes/kubernetes.io~csi/pvc-d640335f-e9f8-426b-95cb-71e354d2ec1d/mount
Output: Failed to resolve "fs-041c0971c1ac6d6b8.efs.us-west-2.amazonaws.com".
The file system mount target ip address cannot be found, please pass mount target ip address via mount options.
User: arn:aws:sts::607362164682:assumed-role/nodes.test-cluster-20151.k8s.local/i-04ca571056ad388f3 is not authorized to perform: elasticfilesystem:DescribeMountTargets on the specified resource
Warning: config file does not have fips_mode_enabled item in section mount.. You should be able to find a new config file in the same folder as current config file /etc/amazon/efs/efs-utils.conf. Consider update the new config file to latest config file. Use the default value [fips_mode_enabled = False].
Warning: config file does not have fips_mode_enabled item in section mount.. You should be able to find a new config file in the same folder as current config file /etc/amazon/efs/efs-utils.conf. Consider update the new config file to latest config file. Use the default value [fips_mode_enabled = False].
E0204 15:05:00.521299 1 driver.go:97] GRPC error: rpc error: code = Internal desc = Could not mount "fs-041c0971c1ac6d6b8:/" at "/var/lib/kubelet/pods/ae0ae89f-e524-46a3-bf24-d4535ff56f00/volumes/kubernetes.io~csi/pvc-d640335f-e9f8-426b-95cb-71e354d2ec1d/mount": mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t efs -o accesspoint=fsap-0682297c29b3d9a8a,tls fs-041c0971c1ac6d6b8:/ /var/lib/kubelet/pods/ae0ae89f-e524-46a3-bf24-d4535ff56f00/volumes/kubernetes.io~csi/pvc-d640335f-e9f8-426b-95cb-71e354d2ec1d/mount
Output: Failed to resolve "fs-041c0971c1ac6d6b8.efs.us-west-2.amazonaws.com".
The file system mount target ip address cannot be found, please pass mount target ip address via mount options.
User: arn:aws:sts::607362164682:assumed-role/nodes.test-cluster-20151.k8s.local/i-04ca571056ad388f3 is not authorized to perform: elasticfilesystem:DescribeMountTargets on the specified resource
Warning: config file does not have fips_mode_enabled item in section mount.. You should be able to find a new config file in the same folder as current config file /etc/amazon/efs/efs-utils.conf. Consider update the new config file to latest config file. Use the default value [fips_mode_enabled = False].
Warning: config file does not have fips_mode_enabled item in section mount.. You should be able to find a new config file in the same folder as current config file /etc/amazon/efs/efs-utils.conf. Consider update the new config file to latest config file. Use the default value [fips_mode_enabled = False].
I0204 15:05:01.049478 1 node.go:290] NodeGetCapabilities: called with args
I0204 15:05:01.050685 1 node.go:52] NodePublishVolume: called with args volume_id:"fs-041c0971c1ac6d6b8::fsap-0682297c29b3d9a8a" target_path:"/var/lib/kubelet/pods/ae0ae89f-e524-46a3-bf24-d4535ff56f00/volumes/kubernetes.io~csi/pvc-d640335f-e9f8-426b-95cb-71e354d2ec1d/mount" volume_capability:<mount:<> access_mode:<mode:SINGLE_NODE_WRITER > > volume_context:<key:"storage.kubernetes.io/csiProvisionerIdentity" value:"1675522928947-8081-efs.csi.aws.com" >
I0204 15:05:01.050751 1 node.go:178] NodePublishVolume: creating dir /var/lib/kubelet/pods/ae0ae89f-e524-46a3-bf24-d4535ff56f00/volumes/kubernetes.io~csi/pvc-d640335f-e9f8-426b-95cb-71e354d2ec1d/mount
I0204 15:05:01.050911 1 node.go:183] NodePublishVolume: mounting fs-041c0971c1ac6d6b8:/ at /var/lib/kubelet/pods/ae0ae89f-e524-46a3-bf24-d4535ff56f00/volumes/kubernetes.io~csi/pvc-d640335f-e9f8-426b-95cb-71e354d2ec1d/mount with options [accesspoint=fsap-0682297c29b3d9a8a tls]
E0204 15:05:01.457603 1 mount_linux.go:184] Mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t efs -o accesspoint=fsap-0682297c29b3d9a8a,tls fs-041c0971c1ac6d6b8:/ /var/lib/kubelet/pods/ae0ae89f-e524-46a3-bf24-d4535ff56f00/volumes/kubernetes.io~csi/pvc-d640335f-e9f8-426b-95cb-71e354d2ec1d/mount
Output: Failed to resolve "fs-041c0971c1ac6d6b8.efs.us-west-2.amazonaws.com".
The file system mount target ip address cannot be found, please pass mount target ip address via mount options.
User: arn:aws:sts::607362164682:assumed-role/nodes.test-cluster-20151.k8s.local/i-04ca571056ad388f3 is not authorized to perform: elasticfilesystem:DescribeMountTargets on the specified resource
Warning: config file does not have fips_mode_enabled item in section mount.. You should be able to find a new config file in the same folder as current config file /etc/amazon/efs/efs-utils.conf. Consider update the new config file to latest config file. Use the default value [fips_mode_enabled = False].
Warning: config file does not have fips_mode_enabled item in section mount.. You should be able to find a new config file in the same folder as current config file /etc/amazon/efs/efs-utils.conf. Consider update the new config file to latest config file. Use the default value [fips_mode_enabled = False].
E0204 15:05:01.458107 1 driver.go:97] GRPC error: rpc error: code = Internal desc = Could not mount "fs-041c0971c1ac6d6b8:/" at "/var/lib/kubelet/pods/ae0ae89f-e524-46a3-bf24-d4535ff56f00/volumes/kubernetes.io~csi/pvc-d640335f-e9f8-426b-95cb-71e354d2ec1d/mount": mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t efs -o accesspoint=fsap-0682297c29b3d9a8a,tls fs-041c0971c1ac6d6b8:/ /var/lib/kubelet/pods/ae0ae89f-e524-46a3-bf24-d4535ff56f00/volumes/kubernetes.io~csi/pvc-d640335f-e9f8-426b-95cb-71e354d2ec1d/mount
Output: Failed to resolve "fs-041c0971c1ac6d6b8.efs.us-west-2.amazonaws.com".
The file system mount target ip address cannot be found, please pass mount target ip address via mount options.
User: arn:aws:sts::607362164682:assumed-role/nodes.test-cluster-20151.k8s.local/i-04ca571056ad388f3 is not authorized to perform: elasticfilesystem:DescribeMountTargets on the specified resource
Warning: config file does not have fips_mode_enabled item in section mount.. You should be able to find a new config file in the same folder as current config file /etc/amazon/efs/efs-utils.conf. Consider update the new config file to latest config file. Use the default value [fips_mode_enabled = False].
Warning: config file does not have fips_mode_enabled item in section mount.. You should be able to find a new config file in the same folder as current config file /etc/amazon/efs/efs-utils.conf. Consider update the new config file to latest config file. Use the default value [fips_mode_enabled = False].
I0204 15:05:02.554131 1 node.go:290] NodeGetCapabilities: called with args
I0204 15:05:02.555573 1 node.go:52] NodePublishVolume: called with args volume_id:"fs-041c0971c1ac6d6b8::fsap-0682297c29b3d9a8a" target_path:"/var/lib/kubelet/pods/ae0ae89f-e524-46a3-bf24-d4535ff56f00/volumes/kubernetes.io~csi/pvc-d640335f-e9f8-426b-95cb-71e354d2ec1d/mount" volume_capability:<mount:<> access_mode:<mode:SINGLE_NODE_WRITER > > volume_context:<key:"storage.kubernetes.io/csiProvisionerIdentity" value:"1675522928947-8081-efs.csi.aws.com" >
I0204 15:05:02.555639 1 node.go:178] NodePublishVolume: creating dir /var/lib/kubelet/pods/ae0ae89f-e524-46a3-bf24-d4535ff56f00/volumes/kubernetes.io~csi/pvc-d640335f-e9f8-426b-95cb-71e354d2ec1d/mount
I0204 15:05:02.555701 1 node.go:183] NodePublishVolume: mounting fs-041c0971c1ac6d6b8:/ at /var/lib/kubelet/pods/ae0ae89f-e524-46a3-bf24-d4535ff56f00/volumes/kubernetes.io~csi/pvc-d640335f-e9f8-426b-95cb-71e354d2ec1d/mount with options [accesspoint=fsap-0682297c29b3d9a8a tls]
E0204 15:05:02.974891 1 mount_linux.go:184] Mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t efs -o accesspoint=fsap-0682297c29b3d9a8a,tls fs-041c0971c1ac6d6b8:/ /var/lib/kubelet/pods/ae0ae89f-e524-46a3-bf24-d4535ff56f00/volumes/kubernetes.io~csi/pvc-d640335f-e9f8-426b-95cb-71e354d2ec1d/mount
Output: Failed to resolve "fs-041c0971c1ac6d6b8.efs.us-west-2.amazonaws.com".
The file system mount target ip address cannot be found, please pass mount target ip address via mount options.
User: arn:aws:sts::607362164682:assumed-role/nodes.test-cluster-20151.k8s.local/i-04ca571056ad388f3 is not authorized to perform: elasticfilesystem:DescribeMountTargets on the specified resource
Warning: config file does not have fips_mode_enabled item in section mount.. You should be able to find a new config file in the same folder as current config file /etc/amazon/efs/efs-utils.conf. Consider update the new config file to latest config file. Use the default value [fips_mode_enabled = False].
Warning: config file does not have fips_mode_enabled item in section mount.. You should be able to find a new config file in the same folder as current config file /etc/amazon/efs/efs-utils.conf. Consider update the new config file to latest config file. Use the default value [fips_mode_enabled = False].
E0204 15:05:02.975242 1 driver.go:97] GRPC error: rpc error: code = Internal desc = Could not mount "fs-041c0971c1ac6d6b8:/" at "/var/lib/kubelet/pods/ae0ae89f-e524-46a3-bf24-d4535ff56f00/volumes/kubernetes.io~csi/pvc-d640335f-e9f8-426b-95cb-71e354d2ec1d/mount": mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t efs -o accesspoint=fsap-0682297c29b3d9a8a,tls fs-041c0971c1ac6d6b8:/ /var/lib/kubelet/pods/ae0ae89f-e524-46a3-bf24-d4535ff56f00/volumes/kubernetes.io~csi/pvc-d640335f-e9f8-426b-95cb-71e354d2ec1d/mount
Output: Failed to resolve "fs-041c0971c1ac6d6b8.efs.us-west-2.amazonaws.com".
The file system mount target ip address cannot be found, please pass mount target ip address via mount options.
User: arn:aws:sts::607362164682:assumed-role/nodes.test-cluster-20151.k8s.local/i-04ca571056ad388f3 is not authorized to perform: elasticfilesystem:DescribeMountTargets on the specified resource
Warning: config file does not have fips_mode_enabled item in section mount.. You should be able to find a new config file in the same folder as current config file /etc/amazon/efs/efs-utils.conf. Consider update the new config file to latest config file. Use the default value [fips_mode_enabled = False].
Warning: config file does not have fips_mode_enabled item in section mount.. You should be able to find a new config file in the same folder as current config file /etc/amazon/efs/efs-utils.conf. Consider update the new config file to latest config file. Use the default value [fips_mode_enabled = False].
I0204 15:05:05.056767 1 node.go:290] NodeGetCapabilities: called with args
I0204 15:05:05.058036 1 node.go:52] NodePublishVolume: called with args volume_id:"fs-041c0971c1ac6d6b8::fsap-0682297c29b3d9a8a" target_path:"/var/lib/kubelet/pods/ae0ae89f-e524-46a3-bf24-d4535ff56f00/volumes/kubernetes.io~csi/pvc-d640335f-e9f8-426b-95cb-71e354d2ec1d/mount" volume_capability:<mount:<> access_mode:<mode:SINGLE_NODE_WRITER > > volume_context:<key:"storage.kubernetes.io/csiProvisionerIdentity" value:"1675522928947-8081-efs.csi.aws.com" >
I0204 15:05:05.058233 1 node.go:178] NodePublishVolume: creating dir /var/lib/kubelet/pods/ae0ae89f-e524-46a3-bf24-d4535ff56f00/volumes/kubernetes.io~csi/pvc-d640335f-e9f8-426b-95cb-71e354d2ec1d/mount
I0204 15:05:05.058346 1 node.go:183] NodePublishVolume: mounting fs-041c0971c1ac6d6b8:/ at /var/lib/kubelet/pods/ae0ae89f-e524-46a3-bf24-d4535ff56f00/volumes/kubernetes.io~csi/pvc-d640335f-e9f8-426b-95cb-71e354d2ec1d/mount with options [accesspoint=fsap-0682297c29b3d9a8a tls]
E0204 15:05:05.499456 1 mount_linux.go:184] Mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t efs -o accesspoint=fsap-0682297c29b3d9a8a,tls fs-041c0971c1ac6d6b8:/ /var/lib/kubelet/pods/ae0ae89f-e524-46a3-bf24-d4535ff56f00/volumes/kubernetes.io~csi/pvc-d640335f-e9f8-426b-95cb-71e354d2ec1d/mount
Output: Failed to resolve "fs-041c0971c1ac6d6b8.efs.us-west-2.amazonaws.com".
The file system mount target ip address cannot be found, please pass mount target ip address via mount options.
User: arn:aws:sts::607362164682:assumed-role/nodes.test-cluster-20151.k8s.local/i-04ca571056ad388f3 is not authorized to perform: elasticfilesystem:DescribeMountTargets on the specified resource
Warning: config file does not have fips_mode_enabled item in section mount.. You should be able to find a new config file in the same folder as current config file /etc/amazon/efs/efs-utils.conf. Consider update the new config file to latest config file. Use the default value [fips_mode_enabled = False].
Warning: config file does not have fips_mode_enabled item in section mount.. You should be able to find a new config file in the same folder as current config file /etc/amazon/efs/efs-utils.conf. Consider update the new config file to latest config file. Use the default value [fips_mode_enabled = False].
E0204 15:05:05.499567 1 driver.go:97] GRPC error: rpc error: code = Internal desc = Could not mount "fs-041c0971c1ac6d6b8:/" at "/var/lib/kubelet/pods/ae0ae89f-e524-46a3-bf24-d4535ff56f00/volumes/kubernetes.io~csi/pvc-d640335f-e9f8-426b-95cb-71e354d2ec1d/mount": mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t efs -o accesspoint=fsap-0682297c29b3d9a8a,tls fs-041c0971c1ac6d6b8:/ /var/lib/kubelet/pods/ae0ae89f-e524-46a3-bf24-d4535ff56f00/volumes/kubernetes.io~csi/pvc-d640335f-e9f8-426b-95cb-71e354d2ec1d/mount
Output: Failed to resolve "fs-041c0971c1ac6d6b8.efs.us-west-2.amazonaws.com".
The file system mount target ip address cannot be found, please pass mount target ip address via mount options.
User: arn:aws:sts::607362164682:assumed-role/nodes.test-cluster-20151.k8s.local/i-04ca571056ad388f3 is not authorized to perform: elasticfilesystem:DescribeMountTargets on the specified resource
Warning: config file does not have fips_mode_enabled item in section mount.. You should be able to find a new config file in the same folder as current config file /etc/amazon/efs/efs-utils.conf. Consider update the new config file to latest config file. Use the default value [fips_mode_enabled = False].
Warning: config file does not have fips_mode_enabled item in section mount.. You should be able to find a new config file in the same folder as current config file /etc/amazon/efs/efs-utils.conf. Consider update the new config file to latest config file. Use the default value [fips_mode_enabled = False].
I0204 15:05:09.566754 1 node.go:290] NodeGetCapabilities: called with args
I0204 15:05:09.567933 1 node.go:52] NodePublishVolume: called with args volume_id:"fs-041c0971c1ac6d6b8::fsap-0682297c29b3d9a8a" target_path:"/var/lib/kubelet/pods/ae0ae89f-e524-46a3-bf24-d4535ff56f00/volumes/kubernetes.io~csi/pvc-d640335f-e9f8-426b-95cb-71e354d2ec1d/mount" volume_capability:<mount:<> access_mode:<mode:SINGLE_NODE_WRITER > > volume_context:<key:"storage.kubernetes.io/csiProvisionerIdentity" value:"1675522928947-8081-efs.csi.aws.com" >
I0204 15:05:09.568027 1 node.go:178] NodePublishVolume: creating dir /var/lib/kubelet/pods/ae0ae89f-e524-46a3-bf24-d4535ff56f00/volumes/kubernetes.io~csi/pvc-d640335f-e9f8-426b-95cb-71e354d2ec1d/mount
I0204 15:05:09.568102 1 node.go:183] NodePublishVolume: mounting fs-041c0971c1ac6d6b8:/ at /var/lib/kubelet/pods/ae0ae89f-e524-46a3-bf24-d4535ff56f00/volumes/kubernetes.io~csi/pvc-d640335f-e9f8-426b-95cb-71e354d2ec1d/mount with options
... skipping 24 lines ...
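The repeated mount failures above share one root cause: efs-utils cannot resolve fs-041c0971c1ac6d6b8.efs.us-west-2.amazonaws.com, and its API fallback is blocked because the node role lacks elasticfilesystem:DescribeMountTargets. The error text itself suggests the two remedies: grant that permission to the node instance role, or pass the mount target IP explicitly as a mount option. A hedged sketch of the latter, built on the k8s.io/mount-utils interface that the mount_linux.go frames above belong to; the mounttargetip option name and the 10.0.0.10 address are assumptions for illustration, not values taken from this run:

    package main

    import (
        "log"

        mount "k8s.io/mount-utils"
    )

    func main() {
        // Exec-based mounter, the same family of helpers as mount_linux.go above.
        mounter := mount.New("")

        source := "fs-041c0971c1ac6d6b8:/"
        target := "/mnt/efs" // placeholder target directory; must already exist

        // "mounttargetip=..." is the escape hatch the efs-utils error alludes to
        // ("please pass mount target ip address via mount options"); the address
        // below is a stand-in for a real mount-target IP in the node's AZ.
        options := []string{"tls", "accesspoint=fsap-0682297c29b3d9a8a", "mounttargetip=10.0.0.10"}

        if err := mounter.Mount(source, target, "efs", options); err != nil {
            log.Fatalf("mount failed: %v", err)
        }
        log.Printf("mounted %s at %s", source, target)
    }

In CI, granting elasticfilesystem:DescribeMountTargets to the node instance role named in the error is usually the less invasive fix, since it keeps volume definitions free of environment-specific IP addresses.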
I0204 15:06:05.433719 1 node.go:290] NodeGetCapabilities: called with args
I0204 15:06:17.025107 1 reaper.go:107] reaper: waited for process &{42 1 90 42 42 stunnel5}
I0204 15:07:01.123062 1 reaper.go:107] reaper: waited for process &{80 1 90 80 80 stunnel5}
I0204 15:07:01.127500 1 reaper.go:107] reaper: waited for process &{74 1 90 74 74 stunnel5}
I0204 15:07:26.825365 1 node.go:290] NodeGetCapabilities: called with args
I0204 15:07:26.828692 1 node.go:52] NodePublishVolume: called with args volume_id:"fs-041c0971c1ac6d6b8" staging_target_path:"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvmrzwj" target_path:"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvmrzwj/37f9b118-b202-4a3c-b171-dfe370e6efeb" volume_capability:<block:<> access_mode:<mode:SINGLE_NODE_WRITER > >
E0204 15:07:26.828751 1 driver.go:97] GRPC error: rpc error: code = InvalidArgument desc = Volume capability not supported: only filesystem volumes are supported
I0204 15:07:27.424151 1 node.go:290] NodeGetCapabilities: called with args
I0204 15:07:27.425198 1 node.go:52] NodePublishVolume: called with args volume_id:"fs-041c0971c1ac6d6b8" staging_target_path:"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvmrzwj" target_path:"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvmrzwj/37f9b118-b202-4a3c-b171-dfe370e6efeb" volume_capability:<block:<> access_mode:<mode:SINGLE_NODE_WRITER > >
E0204 15:07:27.425258 1 driver.go:97] GRPC error: rpc error: code = InvalidArgument desc = Volume capability not supported: only filesystem volumes are supported
I0204 15:07:28.427287 1 node.go:290] NodeGetCapabilities: called with args
I0204 15:07:28.428619 1 node.go:52] NodePublishVolume: called with args volume_id:"fs-041c0971c1ac6d6b8" staging_target_path:"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvmrzwj" target_path:"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvmrzwj/37f9b118-b202-4a3c-b171-dfe370e6efeb" volume_capability:<block:<> access_mode:<mode:SINGLE_NODE_WRITER > >
E0204 15:07:28.428671 1 driver.go:97] GRPC error: rpc error: code = InvalidArgument desc = Volume capability not supported: only filesystem volumes are supported
I0204 15:07:30.435075 1 node.go:290] NodeGetCapabilities: called with args
I0204 15:07:30.436263 1 node.go:52] NodePublishVolume: called with args volume_id:"fs-041c0971c1ac6d6b8" staging_target_path:"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvmrzwj" target_path:"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvmrzwj/37f9b118-b202-4a3c-b171-dfe370e6efeb" volume_capability:<block:<> access_mode:<mode:SINGLE_NODE_WRITER > >
E0204 15:07:30.436309 1 driver.go:97] GRPC error: rpc error: code = InvalidArgument desc = Volume capability not supported: only filesystem volumes are supported
I0204 15:07:34.447764 1 node.go:290] NodeGetCapabilities: called with args
I0204 15:07:34.448755 1 node.go:52] NodePublishVolume: called with args volume_id:"fs-041c0971c1ac6d6b8" staging_target_path:"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvmrzwj" target_path:"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvmrzwj/37f9b118-b202-4a3c-b171-dfe370e6efeb" volume_capability:<block:<> access_mode:<mode:SINGLE_NODE_WRITER > >
E0204 15:07:34.448810 1 driver.go:97] GRPC error: rpc error: code = InvalidArgument desc = Volume capability not supported: only filesystem volumes are supported
I0204 15:07:42.471011 1 node.go:290] NodeGetCapabilities: called with args
I0204 15:07:42.472144 1 node.go:52] NodePublishVolume: called with args volume_id:"fs-041c0971c1ac6d6b8" staging_target_path:"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvmrzwj" target_path:"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvmrzwj/37f9b118-b202-4a3c-b171-dfe370e6efeb" volume_capability:<block:<> access_mode:<mode:SINGLE_NODE_WRITER > >
E0204 15:07:42.472194 1 driver.go:97] GRPC error: rpc error: code = InvalidArgument desc = Volume capability not supported: only filesystem volumes are supported
I0204 15:07:58.518322 1 node.go:290] NodeGetCapabilities: called with args
I0204 15:07:58.520265 1 node.go:52] NodePublishVolume: called with args volume_id:"fs-041c0971c1ac6d6b8" staging_target_path:"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvmrzwj" target_path:"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvmrzwj/37f9b118-b202-4a3c-b171-dfe370e6efeb" volume_capability:<block:<> access_mode:<mode:SINGLE_NODE_WRITER > >
E0204 15:07:58.520320 1 driver.go:97] GRPC error: rpc error: code = InvalidArgument desc = Volume capability not supported: only filesystem volumes are supported
I0204 15:08:30.621025 1 node.go:290] NodeGetCapabilities: called with args
I0204 15:08:30.622128 1 node.go:52] NodePublishVolume: called with args volume_id:"fs-041c0971c1ac6d6b8" staging_target_path:"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvmrzwj" target_path:"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvmrzwj/37f9b118-b202-4a3c-b171-dfe370e6efeb" volume_capability:<block:<> access_mode:<mode:SINGLE_NODE_WRITER > >
E0204 15:08:30.622194 1 driver.go:97] GRPC error: rpc error: code = InvalidArgument desc = Volume capability not supported: only filesystem volumes are supported
I0204 15:09:34.710976 1 node.go:290] NodeGetCapabilities: called with args
I0204 15:09:34.712144 1 node.go:52] NodePublishVolume: called with args volume_id:"fs-041c0971c1ac6d6b8" staging_target_path:"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvmrzwj" target_path:"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvmrzwj/37f9b118-b202-4a3c-b171-dfe370e6efeb" volume_capability:<block:<> access_mode:<mode:SINGLE_NODE_WRITER > >
E0204 15:09:34.712204 1 driver.go:97] GRPC error: rpc error: code = InvalidArgument desc = Volume capability not supported: only filesystem volumes are supported
I0204 15:10:37.477869 1 node.go:290] NodeGetCapabilities: called with args
I0204 15:10:37.482435 1 node.go:290] NodeGetCapabilities: called with args
I0204 15:10:37.483510 1 node.go:52] NodePublishVolume: called with args volume_id:"fs-041c0971c1ac6d6b8::fsap-0d57b4bb3ec833acf" target_path:"/var/lib/kubelet/pods/5a101859-9f26-4168-9539-bf153162d062/volumes/kubernetes.io~csi/pvc-585e93b1-5daf-4231-84c1-180cb28c88e1/mount" volume_capability:<mount:<> access_mode:<mode:SINGLE_NODE_WRITER > > volume_context:<key:"storage.kubernetes.io/csiProvisionerIdentity" value:"1675522928947-8081-efs.csi.aws.com" >
I0204 15:10:37.483590 1 node.go:178] NodePublishVolume: creating dir /var/lib/kubelet/pods/5a101859-9f26-4168-9539-bf153162d062/volumes/kubernetes.io~csi/pvc-585e93b1-5daf-4231-84c1-180cb28c88e1/mount
I0204 15:10:37.483657 1 node.go:183] NodePublishVolume: mounting fs-041c0971c1ac6d6b8:/ at /var/lib/kubelet/pods/5a101859-9f26-4168-9539-bf153162d062/volumes/kubernetes.io~csi/pvc-585e93b1-5daf-4231-84c1-180cb28c88e1/mount with options [accesspoint=fsap-0d57b4bb3ec833acf tls]
I0204 15:10:37.797156 1 node.go:188] NodePublishVolume: /var/lib/kubelet/pods/5a101859-9f26-4168-9539-bf153162d062/volumes/kubernetes.io~csi/pvc-585e93b1-5daf-4231-84c1-180cb28c88e1/mount was mounted
... skipping 824 lines ...
volume:vol-01b36e603b7854c51
subnet:subnet-05fb4f30510dcfad7
volume:vol-025bd6cbec40cd7ce
security-group:sg-00056d6d2975d3694
internet-gateway:igw-0af90fa80e3be2ce2
dhcp-options:dopt-0aa3e3f8c057379e2
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:168","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2023-02-04T15:15:30Z"}
++ early_exit_handler
++ '[' -n 165 ']'
++ kill -TERM 165
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 5 lines ...
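One last note on the node-plugin log above: the long run of InvalidArgument rejections ("only filesystem volumes are supported") is the expected outcome of the volume-mode test patterns, which deliberately publish a block volume_capability against a file-system-only driver. A minimal sketch of the standard CSI validation pattern involved, illustrative rather than the driver's exact code:

    package main

    import (
        "fmt"

        "github.com/container-storage-interface/spec/lib/go/csi"
        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // validateVolumeCapability rejects raw-block requests up front; this is the kind
    // of check that turns a block-mode NodePublishVolume call into the
    // InvalidArgument responses seen in the log.
    func validateVolumeCapability(volCap *csi.VolumeCapability) error {
        if volCap == nil {
            return status.Error(codes.InvalidArgument, "Volume capability not provided")
        }
        if volCap.GetBlock() != nil {
            return status.Error(codes.InvalidArgument,
                "Volume capability not supported: only filesystem volumes are supported")
        }
        if volCap.GetMount() == nil {
            return status.Error(codes.InvalidArgument, "Volume capability must be a mount volume")
        }
        return nil
    }

    func main() {
        // Simulate the block-mode capability the e2e suite sends.
        blockCap := &csi.VolumeCapability{
            AccessType: &csi.VolumeCapability_Block{Block: &csi.VolumeCapability_BlockVolume{}},
            AccessMode: &csi.VolumeCapability_AccessMode{Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_WRITER},
        }
        if err := validateVolumeCapability(blockCap); err != nil {
            fmt.Println(err) // rpc error: code = InvalidArgument desc = Volume capability not supported: ...
        }
    }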