PR: leakingtapan: Add e2e tests
Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2019-10-11 06:10
Elapsed: 33m35s
Revision: da5f1420f89823061f3c7bf6656da46f2f817ecb
Refs: 103

No Test Failures!


Error lines from build-log.txt

... skipping 2260 lines ...

Using cluster from kubectl context: test-cluster-9670.k8s.local

Validating cluster test-cluster-9670.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-9670-k8s-tcdcnn-1311655192.us-west-2.elb.amazonaws.com/api/v1/nodes: dial tcp: lookup api-test-cluster-9670-k8s-tcdcnn-1311655192.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
Using cluster from kubectl context: test-cluster-9670.k8s.local

Validating cluster test-cluster-9670.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-9670-k8s-tcdcnn-1311655192.us-west-2.elb.amazonaws.com/api/v1/nodes: dial tcp: lookup api-test-cluster-9670-k8s-tcdcnn-1311655192.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
Using cluster from kubectl context: test-cluster-9670.k8s.local

Validating cluster test-cluster-9670.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-9670-k8s-tcdcnn-1311655192.us-west-2.elb.amazonaws.com/api/v1/nodes: dial tcp: lookup api-test-cluster-9670-k8s-tcdcnn-1311655192.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
Using cluster from kubectl context: test-cluster-9670.k8s.local

Validating cluster test-cluster-9670.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-9670-k8s-tcdcnn-1311655192.us-west-2.elb.amazonaws.com/api/v1/nodes: EOF
Using cluster from kubectl context: test-cluster-9670.k8s.local

Validating cluster test-cluster-9670.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-9670-k8s-tcdcnn-1311655192.us-west-2.elb.amazonaws.com/api/v1/nodes: EOF
Using cluster from kubectl context: test-cluster-9670.k8s.local

Validating cluster test-cluster-9670.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-9670-k8s-tcdcnn-1311655192.us-west-2.elb.amazonaws.com/api/v1/nodes: EOF
Using cluster from kubectl context: test-cluster-9670.k8s.local

Validating cluster test-cluster-9670.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-9670-k8s-tcdcnn-1311655192.us-west-2.elb.amazonaws.com/api/v1/nodes: EOF
Using cluster from kubectl context: test-cluster-9670.k8s.local

Validating cluster test-cluster-9670.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-9670-k8s-tcdcnn-1311655192.us-west-2.elb.amazonaws.com/api/v1/nodes: EOF
Using cluster from kubectl context: test-cluster-9670.k8s.local

Validating cluster test-cluster-9670.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-9670-k8s-tcdcnn-1311655192.us-west-2.elb.amazonaws.com/api/v1/nodes: EOF
Using cluster from kubectl context: test-cluster-9670.k8s.local

Validating cluster test-cluster-9670.k8s.local

INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
... skipping 7 lines ...
KIND	NAME			MESSAGE
Machine	i-013edd97f4095e82b	machine "i-013edd97f4095e82b" has not yet joined cluster
Machine	i-06689ecabba3eda16	machine "i-06689ecabba3eda16" has not yet joined cluster
Machine	i-09a113631f2699e9d	machine "i-09a113631f2699e9d" has not yet joined cluster
Machine	i-0fc55c3dc3916f40e	machine "i-0fc55c3dc3916f40e" has not yet joined cluster

Validation Failed
Using cluster from kubectl context: test-cluster-9670.k8s.local

Validating cluster test-cluster-9670.k8s.local

INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
... skipping 12 lines ...
Node	ip-172-20-47-113.us-west-2.compute.internal		master "ip-172-20-47-113.us-west-2.compute.internal" is not ready
Pod	kube-system/aws-node-grpsj				kube-system pod "aws-node-grpsj" is pending
Pod	kube-system/dns-controller-56455df565-8g8qs		kube-system pod "dns-controller-56455df565-8g8qs" is pending
Pod	kube-system/kube-dns-66b6848cf6-4w2bd			kube-system pod "kube-dns-66b6848cf6-4w2bd" is pending
Pod	kube-system/kube-dns-autoscaler-577b4774b5-78w22	kube-system pod "kube-dns-autoscaler-577b4774b5-78w22" is pending

Validation Failed
Using cluster from kubectl context: test-cluster-9670.k8s.local

Validating cluster test-cluster-9670.k8s.local

INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
... skipping 12 lines ...
Pod	kube-system/dns-controller-56455df565-8g8qs					kube-system pod "dns-controller-56455df565-8g8qs" is pending
Pod	kube-system/etcd-manager-events-ip-172-20-47-113.us-west-2.compute.internal	kube-system pod "etcd-manager-events-ip-172-20-47-113.us-west-2.compute.internal" is pending
Pod	kube-system/kube-controller-manager-ip-172-20-47-113.us-west-2.compute.internal	kube-system pod "kube-controller-manager-ip-172-20-47-113.us-west-2.compute.internal" is pending
Pod	kube-system/kube-dns-66b6848cf6-4w2bd						kube-system pod "kube-dns-66b6848cf6-4w2bd" is pending
Pod	kube-system/kube-dns-autoscaler-577b4774b5-78w22				kube-system pod "kube-dns-autoscaler-577b4774b5-78w22" is pending

Validation Failed
Using cluster from kubectl context: test-cluster-9670.k8s.local

Validating cluster test-cluster-9670.k8s.local

INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
... skipping 8 lines ...
ip-172-20-61-72.us-west-2.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME					MESSAGE
Pod	kube-system/kube-dns-66b6848cf6-4w2bd	kube-system pod "kube-dns-66b6848cf6-4w2bd" is pending

Validation Failed
Using cluster from kubectl context: test-cluster-9670.k8s.local

Validating cluster test-cluster-9670.k8s.local

INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
... skipping 333 lines ...
[AfterEach] [fsx-csi-e2e] Dynamic Provisioning
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.16.1/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "fsx-8081".
STEP: Found 3 events.
Oct 11 06:29:59.641: INFO: At 2019-10-11 06:24:59 +0000 UTC - event for pvc-9cqxt: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "fsx.csi.aws.com" or manually created by system administrator
Oct 11 06:29:59.641: INFO: At 2019-10-11 06:24:59 +0000 UTC - event for pvc-9cqxt: {fsx.csi.aws.com_fsx-csi-controller-0_ad4365ca-ebef-11e9-8896-267a1e4481a2 } Provisioning: External provisioner is provisioning volume for claim "fsx-8081/pvc-9cqxt"
Oct 11 06:29:59.641: INFO: At 2019-10-11 06:29:59 +0000 UTC - event for pvc-9cqxt: {fsx.csi.aws.com_fsx-csi-controller-0_ad4365ca-ebef-11e9-8896-267a1e4481a2 } ProvisioningFailed: failed to provision volume with StorageClass "fsx-8081-fsx.csi.aws.com-dynamic-sc-t4llv": rpc error: code = DeadlineExceeded desc = context deadline exceeded
Oct 11 06:29:59.697: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Oct 11 06:29:59.697: INFO: 
Oct 11 06:29:59.865: INFO: 
Logging node info for node ip-172-20-47-113.us-west-2.compute.internal
Oct 11 06:29:59.923: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-47-113.us-west-2.compute.internal   /api/v1/nodes/ip-172-20-47-113.us-west-2.compute.internal ffab7c76-87d4-4d7f-ac9a-7e886de16448 1264 0 2019-10-11 06:21:19 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:m3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-west-2 failure-domain.beta.kubernetes.io/zone:us-west-2a kops.k8s.io/instancegroup:master-us-west-2a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-47-113.us-west-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:172.20.128.0/24,DoNotUse_ExternalID:,ProviderID:aws:///us-west-2a/i-09a113631f2699e9d,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{39 0} {<nil>} 39 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{64351657984 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949539328 0} {<nil>} 3856972Ki BinarySI},pods: {{12 0} {<nil>} 12 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{39 0} {<nil>} 39 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{57916492090 0} {<nil>} 57916492090 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3844681728 0} {<nil>} 3754572Ki BinarySI},pods: {{12 0} {<nil>} 12 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-10-11 06:29:30 +0000 UTC,LastTransitionTime:2019-10-11 06:21:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-10-11 06:29:30 +0000 UTC,LastTransitionTime:2019-10-11 06:21:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-10-11 06:29:30 +0000 UTC,LastTransitionTime:2019-10-11 06:21:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-10-11 06:29:30 +0000 UTC,LastTransitionTime:2019-10-11 06:21:59 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.47.113,},NodeAddress{Type:ExternalIP,Address:54.188.118.200,},NodeAddress{Type:InternalDNS,Address:ip-172-20-47-113.us-west-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-188-118-200.us-west-2.compute.amazonaws.com,},NodeAddress{Type:Hostname,Address:ip-172-20-47-113.us-west-2.compute.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c787cbd5a510488a8f4f6b7d02369b84,SystemUUID:EC2DD4EC-5B4B-9273-36B2-34299D588258,BootID:a2cac26d-5374-401d-a8d3-b97699c3e345,KernelVersion:4.9.0-9-amd64,OSImage:Debian GNU/Linux 9 (stretch),ContainerRuntimeVersion:docker://18.6.3,KubeletVersion:v1.15.3,KubeProxyVersion:v1.15.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[kopeio/etcd-manager@sha256:a9b0df49218a65b9d863a7f7611aa5cff88e9a3f69d0eeff12ac7d6512d6064c 
kopeio/etcd-manager:3.0.20190328],SizeBytes:556256149,},ContainerImage{Names:[protokube:1.14.0-alpha.1],SizeBytes:293908354,},ContainerImage{Names:[602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni@sha256:0d0f4deb0236bc2d08e1638754cdf6387d4071ee0112d6cc5846f307128229ad 602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni:1.3.3],SizeBytes:250524732,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:120c31707be05d6ff5bd05e56e95cac09cdb75e3b533b91fd2c6a2b771c19609 k8s.gcr.io/kube-apiserver:v1.15.3],SizeBytes:206843838,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:0bf6211a0d8cb1c444aa3148941ae4dfbb43dfbbd2a7a9177a9594535fbed838 k8s.gcr.io/kube-controller-manager:v1.15.3],SizeBytes:158743102,},ContainerImage{Names:[kope/dns-controller@sha256:8aac4f9261884452cc486da2a2813517425e3aeef34a24d223259429a12d6d50 kope/dns-controller:1.14.0-alpha.1],SizeBytes:124703488,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6f910100972afda5b14037ccbba0cd6aa091bb773ae749f46b03f395380935c9 k8s.gcr.io/kube-proxy:v1.15.3],SizeBytes:82408284,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:e365d380e57c75ee35f7cda99df5aa8c96e86287a5d3b52847e5d67d27ed082a k8s.gcr.io/kube-scheduler:v1.15.3],SizeBytes:81107582,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:163ac025575b775d1c0f9bf0bdd0f086883171eb475b5068e7defa4ca9e76516 k8s.gcr.io/pause-amd64:3.0],SizeBytes:746888,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 11 06:29:59.925: INFO: 
... skipping 90 lines ...
• Failure [319.783 seconds]
[fsx-csi-e2e] Dynamic Provisioning
/home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/dynamic_provisioning_test.go:30
  should create a volume on demand with flock mount option [It]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/dynamic_provisioning_test.go:80

  Unexpected error:
      <*errors.errorString | 0xc0004957e0>: {
          s: "PersistentVolumeClaims [pvc-9cqxt] not all in phase Bound within 5m0s",
      }
      PersistentVolumeClaims [pvc-9cqxt] not all in phase Bound within 5m0s
  occurred

... skipping 317 lines ...
rip    0x460c21
rflags 0x286
cs     0x33
fs     0x0
gs     0x0
*** Test killed with quit: ran too long (10m0s).
FAIL	github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e	600.010s
Removing driver
daemonset.apps "fsx-csi-node" deleted
serviceaccount "fsx-csi-controller-sa" deleted
clusterrole.rbac.authorization.k8s.io "fsx-csi-external-provisioner-role" deleted
clusterrolebinding.rbac.authorization.k8s.io "fsx-csi-external-provisioner-binding" deleted
statefulset.apps "fsx-csi-controller" deleted
... skipping 505 lines ...
	vpc:vpc-06dfa4ba4054955cc
	route-table:rtb-053e670363ee201cc
	subnet:subnet-07f873f10ca536867
	dhcp-options:dopt-064d3742b81029d13

not making progress deleting resources; giving up
2019/10/11 06:43:50 Failed to run tear down step: exit status 1
2019/10/11 06:43:50 exit status 1
exit status 1
Makefile:42: recipe for target 'test-e2e' failed
make: *** [test-e2e] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
[Barnacle] 2019/10/11 06:43:50 Cleaning up Docker data root...
[Barnacle] 2019/10/11 06:43:50 Removing all containers.
... skipping 25 lines ...