PR wongma7: Helm chart 1.0
Result: FAILURE
Tests: 1 failed / 23 succeeded
Started: 2021-07-16 23:05
Elapsed: 1h14m
Revision: 8355ae37c6f306d92f0848fd28e66e9cb12ccf87
Refs: 194

Test Failures


AWS FSx CSI Driver End-to-End Tests [fsx-csi-e2e] Dynamic Provisioning with s3 data repository should create a volume on demand with s3 as data repository 10m4s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=AWS\sFSx\sCSI\sDriver\sEnd\-to\-End\sTests\s\[fsx\-csi\-e2e\]\sDynamic\sProvisioning\swith\ss3\sdata\srepository\sshould\screate\sa\svolume\son\sdemand\swith\ss3\sas\sdata\srepository$'
/home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/dynamic_provisioning_test.go:151
Unexpected error:
    <*errors.errorString | 0xc0007d2020>: {
        s: "PersistentVolumeClaims [pvc-2vfvs] not all in phase Bound within 10m0s",
    }
    PersistentVolumeClaims [pvc-2vfvs] not all in phase Bound within 10m0s
occurred
/home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/testsuites/testsuites.go:155
				
Stdout/stderr from junit_03.xml



23 passed tests

126 skipped tests

Error lines from build-log.txt

... skipping 324 lines ...
  Installing : 7:device-mapper-libs-1.02.146-4.amzn2.0.2.x86_64           21/29 
  Installing : cryptsetup-libs-1.7.4-4.amzn2.x86_64                       22/29 
  Installing : elfutils-libs-0.176-2.amzn2.x86_64                         23/29 
  Installing : systemd-libs-219-78.amzn2.0.14.x86_64                      24/29 
  Installing : 1:dbus-libs-1.10.24-7.amzn2.x86_64                         25/29 
  Installing : systemd-219-78.amzn2.0.14.x86_64                           26/29 
Failed to get D-Bus connection: Operation not permitted
  Installing : elfutils-default-yama-scope-0.176-2.amzn2.noarch           27/29 
  Installing : 1:dbus-1.10.24-7.amzn2.x86_64                              28/29 
  Installing : libyaml-0.1.4-11.amzn2.0.2.x86_64                          29/29 
  Verifying  : gzip-1.5-10.amzn2.x86_64                                    1/29 
  Verifying  : elfutils-default-yama-scope-0.176-2.amzn2.noarch            2/29 
  Verifying  : cracklib-2.9.0-11.amzn2.0.2.x86_64                          3/29 
... skipping 604 lines ...
## Validating cluster test-cluster-16668.k8s.local
#
Using cluster from kubectl context: test-cluster-16668.k8s.local

Validating cluster test-cluster-16668.k8s.local

W0716 23:12:45.077552   17369 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-16668-k8-8s8gpm-827636806.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-16668-k8-8s8gpm-827636806.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0716 23:12:55.107825   17369 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-16668-k8-8s8gpm-827636806.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-16668-k8-8s8gpm-827636806.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0716 23:13:05.144782   17369 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-16668-k8-8s8gpm-827636806.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-16668-k8-8s8gpm-827636806.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0716 23:13:15.188179   17369 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-16668-k8-8s8gpm-827636806.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-16668-k8-8s8gpm-827636806.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0716 23:13:25.217454   17369 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-16668-k8-8s8gpm-827636806.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-16668-k8-8s8gpm-827636806.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0716 23:13:46.886260   17369 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the server ("") has prevented the request from succeeding (get nodes)
W0716 23:14:08.500594   17369 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the server ("") has prevented the request from succeeding (get nodes)
W0716 23:14:30.094987   17369 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the server ("") has prevented the request from succeeding (get nodes)
W0716 23:14:51.670419   17369 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the server ("") has prevented the request from succeeding (get nodes)
W0716 23:15:13.278417   17369 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the server ("") has prevented the request from succeeding (get nodes)
W0716 23:15:34.870007   17369 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the server ("") has prevented the request from succeeding (get nodes)
W0716 23:15:56.563075   17369 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the server ("") has prevented the request from succeeding (get nodes)
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c4.large	1	1	us-west-2a

NODE STATUS
... skipping 8 lines ...
Pod	kube-system/etcd-manager-main-ip-172-20-36-183.us-west-2.compute.internal	system-cluster-critical pod "etcd-manager-main-ip-172-20-36-183.us-west-2.compute.internal" is pending
Pod	kube-system/kube-apiserver-ip-172-20-36-183.us-west-2.compute.internal		system-cluster-critical pod "kube-apiserver-ip-172-20-36-183.us-west-2.compute.internal" is pending
Pod	kube-system/kube-controller-manager-ip-172-20-36-183.us-west-2.compute.internal	system-cluster-critical pod "kube-controller-manager-ip-172-20-36-183.us-west-2.compute.internal" is pending
Pod	kube-system/kube-proxy-ip-172-20-36-183.us-west-2.compute.internal		system-node-critical pod "kube-proxy-ip-172-20-36-183.us-west-2.compute.internal" is pending
Pod	kube-system/kube-scheduler-ip-172-20-36-183.us-west-2.compute.internal		system-cluster-critical pod "kube-scheduler-ip-172-20-36-183.us-west-2.compute.internal" is pending

Validation Failed
W0716 23:16:18.520013   17369 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c4.large	1	1	us-west-2a

... skipping 12 lines ...
Pod	kube-system/kops-controller-p6jbh						system-node-critical pod "kops-controller-p6jbh" is pending
Pod	kube-system/kube-apiserver-ip-172-20-36-183.us-west-2.compute.internal		system-cluster-critical pod "kube-apiserver-ip-172-20-36-183.us-west-2.compute.internal" is pending
Pod	kube-system/kube-controller-manager-ip-172-20-36-183.us-west-2.compute.internal	system-cluster-critical pod "kube-controller-manager-ip-172-20-36-183.us-west-2.compute.internal" is pending
Pod	kube-system/kube-proxy-ip-172-20-36-183.us-west-2.compute.internal		system-node-critical pod "kube-proxy-ip-172-20-36-183.us-west-2.compute.internal" is pending
Pod	kube-system/kube-scheduler-ip-172-20-36-183.us-west-2.compute.internal		system-cluster-critical pod "kube-scheduler-ip-172-20-36-183.us-west-2.compute.internal" is pending

Validation Failed
W0716 23:16:29.784458   17369 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c4.large	1	1	us-west-2a

... skipping 6 lines ...
Machine	i-0d14192438ad433fa				machine "i-0d14192438ad433fa" has not yet joined cluster
Pod	kube-system/coredns-5489b75945-cxmzl		system-cluster-critical pod "coredns-5489b75945-cxmzl" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-ft29r	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-ft29r" is pending
Pod	kube-system/dns-controller-8574dcc89d-qh8t6	system-cluster-critical pod "dns-controller-8574dcc89d-qh8t6" is pending
Pod	kube-system/kops-controller-p6jbh		system-node-critical pod "kops-controller-p6jbh" is pending

Validation Failed
W0716 23:16:40.913053   17369 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c4.large	1	1	us-west-2a

... skipping 6 lines ...
Machine	i-0d14192438ad433fa				machine "i-0d14192438ad433fa" has not yet joined cluster
Pod	kube-system/coredns-5489b75945-cxmzl		system-cluster-critical pod "coredns-5489b75945-cxmzl" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-ft29r	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-ft29r" is pending
Pod	kube-system/dns-controller-8574dcc89d-qh8t6	system-cluster-critical pod "dns-controller-8574dcc89d-qh8t6" is pending
Pod	kube-system/kops-controller-p6jbh		system-node-critical pod "kops-controller-p6jbh" is pending

Validation Failed
W0716 23:16:52.087708   17369 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c4.large	1	1	us-west-2a

... skipping 4 lines ...
VALIDATION ERRORS
KIND	NAME						MESSAGE
Machine	i-0d14192438ad433fa				machine "i-0d14192438ad433fa" has not yet joined cluster
Pod	kube-system/coredns-5489b75945-cxmzl		system-cluster-critical pod "coredns-5489b75945-cxmzl" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-ft29r	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-ft29r" is pending

Validation Failed
W0716 23:17:03.281472   17369 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c4.large	1	1	us-west-2a

... skipping 6 lines ...
KIND	NAME									MESSAGE
Node	ip-172-20-57-231.us-west-2.compute.internal				node "ip-172-20-57-231.us-west-2.compute.internal" is not ready
Pod	kube-system/coredns-5489b75945-cxmzl					system-cluster-critical pod "coredns-5489b75945-cxmzl" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-ft29r				system-cluster-critical pod "coredns-autoscaler-6f594f4c58-ft29r" is pending
Pod	kube-system/kube-proxy-ip-172-20-57-231.us-west-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-57-231.us-west-2.compute.internal" is pending

Validation Failed
W0716 23:17:14.381213   17369 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c4.large	1	1	us-west-2a

... skipping 4 lines ...

VALIDATION ERRORS
KIND	NAME						MESSAGE
Pod	kube-system/coredns-5489b75945-cxmzl		system-cluster-critical pod "coredns-5489b75945-cxmzl" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-ft29r	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-ft29r" is pending

Validation Failed
W0716 23:17:25.576063   17369 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c4.large	1	1	us-west-2a

... skipping 4 lines ...

VALIDATION ERRORS
KIND	NAME					MESSAGE
Pod	kube-system/coredns-5489b75945-9swxh	system-cluster-critical pod "coredns-5489b75945-9swxh" is not ready (coredns)
Pod	kube-system/coredns-5489b75945-cxmzl	system-cluster-critical pod "coredns-5489b75945-cxmzl" is not ready (coredns)

Validation Failed
W0716 23:17:36.802937   17369 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c4.large	1	1	us-west-2a

... skipping 177 lines ...
FSx CSI Driver Conformance
/home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:181
  [Driver: fsx.csi.aws.com]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:183
    [Testpattern: Dynamic PV (block volmode)] volumeMode
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:100
      should fail in binding dynamic provisioned PV to PVC [Slow] [BeforeEach]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/volumemode.go:248

      Driver fsx.csi.aws.com doesn't support DynamicPV -- skipping

      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:154
------------------------------
... skipping 209 lines ...
FSx CSI Driver Conformance
/home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:181
  [Driver: fsx.csi.aws.com]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:183
    [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:100
      should fail to use a volume in a pod with mismatched mode [Slow] [BeforeEach]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/volumemode.go:286

      Driver fsx.csi.aws.com doesn't support DynamicPV -- skipping

      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:154
------------------------------
... skipping 75 lines ...
FSx CSI Driver Conformance
/home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:181
  [Driver: fsx.csi.aws.com]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:183
    [Testpattern: Dynamic PV (block volmode)] volumeMode
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:100
      should fail to use a volume in a pod with mismatched mode [Slow] [BeforeEach]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/volumemode.go:286

      Driver fsx.csi.aws.com doesn't support DynamicPV -- skipping

      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:154
------------------------------
... skipping 55 lines ...
FSx CSI Driver Conformance
/home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:181
  [Driver: fsx.csi.aws.com]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:183
    [Testpattern: Inline-volume (default fs)] subPath
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:100
      should fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/subpath.go:251

      Driver fsx.csi.aws.com doesn't support InlineVolume -- skipping

      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:154
------------------------------
... skipping 63 lines ...
FSx CSI Driver Conformance
/home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:181
  [Driver: fsx.csi.aws.com]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:183
    [Testpattern: Inline-volume (default fs)] subPath
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:100
      should fail if subpath with backstepping is outside the volume [Slow] [BeforeEach]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/subpath.go:273

      Driver fsx.csi.aws.com doesn't support InlineVolume -- skipping

      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:154
------------------------------
... skipping 215 lines ...
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:100
      should support non-existent path
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/subpath.go:189
------------------------------
FSx CSI Driver Conformance [Driver: fsx.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath 
  should fail if subpath file is outside the volume [Slow][LinuxOnly]
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/subpath.go:251

[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 16 23:22:49.777: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-16668.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename provisioning
Jul 16 23:22:50.199: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if subpath file is outside the volume [Slow][LinuxOnly]
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/subpath.go:251
STEP: PrepareTest
Jul 16 23:22:50.318: INFO: Test running for native CSI Driver, not checking metrics
Jul 16 23:29:22.171: INFO: Creating resource for pre-provisioned PV
Jul 16 23:29:22.171: INFO: Creating PVC and PV
STEP: Creating a PVC followed by a PV
... skipping 8 lines ...
Jul 16 23:29:34.754: INFO: PersistentVolumeClaim pvc-4bfzw found but phase is Pending instead of Bound.
Jul 16 23:29:36.814: INFO: PersistentVolumeClaim pvc-4bfzw found but phase is Pending instead of Bound.
Jul 16 23:29:38.874: INFO: PersistentVolumeClaim pvc-4bfzw found and phase=Bound (16.560162886s)
Jul 16 23:29:38.875: INFO: Waiting up to 3m0s for PersistentVolume fsx.csi.aws.com-gkqdb to have phase Bound
Jul 16 23:29:38.934: INFO: PersistentVolume fsx.csi.aws.com-gkqdb found and phase=Bound (59.670944ms)
STEP: Creating pod pod-subpath-test-fsx-csi-aws-com-preprovisionedpv-rf44
STEP: Checking for subpath error in container status
Jul 16 23:29:43.237: INFO: Deleting pod "pod-subpath-test-fsx-csi-aws-com-preprovisionedpv-rf44" in namespace "provisioning-8447"
Jul 16 23:29:43.300: INFO: Wait up to 5m0s for pod "pod-subpath-test-fsx-csi-aws-com-preprovisionedpv-rf44" to be fully deleted
STEP: Deleting pod
Jul 16 23:29:53.422: INFO: Deleting pod "pod-subpath-test-fsx-csi-aws-com-preprovisionedpv-rf44" in namespace "provisioning-8447"
STEP: Deleting pv and pvc
Jul 16 23:29:53.482: INFO: Deleting PersistentVolumeClaim "pvc-4bfzw"
... skipping 9 lines ...
FSx CSI Driver Conformance
/home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:181
  [Driver: fsx.csi.aws.com]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:183
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:100
      should fail if subpath file is outside the volume [Slow][LinuxOnly]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/subpath.go:251
------------------------------
[BeforeEach] [Testpattern: Inline-volume (xfs)][Slow] volumes
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:101
Jul 16 23:29:53.856: INFO: Driver fsx.csi.aws.com doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (xfs)][Slow] volumes
... skipping 441 lines ...
FSx CSI Driver Conformance
/home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:181
  [Driver: fsx.csi.aws.com]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:183
    [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:100
      should fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/subpath.go:251

      Driver fsx.csi.aws.com doesn't support DynamicPV -- skipping

      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:154
------------------------------
... skipping 24 lines ...
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 16 23:30:42.968: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-16668.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename volumemode
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/volumemode.go:286
Jul 16 23:30:43.231: INFO: Driver "fsx.csi.aws.com" does not provide raw block - skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/framework/framework.go:152
Jul 16 23:30:43.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volumemode-9628" for this suite.
... skipping 3 lines ...
FSx CSI Driver Conformance
/home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:181
  [Driver: fsx.csi.aws.com]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:183
    [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:100
      should fail to use a volume in a pod with mismatched mode [Slow] [It]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/volumemode.go:286

      Driver "fsx.csi.aws.com" does not provide raw block - skipping

      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/volumes.go:99
------------------------------
... skipping 63 lines ...

      Driver fsx.csi.aws.com doesn't support InlineVolume -- skipping

      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:154
------------------------------
FSx CSI Driver Conformance [Driver: fsx.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath 
  should fail if subpath directory is outside the volume [Slow]
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/subpath.go:235

[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 16 23:29:17.106: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-16668.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if subpath directory is outside the volume [Slow]
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/subpath.go:235
STEP: PrepareTest
Jul 16 23:29:17.408: INFO: Test running for native CSI Driver, not checking metrics
Jul 16 23:36:48.752: INFO: Creating resource for pre-provisioned PV
Jul 16 23:36:48.752: INFO: Creating PVC and PV
STEP: Creating a PVC followed by a PV
... skipping 2 lines ...
Jul 16 23:36:48.966: INFO: PersistentVolumeClaim pvc-9cp7b found but phase is Pending instead of Bound.
Jul 16 23:36:51.040: INFO: PersistentVolumeClaim pvc-9cp7b found but phase is Pending instead of Bound.
Jul 16 23:36:53.109: INFO: PersistentVolumeClaim pvc-9cp7b found and phase=Bound (4.211518809s)
Jul 16 23:36:53.109: INFO: Waiting up to 3m0s for PersistentVolume fsx.csi.aws.com-hzx4s to have phase Bound
Jul 16 23:36:53.177: INFO: PersistentVolume fsx.csi.aws.com-hzx4s found and phase=Bound (68.464451ms)
STEP: Creating pod pod-subpath-test-fsx-csi-aws-com-preprovisionedpv-9h9z
STEP: Checking for subpath error in container status
Jul 16 23:36:55.522: INFO: Deleting pod "pod-subpath-test-fsx-csi-aws-com-preprovisionedpv-9h9z" in namespace "provisioning-4426"
Jul 16 23:36:55.592: INFO: Wait up to 5m0s for pod "pod-subpath-test-fsx-csi-aws-com-preprovisionedpv-9h9z" to be fully deleted
STEP: Deleting pod
Jul 16 23:37:03.730: INFO: Deleting pod "pod-subpath-test-fsx-csi-aws-com-preprovisionedpv-9h9z" in namespace "provisioning-4426"
STEP: Deleting pv and pvc
Jul 16 23:37:03.799: INFO: Deleting PersistentVolumeClaim "pvc-9cp7b"
... skipping 9 lines ...
FSx CSI Driver Conformance
/home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:181
  [Driver: fsx.csi.aws.com]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:183
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:100
      should fail if subpath directory is outside the volume [Slow]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/subpath.go:235
------------------------------
FSx CSI Driver Conformance [Driver: fsx.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] 
  should concurrently access the single volume from pods on the same node
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/multivolume.go:293

... skipping 589 lines ...
[AfterEach] [fsx-csi-e2e] Dynamic Provisioning with s3 data repository
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "fsx-3507".
STEP: Found 3 events.
Jul 16 23:39:45.731: INFO: At 2021-07-16 23:29:44 +0000 UTC - event for pvc-2vfvs: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "fsx.csi.aws.com" or manually created by system administrator
Jul 16 23:39:45.731: INFO: At 2021-07-16 23:29:44 +0000 UTC - event for pvc-2vfvs: {fsx.csi.aws.com_ip-172-20-57-231.us-west-2.compute.internal_1e872b89-f08d-457d-9b73-8764fefb2ca4 } Provisioning: External provisioner is provisioning volume for claim "fsx-3507/pvc-2vfvs"
Jul 16 23:39:45.731: INFO: At 2021-07-16 23:34:44 +0000 UTC - event for pvc-2vfvs: {fsx.csi.aws.com_ip-172-20-57-231.us-west-2.compute.internal_1e872b89-f08d-457d-9b73-8764fefb2ca4 } ProvisioningFailed: failed to provision volume with StorageClass "fsx-3507-fsx.csi.aws.com-dynamic-sc-8qvfz": rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jul 16 23:39:45.801: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jul 16 23:39:45.801: INFO: 
Jul 16 23:39:45.872: INFO: 
Logging node info for node ip-172-20-36-183.us-west-2.compute.internal
Jul 16 23:39:45.942: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-36-183.us-west-2.compute.internal    2f228477-0e1a-454c-9983-84f3cd323eea 3601 0 2021-07-16 23:16:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-west-2 failure-domain.beta.kubernetes.io/zone:us-west-2a kops.k8s.io/instancegroup:master-us-west-2a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-36-183.us-west-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/instance-type:t3.medium topology.kubernetes.io/region:us-west-2 topology.kubernetes.io/zone:us-west-2a] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-07-16 23:16:04 +0000 UTC FieldsV1 FieldsV1{Raw: (managed-fields byte dump elided)
123 92 34 116 121 112 101 92 34 58 92 34 77 101 109 111 114 121 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 73 68 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 100 97 101 109 111 110 69 110 100 112 111 105 110 116 115 34 58 123 34 102 58 107 117 98 101 108 101 116 69 110 100 112 111 105 110 116 34 58 123 34 102 58 80 111 114 116 34 58 123 125 125 125 44 34 102 58 105 109 97 103 101 115 34 58 123 125 44 34 102 58 110 111 100 101 73 110 102 111 34 58 123 34 102 58 97 114 99 104 105 116 101 99 116 117 114 101 34 58 123 125 44 34 102 58 98 111 111 116 73 68 34 58 123 125 44 34 102 58 99 111 110 116 97 
105 110 101 114 82 117 110 116 105 109 101 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 101 114 110 101 108 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 80 114 111 120 121 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 108 101 116 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 109 97 99 104 105 110 101 73 68 34 58 123 125 44 34 102 58 111 112 101 114 97 116 105 110 103 83 121 115 116 101 109 34 58 123 125 44 34 102 58 111 115 73 109 97 103 101 34 58 123 125 44 34 102 58 115 121 115 116 101 109 85 85 73 68 34 58 123 125 125 125 125],}} {protokube Update v1 2021-07-16 23:16:09 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 102 58 107 111 112 115 46 107 56 115 46 105 111 47 107 111 112 115 45 99 111 110 116 114 111 108 108 101 114 45 112 107 105 34 58 123 125 44 34 102 58 110 111 100 101 45 114 111 108 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 115 116 101 114 34 58 123 125 125 125 125],}} {kube-controller-manager Update v1 2021-07-16 23:16:22 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 111 100 101 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 116 116 108 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 111 100 67 73 68 82 34 58 123 125 44 34 102 58 112 111 100 67 73 68 82 115 34 58 123 34 46 34 58 123 125 44 34 118 58 92 34 49 48 48 46 57 54 46 48 46 48 47 50 52 92 34 34 58 123 125 125 44 34 102 58 116 97 105 110 116 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 78 101 116 119 111 114 107 85 110 97 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 
116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 125 125],}} {kops-controller Update v1 2021-07-16 23:16:54 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 102 58 107 111 112 115 46 107 56 115 46 105 111 47 105 110 115 116 97 110 99 101 103 114 111 117 112 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 111 108 101 34 58 123 125 44 34 102 58 110 111 100 101 45 114 111 108 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 111 110 116 114 111 108 45 112 108 97 110 101 34 58 123 125 125 125 125],}}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///us-west-2a/i-08b9924af8793a99f,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{66549473280 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4064690176 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{59894525853 0} {<nil>} 59894525853 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3959832576 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-07-16 23:16:20 +0000 UTC,LastTransitionTime:2021-07-16 23:16:20 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-07-16 23:37:10 +0000 UTC,LastTransitionTime:2021-07-16 23:15:56 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-07-16 23:37:10 +0000 UTC,LastTransitionTime:2021-07-16 23:15:56 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-07-16 23:37:10 +0000 UTC,LastTransitionTime:2021-07-16 23:15:56 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-07-16 23:37:10 +0000 UTC,LastTransitionTime:2021-07-16 23:16:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.36.183,},NodeAddress{Type:ExternalIP,Address:54.191.112.32,},NodeAddress{Type:Hostname,Address:ip-172-20-36-183.us-west-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-36-183.us-west-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-191-112-32.us-west-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2955c99f02cd2b29d8f14c5a37615f,SystemUUID:ec2955c9-9f02-cd2b-29d8-f14c5a37615f,BootID:d041dc58-038a-4d8e-8d2d-3ae9db329d20,KernelVersion:5.4.0-1047-aws,OSImage:Ubuntu 20.04.2 LTS,ContainerRuntimeVersion:containerd://1.4.4,KubeletVersion:v1.20.6,KubeProxyVersion:v1.20.6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/protokube:1.20.0],SizeBytes:211600900,},ContainerImage{Names:[docker.io/kopeio/etcd-manager@sha256:302fcbff5dd7ce5ad8cdf6dd4bcf4c2931ab5bcac24a440dccaea57cecaedbdf docker.io/kopeio/etcd-manager:3.0.20210228],SizeBytes:166802389,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:7c1710c965f55bca8d06ebd8d5774ecd9ef924f33fb024e424c2b9b565f477dc k8s.gcr.io/kube-proxy:v1.20.6],SizeBytes:49538907,},ContainerImage{Names:[k8s.gcr.io/kops/kops-controller@sha256:8a0021f9bc47e222533ab1ee243bedf7fe0a73ee935a86b026cde0faf396c03d k8s.gcr.io/kops/kops-controller:1.20.0],SizeBytes:40291151,},ContainerImage{Names:[k8s.gcr.io/kops/dns-controller@sha256:a4879f05bd93b8e6f3b27c92a9e41e3d23679cf1f48cbba688a3e0e134124e9e k8s.gcr.io/kops/dns-controller:1.20.0],SizeBytes:39014358,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e6d960baa4219fa810ee26da8fe8a92a1cf9dae83b6ad8bda0e17ee159c68501 k8s.gcr.io/kube-apiserver:v1.20.6],SizeBytes:30450356,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:a1a6e8dbcf0294175df5f248503c8792b3770c53535670e44a7724718fc93e87 
k8s.gcr.io/kube-controller-manager:v1.20.6],SizeBytes:29543814,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:ebb0350893fcfe7328140452f8a88ce682ec6f00337015a055d51b3fe0373429 k8s.gcr.io/kube-scheduler:v1.20.6],SizeBytes:14238397,},ContainerImage{Names:[k8s.gcr.io/kops/kube-apiserver-healthcheck@sha256:b6f9f4c7bc590fb469f368bb7692852a5a8bc6264cad3d0d5cdf5dbe931fef61 k8s.gcr.io/kops/kube-apiserver-healthcheck:1.20.0],SizeBytes:11824823,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jul 16 23:39:45.943: INFO: 
... skipping 55 lines ...
• Failure [604.565 seconds]
[fsx-csi-e2e] Dynamic Provisioning with s3 data repository
/home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/dynamic_provisioning_test.go:111
  should create a volume on demand with s3 as data repository [It]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/dynamic_provisioning_test.go:151

  Unexpected error:
      <*errors.errorString | 0xc0007d2020>: {
          s: "PersistentVolumeClaims [pvc-2vfvs] not all in phase Bound within 10m0s",
      }
      PersistentVolumeClaims [pvc-2vfvs] not all in phase Bound within 10m0s
  occurred

... skipping 11 lines ...
FSx CSI Driver Conformance
/home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:181
  [Driver: fsx.csi.aws.com]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:183
    [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:100
      should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/subpath.go:262

      Driver fsx.csi.aws.com doesn't support DynamicPV -- skipping

      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:154
------------------------------
... skipping 246 lines ...
Jul 16 23:44:19.793: INFO: PersistentVolumeClaim pvc-xd8rj found but phase is Pending instead of Bound.
Jul 16 23:44:21.858: INFO: PersistentVolumeClaim pvc-xd8rj found and phase=Bound (7m1.311134797s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
Jul 16 23:44:22.050: INFO: Waiting up to 15m0s for pod "fsx-volume-tester-v4wmv" in namespace "fsx-8184" to be "success or failure"
Jul 16 23:44:22.114: INFO: Pod "fsx-volume-tester-v4wmv": Phase="Pending", Reason="", readiness=false. Elapsed: 64.019047ms
Jul 16 23:44:24.178: INFO: Pod "fsx-volume-tester-v4wmv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.127710689s
STEP: Saw pod success
Jul 16 23:44:24.178: INFO: Pod "fsx-volume-tester-v4wmv" satisfied condition "success or failure"
Jul 16 23:44:24.178: INFO: deleting Pod "fsx-8184"/"fsx-volume-tester-v4wmv"
... skipping 219 lines ...
Jul 16 23:44:47.100: INFO: PersistentVolumeClaim pvc-q8jb2 found but phase is Pending instead of Bound.
Jul 16 23:44:49.172: INFO: PersistentVolumeClaim pvc-q8jb2 found and phase=Bound (6m2.788691871s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
Jul 16 23:44:49.387: INFO: Waiting up to 15m0s for pod "fsx-volume-tester-svmqf" in namespace "fsx-975" to be "success or failure"
Jul 16 23:44:49.458: INFO: Pod "fsx-volume-tester-svmqf": Phase="Pending", Reason="", readiness=false. Elapsed: 70.873918ms
Jul 16 23:44:51.530: INFO: Pod "fsx-volume-tester-svmqf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.143042782s
STEP: Saw pod success
Jul 16 23:44:51.530: INFO: Pod "fsx-volume-tester-svmqf" satisfied condition "success or failure"
Jul 16 23:44:51.530: INFO: deleting Pod "fsx-975"/"fsx-volume-tester-svmqf"
... skipping 143 lines ...
FSx CSI Driver Conformance
/home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:181
  [Driver: fsx.csi.aws.com]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:183
    [Testpattern: Dynamic PV (default fs)] subPath
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:100
      should fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/subpath.go:251

      Driver fsx.csi.aws.com doesn't support DynamicPV -- skipping

      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:154
------------------------------
... skipping 115 lines ...
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 16 23:45:53.849: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-16668.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename volumemode
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/volumemode.go:286
Jul 16 23:45:54.171: INFO: Driver "fsx.csi.aws.com" does not provide raw block - skipping
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/framework/framework.go:152
Jul 16 23:45:54.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volumemode-8832" for this suite.
... skipping 3 lines ...
FSx CSI Driver Conformance
/home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:181
  [Driver: fsx.csi.aws.com]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:183
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:100
      should fail to use a volume in a pod with mismatched mode [Slow] [It]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/volumemode.go:286

      Driver "fsx.csi.aws.com" does not provide raw block - skipping

      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/volumes.go:99
------------------------------
... skipping 17 lines ...

      Driver fsx.csi.aws.com doesn't support DynamicPV -- skipping

      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:154
------------------------------
FSx CSI Driver Conformance [Driver: fsx.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath 
  should fail if subpath with backstepping is outside the volume [Slow]
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/subpath.go:273

[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 16 23:39:47.675: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-16668.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if subpath with backstepping is outside the volume [Slow]
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/subpath.go:273
STEP: PrepareTest
Jul 16 23:39:48.028: INFO: Test running for native CSI Driver, not checking metrics
Jul 16 23:46:19.918: INFO: Creating resource for pre-provisioned PV
Jul 16 23:46:19.918: INFO: Creating PVC and PV
STEP: Creating a PVC followed by a PV
Jul 16 23:46:20.048: INFO: Waiting for PV fsx.csi.aws.com-r7wdt to bind to PVC pvc-5qtq2
Jul 16 23:46:20.048: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-5qtq2] to have phase Bound
Jul 16 23:46:20.109: INFO: PersistentVolumeClaim pvc-5qtq2 found but phase is Pending instead of Bound.
Jul 16 23:46:22.170: INFO: PersistentVolumeClaim pvc-5qtq2 found and phase=Bound (2.122430321s)
Jul 16 23:46:22.170: INFO: Waiting up to 3m0s for PersistentVolume fsx.csi.aws.com-r7wdt to have phase Bound
Jul 16 23:46:22.232: INFO: PersistentVolume fsx.csi.aws.com-r7wdt found and phase=Bound (61.083246ms)
STEP: Creating pod pod-subpath-test-fsx-csi-aws-com-preprovisionedpv-5phb
STEP: Checking for subpath error in container status
Jul 16 23:46:26.541: INFO: Deleting pod "pod-subpath-test-fsx-csi-aws-com-preprovisionedpv-5phb" in namespace "provisioning-6055"
Jul 16 23:46:26.606: INFO: Wait up to 5m0s for pod "pod-subpath-test-fsx-csi-aws-com-preprovisionedpv-5phb" to be fully deleted
STEP: Deleting pod
Jul 16 23:46:32.730: INFO: Deleting pod "pod-subpath-test-fsx-csi-aws-com-preprovisionedpv-5phb" in namespace "provisioning-6055"
STEP: Deleting pv and pvc
Jul 16 23:46:32.791: INFO: Deleting PersistentVolumeClaim "pvc-5qtq2"
... skipping 9 lines ...
FSx CSI Driver Conformance
/home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:181
  [Driver: fsx.csi.aws.com]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:183
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:100
      should fail if subpath with backstepping is outside the volume [Slow]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/subpath.go:273
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:101
Jul 16 23:46:33.169: INFO: Driver fsx.csi.aws.com doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 330 lines ...
FSx CSI Driver Conformance
/home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:181
  [Driver: fsx.csi.aws.com]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:183
    [Testpattern: Dynamic PV (default fs)] subPath
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:100
      should fail if subpath with backstepping is outside the volume [Slow] [BeforeEach]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/subpath.go:273

      Driver fsx.csi.aws.com doesn't support DynamicPV -- skipping

      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:154
------------------------------
... skipping 77 lines ...
FSx CSI Driver Conformance
/home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:181
  [Driver: fsx.csi.aws.com]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:183
    [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:100
      should fail if subpath with backstepping is outside the volume [Slow] [BeforeEach]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/subpath.go:273

      Driver fsx.csi.aws.com doesn't support DynamicPV -- skipping

      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:154
------------------------------
... skipping 164 lines ...
FSx CSI Driver Conformance
/home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:181
  [Driver: fsx.csi.aws.com]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:183
    [Testpattern: Inline-volume (default fs)] subPath
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:100
      should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/subpath.go:262

      Driver fsx.csi.aws.com doesn't support InlineVolume -- skipping

      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:154
------------------------------
... skipping 130 lines ...
FSx CSI Driver Conformance
/home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:181
  [Driver: fsx.csi.aws.com]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:183
    [Testpattern: Dynamic PV (default fs)] subPath
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:100
      should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/subpath.go:262

      Driver fsx.csi.aws.com doesn't support DynamicPV -- skipping

      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:154
------------------------------
... skipping 143 lines ...

      Driver fsx.csi.aws.com doesn't support DynamicPV -- skipping

      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:154
------------------------------
FSx CSI Driver Conformance [Driver: fsx.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath 
  should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/subpath.go:262

[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 16 23:53:17.309: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-16668.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/subpath.go:262
STEP: PrepareTest
Jul 16 23:53:17.622: INFO: Test running for native CSI Driver, not checking metrics
Jul 17 00:00:19.139: INFO: Creating resource for pre-provisioned PV
Jul 17 00:00:19.140: INFO: Creating PVC and PV
STEP: Creating a PVC followed by a PV
... skipping 2 lines ...
Jul 17 00:00:19.327: INFO: PersistentVolumeClaim pvc-ttljg found but phase is Pending instead of Bound.
Jul 17 00:00:21.390: INFO: PersistentVolumeClaim pvc-ttljg found but phase is Pending instead of Bound.
Jul 17 00:00:23.452: INFO: PersistentVolumeClaim pvc-ttljg found and phase=Bound (4.18554118s)
Jul 17 00:00:23.452: INFO: Waiting up to 3m0s for PersistentVolume fsx.csi.aws.com-pgfk4 to have phase Bound
Jul 17 00:00:23.513: INFO: PersistentVolume fsx.csi.aws.com-pgfk4 found and phase=Bound (61.563835ms)
STEP: Creating pod pod-subpath-test-fsx-csi-aws-com-preprovisionedpv-g6p8
STEP: Checking for subpath error in container status
Jul 17 00:00:27.827: INFO: Deleting pod "pod-subpath-test-fsx-csi-aws-com-preprovisionedpv-g6p8" in namespace "provisioning-7775"
Jul 17 00:00:27.905: INFO: Wait up to 5m0s for pod "pod-subpath-test-fsx-csi-aws-com-preprovisionedpv-g6p8" to be fully deleted
STEP: Deleting pod
Jul 17 00:00:34.028: INFO: Deleting pod "pod-subpath-test-fsx-csi-aws-com-preprovisionedpv-g6p8" in namespace "provisioning-7775"
STEP: Deleting pv and pvc
Jul 17 00:00:34.090: INFO: Deleting PersistentVolumeClaim "pvc-ttljg"
... skipping 9 lines ...
FSx CSI Driver Conformance
/home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:181
  [Driver: fsx.csi.aws.com]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:183
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:100
      should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/subpath.go:262
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumeIO
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:101
... skipping 161 lines ...
FSx CSI Driver Conformance
/home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:181
  [Driver: fsx.csi.aws.com]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:183
    [Testpattern: Inline-volume (default fs)] subPath
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:100
      should fail if subpath directory is outside the volume [Slow] [BeforeEach]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/subpath.go:235

      Driver fsx.csi.aws.com doesn't support InlineVolume -- skipping

      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:154
------------------------------
... skipping 231 lines ...
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:100
      should not mount / map unused volumes in a pod
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/volumemode.go:333
------------------------------
FSx CSI Driver Conformance [Driver: fsx.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode 
  should fail to create pod by failing to mount volume [Slow]
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/volumemode.go:194

[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 16 23:52:15.052: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-16668.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename volumemode
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create pod by failing to mount volume [Slow]
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/volumemode.go:194
STEP: PrepareTest
Jul 16 23:52:15.388: INFO: Test running for native CSI Driver, not checking metrics
STEP: Creating sc
STEP: Creating pv and pvc
Jul 16 23:58:47.023: INFO: Waiting for PV pvpv7nm to bind to PVC pvc-b8w45
... skipping 22 lines ...
FSx CSI Driver Conformance
/home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:181
  [Driver: fsx.csi.aws.com]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:183
    [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:100
      should fail to create pod by failing to mount volume [Slow]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/volumemode.go:194
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:101
... skipping 7 lines ...
FSx CSI Driver Conformance
/home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:181
  [Driver: fsx.csi.aws.com]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:183
    [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:100
      should fail if subpath directory is outside the volume [Slow] [BeforeEach]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/subpath.go:235

      Driver fsx.csi.aws.com doesn't support DynamicPV -- skipping

      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:154
------------------------------
... skipping 244 lines ...
FSx CSI Driver Conformance
/home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:181
  [Driver: fsx.csi.aws.com]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:183
    [Testpattern: Dynamic PV (default fs)] subPath
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:100
      should fail if subpath directory is outside the volume [Slow] [BeforeEach]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/subpath.go:235

      Driver fsx.csi.aws.com doesn't support DynamicPV -- skipping

      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/base.go:154
------------------------------
... skipping 158 lines ...
Jul 17 00:09:11.417: INFO: Pod "volume-prep-provisioning-162": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.121689705s
STEP: Saw pod success
Jul 17 00:09:11.417: INFO: Pod "volume-prep-provisioning-162" satisfied condition "success or failure"
Jul 17 00:09:11.417: INFO: Deleting pod "volume-prep-provisioning-162" in namespace "provisioning-162"
Jul 17 00:09:11.485: INFO: Wait up to 5m0s for pod "volume-prep-provisioning-162" to be fully deleted
STEP: Creating pod pod-subpath-test-fsx-csi-aws-com-preprovisionedpv-b4vs
STEP: Checking for subpath error in container status
Jul 17 00:09:13.733: INFO: Deleting pod "pod-subpath-test-fsx-csi-aws-com-preprovisionedpv-b4vs" in namespace "provisioning-162"
Jul 17 00:09:13.796: INFO: Wait up to 5m0s for pod "pod-subpath-test-fsx-csi-aws-com-preprovisionedpv-b4vs" to be fully deleted
STEP: Deleting pod
Jul 17 00:09:15.917: INFO: Deleting pod "pod-subpath-test-fsx-csi-aws-com-preprovisionedpv-b4vs" in namespace "provisioning-162"
STEP: Deleting pv and pvc
Jul 17 00:09:15.978: INFO: Deleting PersistentVolumeClaim "pvc-z55lz"
... skipping 16 lines ...
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.17.0/test/e2e/storage/testsuites/subpath.go:417
------------------------------


Summarizing 1 Failure:

[Fail] [fsx-csi-e2e] Dynamic Provisioning with s3 data repository [It] should create a volume on demand with s3 as data repository 
/home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/testsuites/testsuites.go:155

Ran 24 of 150 Specs in 2786.587 seconds
FAIL! -- 23 Passed | 1 Failed | 0 Pending | 126 Skipped


Ginkgo ran 1 suite in 50m45.233720079s
Test Suite Failed
+ TEST_PASSED=1
+ set -e
+ set +x
###
## TEST_PASSED: 1
#
###
## Printing pod fsx-csi-controller-5b78b8d547-qrj6z fsx-plugin container logs
#
I0716 23:18:10.911640       1 driver.go:99] Listening for connections on address: &net.UnixAddr{Name:"/var/lib/csi/sockets/pluginproxy/csi.sock", Net:"unix"}
E0716 23:34:46.074363       1 driver.go:86] GRPC error: rpc error: code = Internal desc = Filesystem is not ready: RequestCanceled: request context canceled
caused by: context canceled
E0716 23:39:44.963273       1 driver.go:86] GRPC error: rpc error: code = Internal desc = Filesystem is not ready: RequestCanceled: request context canceled
caused by: context canceled
E0716 23:40:45.000599       1 driver.go:86] GRPC error: rpc error: code = Internal desc = Filesystem is not ready: unexpected state for filesystem fs-0e5640bb4eec6c419: "FAILED"
E0716 23:42:21.341258       1 driver.go:86] GRPC error: rpc error: code = Internal desc = Filesystem is not ready: RequestCanceled: request context canceled
caused by: context canceled
E0716 23:43:47.530126       1 driver.go:86] GRPC error: rpc error: code = Internal desc = Filesystem is not ready: RequestCanceled: request context canceled
caused by: context deadline exceeded
###
## Printing pod fsx-csi-node-zqkgz fsx-plugin container logs
#
I0716 23:18:10.828662       1 driver.go:99] Listening for connections on address: &net.UnixAddr{Name:"/csi/csi.sock", Net:"unix"}
E0716 23:58:54.329983       1 mount_linux.go:140] Mount failed: exit status 17
Mounting command: mount
Mounting arguments: -t lustre fs-08a7995e3a17bdc92.fsx.us-west-2.amazonaws.com@tcp:/fsx /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvpv7nm/2be52611-49c7-4262-9bf5-fe98cd40dceb
Output: mount.lustre: according to /etc/mtab fs-08a7995e3a17bdc92.fsx.us-west-2.amazonaws.com@tcp:/fsx is already mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvpv7nm/2be52611-49c7-4262-9bf5-fe98cd40dceb

E0716 23:58:54.330287       1 driver.go:86] GRPC error: rpc error: code = Internal desc = Could not mount "fs-08a7995e3a17bdc92.fsx.us-west-2.amazonaws.com@tcp:/fsx" at "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvpv7nm/2be52611-49c7-4262-9bf5-fe98cd40dceb": mount failed: exit status 17
Mounting command: mount
Mounting arguments: -t lustre fs-08a7995e3a17bdc92.fsx.us-west-2.amazonaws.com@tcp:/fsx /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvpv7nm/2be52611-49c7-4262-9bf5-fe98cd40dceb
Output: mount.lustre: according to /etc/mtab fs-08a7995e3a17bdc92.fsx.us-west-2.amazonaws.com@tcp:/fsx is already mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvpv7nm/2be52611-49c7-4262-9bf5-fe98cd40dceb
... skipping 63 lines: the same exit status 17 "already mounted" mount.lustre error for fs-08a7995e3a17bdc92 repeated with backoff until 00:01:01 ...
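Exit status 17 from mount.lustre means the target already appears in /etc/mtab, so the repeated failures above are a symptom of NodePublishVolume retrying a mount that already succeeded. An idempotent publish path checks mount state before mounting. A simplified sketch of that check (a robust implementation, such as the k8s.io/utils mount helpers the driver depends on, also resolves symlinks and bind mounts):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// alreadyMounted reports whether target appears as a mount point in the
// given mtab / /proc/mounts content. Simplified sketch: exact string
// comparison on the mount-point field only.
func alreadyMounted(mtab, target string) bool {
	sc := bufio.NewScanner(strings.NewReader(mtab))
	for sc.Scan() {
		// mtab format: device mountpoint fstype options dump pass
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[1] == target {
			return true
		}
	}
	return false
}

func main() {
	// Illustrative mtab line mirroring the log above (shortened target path).
	mtab := "fs-08a7995e3a17bdc92.fsx.us-west-2.amazonaws.com@tcp:/fsx /var/lib/kubelet/publish/pvpv7nm lustre rw 0 0\n"
	target := "/var/lib/kubelet/publish/pvpv7nm"
	if alreadyMounted(mtab, target) {
		fmt.Println("target already mounted; skipping mount (avoids exit status 17)")
	}
}
```

With such a check in place, a retried NodePublishVolume for an already-published target returns success instead of looping on the mount error seen here.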
###
## Cleaning
#
... skipping 676 lines ...
	security-group:sg-052ccc42f79e6956b
	vpc:vpc-039080c847230bd0f
	route-table:rtb-04854d31d5fc07666
	subnet:subnet-04969971ee6f305f5

not making progress deleting resources; giving up
make: *** [Makefile:52: test-e2e] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: dockerProgram process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...