PR       vdhanan: release v1.0.0
Result   FAILURE
Tests    1 failed / 56 succeeded
Started  2021-04-30 23:54
Elapsed  31m55s
Revision 7c693a515ef3349080a6c33a23f3e0a244df89a3
Refs     865

Test Failures


EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion 2m3s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=EBS\sCSI\sMigration\sSuite\sExternal\sStorage\s\[Driver\:\sebs\.csi\.aws\.com\]\s\[Testpattern\:\sPre\-provisioned\sSnapshot\s\(retain\spolicy\)\]\ssnapshottable\[Feature\:VolumeSnapshotDataSource\]\svolume\ssnapshot\scontroller\s\sshould\scheck\ssnapshot\sfields\,\scheck\srestore\scorrectly\sworks\safter\smodifying\ssource\sdata\,\scheck\sdeletion$'
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running /usr/local/bin/kubectl --server=https://api-test-cluster-17373-k8-dj5e2d-1793600245.us-west-2.elb.amazonaws.com --kubeconfig=/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-17373.k8s.local.kubeconfig --namespace=snapshotting-6240 exec restored-pvc-tester-bfhjn --namespace=snapshotting-6240 -- cat /mnt/test/data:\nCommand stdout:\n\nstderr:\ncat: can't open '/mnt/test/data': No such file or directory\ncommand terminated with exit code 1\n\nerror:\nexit status 1",
        },
        Code: 1,
    }
    error running /usr/local/bin/kubectl --server=https://api-test-cluster-17373-k8-dj5e2d-1793600245.us-west-2.elb.amazonaws.com --kubeconfig=/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-17373.k8s.local.kubeconfig --namespace=snapshotting-6240 exec restored-pvc-tester-bfhjn --namespace=snapshotting-6240 -- cat /mnt/test/data:
    Command stdout:
    
    stderr:
    cat: can't open '/mnt/test/data': No such file or directory
    command terminated with exit code 1
    
    error:
    exit status 1
occurred
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:602

stdout/stderr from junit_04.xml
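The failing step is the kubectl exec that reads /mnt/test/data from the pod backed by the snapshot-restored PVC; the file is missing on the restored volume. For manual triage, a minimal sketch of the checks one might run while the test namespace still exists (the namespace snapshotting-6240 and pod restored-pvc-tester-bfhjn are taken from this run and will differ on a rerun; keeping the namespace alive after a failure, e.g. by disabling namespace deletion on failure in the harness, is assumed):

# Hedged triage sketch -- object names below are from this run, substitute your own:
kubectl -n snapshotting-6240 get volumesnapshot,volumesnapshotcontent        # was the snapshot ReadyToUse before restore?
kubectl -n snapshotting-6240 get pvc                                         # is the restored PVC Bound?
kubectl -n snapshotting-6240 describe pod restored-pvc-tester-bfhjn          # mount/attach events for the restored volume
kubectl -n snapshotting-6240 exec restored-pvc-tester-bfhjn -- ls -la /mnt/test   # is the restored volume simply empty?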



Show 56 Passed Tests

Show 428 Skipped Tests

Error lines from build-log.txt

... skipping 769 lines ...
## Validating cluster test-cluster-17373.k8s.local
#
Using cluster from kubectl context: test-cluster-17373.k8s.local

Validating cluster test-cluster-17373.k8s.local

W0430 23:57:54.920978    9273 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-17373-k8-dj5e2d-1793600245.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-17373-k8-dj5e2d-1793600245.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0430 23:58:04.972787    9273 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-17373-k8-dj5e2d-1793600245.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-17373-k8-dj5e2d-1793600245.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0430 23:58:15.007426    9273 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-17373-k8-dj5e2d-1793600245.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-17373-k8-dj5e2d-1793600245.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0430 23:58:25.046032    9273 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-17373-k8-dj5e2d-1793600245.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-17373-k8-dj5e2d-1793600245.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0430 23:58:35.096513    9273 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-17373-k8-dj5e2d-1793600245.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-17373-k8-dj5e2d-1793600245.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0430 23:58:45.129581    9273 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-17373-k8-dj5e2d-1793600245.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-17373-k8-dj5e2d-1793600245.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0430 23:58:55.169939    9273 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-17373-k8-dj5e2d-1793600245.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-17373-k8-dj5e2d-1793600245.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0430 23:59:17.004516    9273 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the server ("") has prevented the request from succeeding (get nodes)
W0430 23:59:38.674143    9273 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the server ("") has prevented the request from succeeding (get nodes)
W0501 00:00:00.320729    9273 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the server ("") has prevented the request from succeeding (get nodes)
W0501 00:00:22.221202    9273 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the server ("") has prevented the request from succeeding (get nodes)
W0501 00:00:43.900818    9273 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the server ("") has prevented the request from succeeding (get nodes)
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c4.large	3	3	us-west-2a

NODE STATUS
... skipping 5 lines ...
Machine	i-052f833e5dfe87546							machine "i-052f833e5dfe87546" has not yet joined cluster
Machine	i-0b078b3fe0a3aca4a							machine "i-0b078b3fe0a3aca4a" has not yet joined cluster
Machine	i-0bb146f105471fb17							machine "i-0bb146f105471fb17" has not yet joined cluster
Node	ip-172-20-54-18.us-west-2.compute.internal				master "ip-172-20-54-18.us-west-2.compute.internal" is not ready
Pod	kube-system/kube-proxy-ip-172-20-54-18.us-west-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-54-18.us-west-2.compute.internal" is pending

Validation Failed
W0501 00:00:55.613808    9273 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c4.large	3	3	us-west-2a

... skipping 8 lines ...
Machine	i-0bb146f105471fb17				machine "i-0bb146f105471fb17" has not yet joined cluster
Node	ip-172-20-54-18.us-west-2.compute.internal	master "ip-172-20-54-18.us-west-2.compute.internal" is not ready
Pod	kube-system/coredns-5489b75945-dsz9c		system-cluster-critical pod "coredns-5489b75945-dsz9c" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-2mzv4	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-2mzv4" is pending
Pod	kube-system/dns-controller-8574dcc89d-8q9d4	system-cluster-critical pod "dns-controller-8574dcc89d-8q9d4" is pending

Validation Failed
W0501 00:01:06.771406    9273 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c4.large	3	3	us-west-2a

... skipping 7 lines ...
Machine	i-0b078b3fe0a3aca4a				machine "i-0b078b3fe0a3aca4a" has not yet joined cluster
Machine	i-0bb146f105471fb17				machine "i-0bb146f105471fb17" has not yet joined cluster
Pod	kube-system/coredns-5489b75945-dsz9c		system-cluster-critical pod "coredns-5489b75945-dsz9c" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-2mzv4	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-2mzv4" is pending
Pod	kube-system/dns-controller-8574dcc89d-8q9d4	system-cluster-critical pod "dns-controller-8574dcc89d-8q9d4" is pending

Validation Failed
W0501 00:01:17.914250    9273 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c4.large	3	3	us-west-2a

... skipping 7 lines ...
Machine	i-0b078b3fe0a3aca4a				machine "i-0b078b3fe0a3aca4a" has not yet joined cluster
Machine	i-0bb146f105471fb17				machine "i-0bb146f105471fb17" has not yet joined cluster
Pod	kube-system/coredns-5489b75945-dsz9c		system-cluster-critical pod "coredns-5489b75945-dsz9c" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-2mzv4	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-2mzv4" is pending
Pod	kube-system/dns-controller-8574dcc89d-8q9d4	system-cluster-critical pod "dns-controller-8574dcc89d-8q9d4" is pending

Validation Failed
W0501 00:01:29.065535    9273 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c4.large	3	3	us-west-2a

... skipping 7 lines ...
Machine	i-0b078b3fe0a3aca4a				machine "i-0b078b3fe0a3aca4a" has not yet joined cluster
Machine	i-0bb146f105471fb17				machine "i-0bb146f105471fb17" has not yet joined cluster
Pod	kube-system/coredns-5489b75945-dsz9c		system-cluster-critical pod "coredns-5489b75945-dsz9c" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-2mzv4	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-2mzv4" is pending
Pod	kube-system/dns-controller-8574dcc89d-8q9d4	system-cluster-critical pod "dns-controller-8574dcc89d-8q9d4" is pending

Validation Failed
W0501 00:01:40.147404    9273 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c4.large	3	3	us-west-2a

... skipping 7 lines ...
Machine	i-0b078b3fe0a3aca4a				machine "i-0b078b3fe0a3aca4a" has not yet joined cluster
Machine	i-0bb146f105471fb17				machine "i-0bb146f105471fb17" has not yet joined cluster
Pod	kube-system/coredns-5489b75945-dsz9c		system-cluster-critical pod "coredns-5489b75945-dsz9c" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-2mzv4	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-2mzv4" is pending
Pod	kube-system/dns-controller-8574dcc89d-8q9d4	system-cluster-critical pod "dns-controller-8574dcc89d-8q9d4" is pending

Validation Failed
W0501 00:01:51.287591    9273 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c4.large	3	3	us-west-2a

... skipping 10 lines ...
Node	ip-172-20-54-18.us-west-2.compute.internal					master "ip-172-20-54-18.us-west-2.compute.internal" is missing kube-controller-manager pod
Pod	kube-system/coredns-5489b75945-dsz9c						system-cluster-critical pod "coredns-5489b75945-dsz9c" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-2mzv4					system-cluster-critical pod "coredns-autoscaler-6f594f4c58-2mzv4" is pending
Pod	kube-system/etcd-manager-main-ip-172-20-54-18.us-west-2.compute.internal	system-cluster-critical pod "etcd-manager-main-ip-172-20-54-18.us-west-2.compute.internal" is pending
Pod	kube-system/kube-controller-manager-ip-172-20-54-18.us-west-2.compute.internal	system-cluster-critical pod "kube-controller-manager-ip-172-20-54-18.us-west-2.compute.internal" is pending

Validation Failed
W0501 00:02:02.441814    9273 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c4.large	3	3	us-west-2a

... skipping 8 lines ...
Machine	i-0bb146f105471fb17							machine "i-0bb146f105471fb17" has not yet joined cluster
Node	ip-172-20-54-18.us-west-2.compute.internal				master "ip-172-20-54-18.us-west-2.compute.internal" is missing kube-apiserver pod
Pod	kube-system/coredns-5489b75945-dsz9c					system-cluster-critical pod "coredns-5489b75945-dsz9c" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-2mzv4				system-cluster-critical pod "coredns-autoscaler-6f594f4c58-2mzv4" is pending
Pod	kube-system/kube-apiserver-ip-172-20-54-18.us-west-2.compute.internal	system-cluster-critical pod "kube-apiserver-ip-172-20-54-18.us-west-2.compute.internal" is pending

Validation Failed
W0501 00:02:13.617563    9273 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c4.large	3	3	us-west-2a

... skipping 11 lines ...
Pod	kube-system/coredns-5489b75945-dsz9c						system-cluster-critical pod "coredns-5489b75945-dsz9c" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-2mzv4					system-cluster-critical pod "coredns-autoscaler-6f594f4c58-2mzv4" is pending
Pod	kube-system/etcd-manager-events-ip-172-20-54-18.us-west-2.compute.internal	system-cluster-critical pod "etcd-manager-events-ip-172-20-54-18.us-west-2.compute.internal" is pending
Pod	kube-system/kube-proxy-ip-172-20-36-126.us-west-2.compute.internal		system-node-critical pod "kube-proxy-ip-172-20-36-126.us-west-2.compute.internal" is pending
Pod	kube-system/kube-proxy-ip-172-20-43-157.us-west-2.compute.internal		system-node-critical pod "kube-proxy-ip-172-20-43-157.us-west-2.compute.internal" is pending

Validation Failed
W0501 00:02:24.771025    9273 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c4.large	3	3	us-west-2a

... skipping 6 lines ...

VALIDATION ERRORS
KIND	NAME					MESSAGE
Pod	kube-system/coredns-5489b75945-974j5	system-cluster-critical pod "coredns-5489b75945-974j5" is pending
Pod	kube-system/coredns-5489b75945-dsz9c	system-cluster-critical pod "coredns-5489b75945-dsz9c" is not ready (coredns)

Validation Failed
W0501 00:02:36.080924    9273 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c4.large	3	3	us-west-2a

... skipping 152 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: ebs.csi.aws.com]
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail to use a volume in a pod with mismatched mode [Slow] [BeforeEach]
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296

    Driver "ebs.csi.aws.com" does not support volume type "PreprovisionedPV" - skipping

    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
SSSSSS
------------------------------
External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath 
  should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267

[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  1 00:04:41.314: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-17373.k8s.local.kubeconfig
STEP: Building a namespace api object, basename provisioning
May  1 00:04:41.752: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267
May  1 00:04:41.882: INFO: Creating resource for dynamic PV
May  1 00:04:41.882: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(ebs.csi.aws.com) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-8911-e2e-sc4kdnh
STEP: creating a claim
May  1 00:04:41.949: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-lf8g
STEP: Checking for subpath error in container status
May  1 00:05:14.294: INFO: Deleting pod "pod-subpath-test-dynamicpv-lf8g" in namespace "provisioning-8911"
May  1 00:05:14.360: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-lf8g" to be fully deleted
STEP: Deleting pod
May  1 00:05:24.490: INFO: Deleting pod "pod-subpath-test-dynamicpv-lf8g" in namespace "provisioning-8911"
STEP: Deleting pvc
May  1 00:05:24.686: INFO: Deleting PersistentVolumeClaim "ebs.csi.aws.compvlwx"
... skipping 9 lines ...

• [SLOW TEST:48.830 seconds]
External Storage [Driver: ebs.csi.aws.com]
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
May  1 00:05:30.149: INFO: Driver "ebs.csi.aws.com" does not support volume type "InlineVolume" - skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
... skipping 50 lines ...
May  1 00:04:41.831: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(ebs.csi.aws.com) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-820-e2e-scnrhtj
STEP: creating a claim
May  1 00:04:41.895: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-rklm
STEP: Creating a pod to test atomic-volume-subpath
May  1 00:04:42.088: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-rklm" in namespace "provisioning-820" to be "Succeeded or Failed"
May  1 00:04:42.155: INFO: Pod "pod-subpath-test-dynamicpv-rklm": Phase="Pending", Reason="", readiness=false. Elapsed: 66.882873ms
May  1 00:04:44.222: INFO: Pod "pod-subpath-test-dynamicpv-rklm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134467609s
May  1 00:04:46.285: INFO: Pod "pod-subpath-test-dynamicpv-rklm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.197554635s
May  1 00:04:48.349: INFO: Pod "pod-subpath-test-dynamicpv-rklm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.261360509s
May  1 00:04:50.412: INFO: Pod "pod-subpath-test-dynamicpv-rklm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.324661157s
May  1 00:04:52.476: INFO: Pod "pod-subpath-test-dynamicpv-rklm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.388507073s
... skipping 15 lines ...
May  1 00:05:25.489: INFO: Pod "pod-subpath-test-dynamicpv-rklm": Phase="Running", Reason="", readiness=true. Elapsed: 43.400891463s
May  1 00:05:27.552: INFO: Pod "pod-subpath-test-dynamicpv-rklm": Phase="Running", Reason="", readiness=true. Elapsed: 45.464472447s
May  1 00:05:29.615: INFO: Pod "pod-subpath-test-dynamicpv-rklm": Phase="Running", Reason="", readiness=true. Elapsed: 47.527300213s
May  1 00:05:31.680: INFO: Pod "pod-subpath-test-dynamicpv-rklm": Phase="Running", Reason="", readiness=true. Elapsed: 49.592218804s
May  1 00:05:33.743: INFO: Pod "pod-subpath-test-dynamicpv-rklm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 51.655292675s
STEP: Saw pod success
May  1 00:05:33.743: INFO: Pod "pod-subpath-test-dynamicpv-rklm" satisfied condition "Succeeded or Failed"
May  1 00:05:33.806: INFO: Trying to get logs from node ip-172-20-43-157.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-rklm container test-container-subpath-dynamicpv-rklm: <nil>
STEP: delete the pod
May  1 00:05:33.943: INFO: Waiting for pod pod-subpath-test-dynamicpv-rklm to disappear
May  1 00:05:34.005: INFO: Pod pod-subpath-test-dynamicpv-rklm no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-rklm
May  1 00:05:34.005: INFO: Deleting pod "pod-subpath-test-dynamicpv-rklm" in namespace "provisioning-820"
... skipping 82 lines ...
May  1 00:04:41.852: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(ebs.csi.aws.com) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-2828-e2e-sc9lnb8
STEP: creating a claim
May  1 00:04:41.915: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-hqmc
STEP: Creating a pod to test subpath
May  1 00:04:42.110: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-hqmc" in namespace "provisioning-2828" to be "Succeeded or Failed"
May  1 00:04:42.175: INFO: Pod "pod-subpath-test-dynamicpv-hqmc": Phase="Pending", Reason="", readiness=false. Elapsed: 65.066286ms
May  1 00:04:44.237: INFO: Pod "pod-subpath-test-dynamicpv-hqmc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126859605s
May  1 00:04:46.298: INFO: Pod "pod-subpath-test-dynamicpv-hqmc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.187832929s
May  1 00:04:48.360: INFO: Pod "pod-subpath-test-dynamicpv-hqmc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.250235961s
May  1 00:04:50.421: INFO: Pod "pod-subpath-test-dynamicpv-hqmc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.310682515s
May  1 00:04:52.482: INFO: Pod "pod-subpath-test-dynamicpv-hqmc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.371967173s
... skipping 6 lines ...
May  1 00:05:06.919: INFO: Pod "pod-subpath-test-dynamicpv-hqmc": Phase="Pending", Reason="", readiness=false. Elapsed: 24.809348684s
May  1 00:05:08.982: INFO: Pod "pod-subpath-test-dynamicpv-hqmc": Phase="Pending", Reason="", readiness=false. Elapsed: 26.872164326s
May  1 00:05:11.044: INFO: Pod "pod-subpath-test-dynamicpv-hqmc": Phase="Pending", Reason="", readiness=false. Elapsed: 28.933387906s
May  1 00:05:13.104: INFO: Pod "pod-subpath-test-dynamicpv-hqmc": Phase="Pending", Reason="", readiness=false. Elapsed: 30.994165818s
May  1 00:05:15.165: INFO: Pod "pod-subpath-test-dynamicpv-hqmc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.055255433s
STEP: Saw pod success
May  1 00:05:15.165: INFO: Pod "pod-subpath-test-dynamicpv-hqmc" satisfied condition "Succeeded or Failed"
May  1 00:05:15.226: INFO: Trying to get logs from node ip-172-20-43-157.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-hqmc container test-container-subpath-dynamicpv-hqmc: <nil>
STEP: delete the pod
May  1 00:05:15.357: INFO: Waiting for pod pod-subpath-test-dynamicpv-hqmc to disappear
May  1 00:05:15.421: INFO: Pod pod-subpath-test-dynamicpv-hqmc no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-hqmc
May  1 00:05:15.421: INFO: Deleting pod "pod-subpath-test-dynamicpv-hqmc" in namespace "provisioning-2828"
... skipping 88 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
External Storage [Driver: ebs.csi.aws.com]
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Inline-volume (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240

    Driver "ebs.csi.aws.com" does not support volume type "InlineVolume" - skipping

    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 17 lines ...
May  1 00:05:30.483: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(ebs.csi.aws.com) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-6193-e2e-scqhknd
STEP: creating a claim
May  1 00:05:30.549: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-mh2t
STEP: Creating a pod to test exec-volume-test
May  1 00:05:30.749: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-mh2t" in namespace "volume-6193" to be "Succeeded or Failed"
May  1 00:05:30.813: INFO: Pod "exec-volume-test-dynamicpv-mh2t": Phase="Pending", Reason="", readiness=false. Elapsed: 64.182839ms
May  1 00:05:32.879: INFO: Pod "exec-volume-test-dynamicpv-mh2t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130098721s
May  1 00:05:34.944: INFO: Pod "exec-volume-test-dynamicpv-mh2t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.195234331s
May  1 00:05:37.011: INFO: Pod "exec-volume-test-dynamicpv-mh2t": Phase="Pending", Reason="", readiness=false. Elapsed: 6.261873605s
May  1 00:05:39.077: INFO: Pod "exec-volume-test-dynamicpv-mh2t": Phase="Pending", Reason="", readiness=false. Elapsed: 8.32785226s
May  1 00:05:41.142: INFO: Pod "exec-volume-test-dynamicpv-mh2t": Phase="Pending", Reason="", readiness=false. Elapsed: 10.393324518s
May  1 00:05:43.208: INFO: Pod "exec-volume-test-dynamicpv-mh2t": Phase="Pending", Reason="", readiness=false. Elapsed: 12.458895727s
May  1 00:05:45.274: INFO: Pod "exec-volume-test-dynamicpv-mh2t": Phase="Pending", Reason="", readiness=false. Elapsed: 14.525067471s
May  1 00:05:47.340: INFO: Pod "exec-volume-test-dynamicpv-mh2t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.591560759s
STEP: Saw pod success
May  1 00:05:47.341: INFO: Pod "exec-volume-test-dynamicpv-mh2t" satisfied condition "Succeeded or Failed"
May  1 00:05:47.405: INFO: Trying to get logs from node ip-172-20-43-157.us-west-2.compute.internal pod exec-volume-test-dynamicpv-mh2t container exec-container-dynamicpv-mh2t: <nil>
STEP: delete the pod
May  1 00:05:47.544: INFO: Waiting for pod exec-volume-test-dynamicpv-mh2t to disappear
May  1 00:05:47.608: INFO: Pod exec-volume-test-dynamicpv-mh2t no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-mh2t
May  1 00:05:47.608: INFO: Deleting pod "exec-volume-test-dynamicpv-mh2t" in namespace "volume-6193"
... skipping 82 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: ebs.csi.aws.com]
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Inline-volume (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278

    Driver "ebs.csi.aws.com" does not support volume type "InlineVolume" - skipping

    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 76 lines ...
May  1 00:04:41.869: INFO: Creating resource for dynamic PV
May  1 00:04:41.869: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(ebs.csi.aws.com) supported size:{ 1Mi} 
STEP: creating a StorageClass snapshotting-6240-e2e-sc4wvc4
STEP: creating a claim
May  1 00:04:41.933: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: [init] starting a pod to use the claim
May  1 00:04:42.127: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-564c6" in namespace "snapshotting-6240" to be "Succeeded or Failed"
May  1 00:04:42.190: INFO: Pod "pvc-snapshottable-tester-564c6": Phase="Pending", Reason="", readiness=false. Elapsed: 63.551786ms
May  1 00:04:44.255: INFO: Pod "pvc-snapshottable-tester-564c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127884274s
May  1 00:04:46.318: INFO: Pod "pvc-snapshottable-tester-564c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.191208525s
May  1 00:04:48.383: INFO: Pod "pvc-snapshottable-tester-564c6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.255687746s
May  1 00:04:50.446: INFO: Pod "pvc-snapshottable-tester-564c6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.318880132s
May  1 00:04:52.510: INFO: Pod "pvc-snapshottable-tester-564c6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.383112032s
... skipping 3 lines ...
May  1 00:05:00.765: INFO: Pod "pvc-snapshottable-tester-564c6": Phase="Pending", Reason="", readiness=false. Elapsed: 18.63827342s
May  1 00:05:02.830: INFO: Pod "pvc-snapshottable-tester-564c6": Phase="Pending", Reason="", readiness=false. Elapsed: 20.703237107s
May  1 00:05:04.893: INFO: Pod "pvc-snapshottable-tester-564c6": Phase="Pending", Reason="", readiness=false. Elapsed: 22.766545342s
May  1 00:05:06.957: INFO: Pod "pvc-snapshottable-tester-564c6": Phase="Pending", Reason="", readiness=false. Elapsed: 24.830068294s
May  1 00:05:09.021: INFO: Pod "pvc-snapshottable-tester-564c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.893647663s
STEP: Saw pod success
May  1 00:05:09.021: INFO: Pod "pvc-snapshottable-tester-564c6" satisfied condition "Succeeded or Failed"
May  1 00:05:09.632: INFO: Pod pvc-snapshottable-tester-564c6 has the following logs: 
May  1 00:05:09.632: INFO: Deleting pod "pvc-snapshottable-tester-564c6" in namespace "snapshotting-6240"
May  1 00:05:09.699: INFO: Wait up to 5m0s for pod "pvc-snapshottable-tester-564c6" to be fully deleted
May  1 00:05:09.762: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [ebs.csi.aws.commk8x4] to have phase Bound
May  1 00:05:09.829: INFO: PersistentVolumeClaim ebs.csi.aws.commk8x4 found and phase=Bound (67.120587ms)
STEP: [init] checking the claim
... skipping 35 lines ...
[It] should check snapshot fields, check restore correctly works after modifying source data, check deletion
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
STEP: checking the snapshot
STEP: checking the SnapshotContent
STEP: Modifying source data test
STEP: modifying the data in the source PVC
May  1 00:05:25.641: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-hxxc2" in namespace "snapshotting-6240" to be "Succeeded or Failed"
May  1 00:05:25.704: INFO: Pod "pvc-snapshottable-data-tester-hxxc2": Phase="Pending", Reason="", readiness=false. Elapsed: 62.872195ms
May  1 00:05:27.769: INFO: Pod "pvc-snapshottable-data-tester-hxxc2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127798725s
May  1 00:05:29.833: INFO: Pod "pvc-snapshottable-data-tester-hxxc2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.192184604s
May  1 00:05:31.897: INFO: Pod "pvc-snapshottable-data-tester-hxxc2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.256477579s
May  1 00:05:33.963: INFO: Pod "pvc-snapshottable-data-tester-hxxc2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.32192904s
May  1 00:05:36.030: INFO: Pod "pvc-snapshottable-data-tester-hxxc2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.389247313s
May  1 00:05:38.095: INFO: Pod "pvc-snapshottable-data-tester-hxxc2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.454584798s
May  1 00:05:40.160: INFO: Pod "pvc-snapshottable-data-tester-hxxc2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.519193255s
May  1 00:05:42.225: INFO: Pod "pvc-snapshottable-data-tester-hxxc2": Phase="Pending", Reason="", readiness=false. Elapsed: 16.583928267s
May  1 00:05:44.289: INFO: Pod "pvc-snapshottable-data-tester-hxxc2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.648327759s
STEP: Saw pod success
May  1 00:05:44.289: INFO: Pod "pvc-snapshottable-data-tester-hxxc2" satisfied condition "Succeeded or Failed"
May  1 00:05:44.422: INFO: Pod pvc-snapshottable-data-tester-hxxc2 has the following logs: 
May  1 00:05:44.422: INFO: Deleting pod "pvc-snapshottable-data-tester-hxxc2" in namespace "snapshotting-6240"
May  1 00:05:44.490: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-hxxc2" to be fully deleted
STEP: creating a pvc from the snapshot
STEP: starting a pod to use the claim
May  1 00:05:58.810: INFO: Running '/usr/local/bin/kubectl --server=https://api-test-cluster-17373-k8-dj5e2d-1793600245.us-west-2.elb.amazonaws.com --kubeconfig=/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-17373.k8s.local.kubeconfig --namespace=snapshotting-6240 exec restored-pvc-tester-bfhjn --namespace=snapshotting-6240 -- cat /mnt/test/data'
... skipping 168 lines ...
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:108
      
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:225
        should check snapshot fields, check restore correctly works after modifying source data, check deletion [It]
        /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243

        Unexpected error:
            <exec.CodeExitError>: {
                Err: {
                    s: "error running /usr/local/bin/kubectl --server=https://api-test-cluster-17373-k8-dj5e2d-1793600245.us-west-2.elb.amazonaws.com --kubeconfig=/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-17373.k8s.local.kubeconfig --namespace=snapshotting-6240 exec restored-pvc-tester-bfhjn --namespace=snapshotting-6240 -- cat /mnt/test/data:\nCommand stdout:\n\nstderr:\ncat: can't open '/mnt/test/data': No such file or directory\ncommand terminated with exit code 1\n\nerror:\nexit status 1",
                },
                Code: 1,
            }
            error running /usr/local/bin/kubectl --server=https://api-test-cluster-17373-k8-dj5e2d-1793600245.us-west-2.elb.amazonaws.com --kubeconfig=/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-17373.k8s.local.kubeconfig --namespace=snapshotting-6240 exec restored-pvc-tester-bfhjn --namespace=snapshotting-6240 -- cat /mnt/test/data:
            Command stdout:
            
            stderr:
            cat: can't open '/mnt/test/data': No such file or directory
            command terminated with exit code 1
            
            error:
            exit status 1
        occurred

        /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:602
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow]
... skipping 114 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: ebs.csi.aws.com]
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Inline-volume (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256

    Driver "ebs.csi.aws.com" does not support volume type "InlineVolume" - skipping

    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 99 lines ...
May  1 00:06:45.318: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(ebs.csi.aws.com) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-1404-e2e-sc5m8ks
STEP: creating a claim
May  1 00:06:45.381: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-dbsm
STEP: Creating a pod to test multi_subpath
May  1 00:06:45.573: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-dbsm" in namespace "provisioning-1404" to be "Succeeded or Failed"
May  1 00:06:45.636: INFO: Pod "pod-subpath-test-dynamicpv-dbsm": Phase="Pending", Reason="", readiness=false. Elapsed: 62.682813ms
May  1 00:06:47.699: INFO: Pod "pod-subpath-test-dynamicpv-dbsm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126153343s
May  1 00:06:49.763: INFO: Pod "pod-subpath-test-dynamicpv-dbsm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.189747966s
May  1 00:06:51.826: INFO: Pod "pod-subpath-test-dynamicpv-dbsm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.253451s
May  1 00:06:53.892: INFO: Pod "pod-subpath-test-dynamicpv-dbsm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.319089116s
May  1 00:06:55.956: INFO: Pod "pod-subpath-test-dynamicpv-dbsm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.382591112s
STEP: Saw pod success
May  1 00:06:55.956: INFO: Pod "pod-subpath-test-dynamicpv-dbsm" satisfied condition "Succeeded or Failed"
May  1 00:06:56.018: INFO: Trying to get logs from node ip-172-20-43-157.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-dbsm container test-container-subpath-dynamicpv-dbsm: <nil>
STEP: delete the pod
May  1 00:06:56.156: INFO: Waiting for pod pod-subpath-test-dynamicpv-dbsm to disappear
May  1 00:06:56.219: INFO: Pod pod-subpath-test-dynamicpv-dbsm no longer exists
STEP: Deleting pod
May  1 00:06:56.219: INFO: Deleting pod "pod-subpath-test-dynamicpv-dbsm" in namespace "provisioning-1404"
... skipping 596 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: ebs.csi.aws.com]
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Pre-provisioned PV (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267

    Driver "ebs.csi.aws.com" does not support volume type "PreprovisionedPV" - skipping

    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 99 lines ...
May  1 00:07:12.713: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(ebs.csi.aws.com) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-33-e2e-scqqxcd
STEP: creating a claim
May  1 00:07:12.777: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-5z7z
STEP: Creating a pod to test subpath
May  1 00:07:12.970: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-5z7z" in namespace "provisioning-33" to be "Succeeded or Failed"
May  1 00:07:13.033: INFO: Pod "pod-subpath-test-dynamicpv-5z7z": Phase="Pending", Reason="", readiness=false. Elapsed: 62.512328ms
May  1 00:07:15.097: INFO: Pod "pod-subpath-test-dynamicpv-5z7z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126252387s
May  1 00:07:17.162: INFO: Pod "pod-subpath-test-dynamicpv-5z7z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.191475783s
May  1 00:07:19.226: INFO: Pod "pod-subpath-test-dynamicpv-5z7z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.255936311s
May  1 00:07:21.289: INFO: Pod "pod-subpath-test-dynamicpv-5z7z": Phase="Pending", Reason="", readiness=false. Elapsed: 8.318995499s
May  1 00:07:23.353: INFO: Pod "pod-subpath-test-dynamicpv-5z7z": Phase="Pending", Reason="", readiness=false. Elapsed: 10.383009164s
May  1 00:07:25.418: INFO: Pod "pod-subpath-test-dynamicpv-5z7z": Phase="Pending", Reason="", readiness=false. Elapsed: 12.447397689s
May  1 00:07:27.483: INFO: Pod "pod-subpath-test-dynamicpv-5z7z": Phase="Pending", Reason="", readiness=false. Elapsed: 14.512426147s
May  1 00:07:29.547: INFO: Pod "pod-subpath-test-dynamicpv-5z7z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.576626724s
STEP: Saw pod success
May  1 00:07:29.547: INFO: Pod "pod-subpath-test-dynamicpv-5z7z" satisfied condition "Succeeded or Failed"
May  1 00:07:29.610: INFO: Trying to get logs from node ip-172-20-43-157.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-5z7z container test-container-subpath-dynamicpv-5z7z: <nil>
STEP: delete the pod
May  1 00:07:29.753: INFO: Waiting for pod pod-subpath-test-dynamicpv-5z7z to disappear
May  1 00:07:29.816: INFO: Pod pod-subpath-test-dynamicpv-5z7z no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-5z7z
May  1 00:07:29.816: INFO: Deleting pod "pod-subpath-test-dynamicpv-5z7z" in namespace "provisioning-33"
... skipping 247 lines ...

    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
SSS
------------------------------
External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volumeMode 
  should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296

[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  1 00:07:46.111: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-17373.k8s.local.kubeconfig
STEP: Building a namespace api object, basename volumemode
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296
May  1 00:07:46.424: INFO: Creating resource for dynamic PV
May  1 00:07:46.424: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(ebs.csi.aws.com) supported size:{ 1Mi} 
STEP: creating a StorageClass volumemode-624-e2e-schqfv6
STEP: creating a claim
STEP: Creating pod
STEP: Waiting for the pod to fail
May  1 00:07:52.890: INFO: Deleting pod "pod-d3b0e027-bcd8-4127-89b0-90c5fe0e046c" in namespace "volumemode-624"
May  1 00:07:52.955: INFO: Wait up to 5m0s for pod "pod-d3b0e027-bcd8-4127-89b0-90c5fe0e046c" to be fully deleted
STEP: Deleting pvc
May  1 00:08:05.208: INFO: Deleting PersistentVolumeClaim "ebs.csi.aws.comb4ndg"
May  1 00:08:05.273: INFO: Waiting up to 5m0s for PersistentVolume pvc-0ffcede1-db1d-4490-8274-2345c78b7a3e to get deleted
May  1 00:08:05.335: INFO: PersistentVolume pvc-0ffcede1-db1d-4490-8274-2345c78b7a3e found and phase=Released (62.639442ms)
... skipping 7 lines ...

• [SLOW TEST:24.485 seconds]
External Storage [Driver: ebs.csi.aws.com]
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (block volmode)] volumeMode
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail to use a volume in a pod with mismatched mode [Slow]
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185
... skipping 171 lines ...
May  1 00:07:59.879: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(ebs.csi.aws.com) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-6613-e2e-sctq5wp
STEP: creating a claim
May  1 00:07:59.940: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-kh6n
STEP: Creating a pod to test exec-volume-test
May  1 00:08:00.124: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-kh6n" in namespace "volume-6613" to be "Succeeded or Failed"
May  1 00:08:00.185: INFO: Pod "exec-volume-test-dynamicpv-kh6n": Phase="Pending", Reason="", readiness=false. Elapsed: 60.646608ms
May  1 00:08:02.246: INFO: Pod "exec-volume-test-dynamicpv-kh6n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121466508s
May  1 00:08:04.308: INFO: Pod "exec-volume-test-dynamicpv-kh6n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.18350497s
May  1 00:08:06.368: INFO: Pod "exec-volume-test-dynamicpv-kh6n": Phase="Pending", Reason="", readiness=false. Elapsed: 6.244111184s
May  1 00:08:08.431: INFO: Pod "exec-volume-test-dynamicpv-kh6n": Phase="Pending", Reason="", readiness=false. Elapsed: 8.307083215s
May  1 00:08:10.493: INFO: Pod "exec-volume-test-dynamicpv-kh6n": Phase="Pending", Reason="", readiness=false. Elapsed: 10.368485392s
May  1 00:08:12.553: INFO: Pod "exec-volume-test-dynamicpv-kh6n": Phase="Pending", Reason="", readiness=false. Elapsed: 12.428972102s
May  1 00:08:14.615: INFO: Pod "exec-volume-test-dynamicpv-kh6n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.490673959s
STEP: Saw pod success
May  1 00:08:14.615: INFO: Pod "exec-volume-test-dynamicpv-kh6n" satisfied condition "Succeeded or Failed"
May  1 00:08:14.675: INFO: Trying to get logs from node ip-172-20-43-157.us-west-2.compute.internal pod exec-volume-test-dynamicpv-kh6n container exec-container-dynamicpv-kh6n: <nil>
STEP: delete the pod
May  1 00:08:14.806: INFO: Waiting for pod exec-volume-test-dynamicpv-kh6n to disappear
May  1 00:08:14.866: INFO: Pod exec-volume-test-dynamicpv-kh6n no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-kh6n
May  1 00:08:14.866: INFO: Deleting pod "exec-volume-test-dynamicpv-kh6n" in namespace "volume-6613"
... skipping 79 lines ...
May  1 00:08:11.355: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(ebs.csi.aws.com) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-8492-e2e-scm59pj
STEP: creating a claim
May  1 00:08:11.419: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-pzgd
STEP: Creating a pod to test exec-volume-test
May  1 00:08:11.610: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-pzgd" in namespace "volume-8492" to be "Succeeded or Failed"
May  1 00:08:11.672: INFO: Pod "exec-volume-test-dynamicpv-pzgd": Phase="Pending", Reason="", readiness=false. Elapsed: 62.705623ms
May  1 00:08:13.736: INFO: Pod "exec-volume-test-dynamicpv-pzgd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126461943s
May  1 00:08:15.800: INFO: Pod "exec-volume-test-dynamicpv-pzgd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.190662419s
May  1 00:08:17.866: INFO: Pod "exec-volume-test-dynamicpv-pzgd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.256192535s
May  1 00:08:19.929: INFO: Pod "exec-volume-test-dynamicpv-pzgd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.319428103s
May  1 00:08:21.993: INFO: Pod "exec-volume-test-dynamicpv-pzgd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.383409982s
May  1 00:08:24.056: INFO: Pod "exec-volume-test-dynamicpv-pzgd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.446467927s
May  1 00:08:26.120: INFO: Pod "exec-volume-test-dynamicpv-pzgd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.510180392s
STEP: Saw pod success
May  1 00:08:26.120: INFO: Pod "exec-volume-test-dynamicpv-pzgd" satisfied condition "Succeeded or Failed"
May  1 00:08:26.183: INFO: Trying to get logs from node ip-172-20-43-157.us-west-2.compute.internal pod exec-volume-test-dynamicpv-pzgd container exec-container-dynamicpv-pzgd: <nil>
STEP: delete the pod
May  1 00:08:26.320: INFO: Waiting for pod exec-volume-test-dynamicpv-pzgd to disappear
May  1 00:08:26.383: INFO: Pod exec-volume-test-dynamicpv-pzgd no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-pzgd
May  1 00:08:26.383: INFO: Deleting pod "exec-volume-test-dynamicpv-pzgd" in namespace "volume-8492"
... skipping 48 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: ebs.csi.aws.com]
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (delayed binding)] topology
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

    Driver "ebs.csi.aws.com" does not support topology - skipping

    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:92
------------------------------
... skipping 6 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: ebs.csi.aws.com]
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Pre-provisioned PV (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278

    Driver "ebs.csi.aws.com" does not support volume type "PreprovisionedPV" - skipping

    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 37 lines ...
May  1 00:08:21.947: INFO: Creating resource for dynamic PV
May  1 00:08:21.947: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(ebs.csi.aws.com) supported size:{ 1Gi} 
STEP: creating a StorageClass volume-expand-6416-e2e-sc2jng2
STEP: creating a claim
STEP: Expanding non-expandable pvc
May  1 00:08:22.144: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
May  1 00:08:22.275: INFO: Error updating pvc ebs.csi.aws.com6sf76: PersistentVolumeClaim "ebs.csi.aws.com6sf76" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6416-e2e-sc2jng2",
  	... // 2 identical fields
  }

May  1 00:08:24.406: INFO: Error updating pvc ebs.csi.aws.com6sf76: PersistentVolumeClaim "ebs.csi.aws.com6sf76" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6416-e2e-sc2jng2",
  	... // 2 identical fields
  }

May  1 00:08:26.405: INFO: Error updating pvc ebs.csi.aws.com6sf76: PersistentVolumeClaim "ebs.csi.aws.com6sf76" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6416-e2e-sc2jng2",
  	... // 2 identical fields
  }

May  1 00:08:28.406: INFO: Error updating pvc ebs.csi.aws.com6sf76: PersistentVolumeClaim "ebs.csi.aws.com6sf76" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6416-e2e-sc2jng2",
  	... // 2 identical fields
  }

May  1 00:08:30.405: INFO: Error updating pvc ebs.csi.aws.com6sf76: PersistentVolumeClaim "ebs.csi.aws.com6sf76" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6416-e2e-sc2jng2",
  	... // 2 identical fields
  }

May  1 00:08:32.406: INFO: Error updating pvc ebs.csi.aws.com6sf76: PersistentVolumeClaim "ebs.csi.aws.com6sf76" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6416-e2e-sc2jng2",
  	... // 2 identical fields
  }

May  1 00:08:34.406: INFO: Error updating pvc ebs.csi.aws.com6sf76: PersistentVolumeClaim "ebs.csi.aws.com6sf76" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6416-e2e-sc2jng2",
  	... // 2 identical fields
  }

May  1 00:08:36.406: INFO: Error updating pvc ebs.csi.aws.com6sf76: PersistentVolumeClaim "ebs.csi.aws.com6sf76" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6416-e2e-sc2jng2",
  	... // 2 identical fields
  }

May  1 00:08:38.407: INFO: Error updating pvc ebs.csi.aws.com6sf76: PersistentVolumeClaim "ebs.csi.aws.com6sf76" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6416-e2e-sc2jng2",
  	... // 2 identical fields
  }

May  1 00:08:40.406: INFO: Error updating pvc ebs.csi.aws.com6sf76: PersistentVolumeClaim "ebs.csi.aws.com6sf76" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6416-e2e-sc2jng2",
  	... // 2 identical fields
  }

May  1 00:08:42.406: INFO: Error updating pvc ebs.csi.aws.com6sf76: PersistentVolumeClaim "ebs.csi.aws.com6sf76" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6416-e2e-sc2jng2",
  	... // 2 identical fields
  }

May  1 00:08:44.409: INFO: Error updating pvc ebs.csi.aws.com6sf76: PersistentVolumeClaim "ebs.csi.aws.com6sf76" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6416-e2e-sc2jng2",
  	... // 2 identical fields
  }

May  1 00:08:46.407: INFO: Error updating pvc ebs.csi.aws.com6sf76: PersistentVolumeClaim "ebs.csi.aws.com6sf76" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6416-e2e-sc2jng2",
  	... // 2 identical fields
  }

May  1 00:08:48.406: INFO: Error updating pvc ebs.csi.aws.com6sf76: PersistentVolumeClaim "ebs.csi.aws.com6sf76" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6416-e2e-sc2jng2",
  	... // 2 identical fields
  }

May  1 00:08:50.407: INFO: Error updating pvc ebs.csi.aws.com6sf76: PersistentVolumeClaim "ebs.csi.aws.com6sf76" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6416-e2e-sc2jng2",
  	... // 2 identical fields
  }

May  1 00:08:52.408: INFO: Error updating pvc ebs.csi.aws.com6sf76: PersistentVolumeClaim "ebs.csi.aws.com6sf76" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6416-e2e-sc2jng2",
  	... // 2 identical fields
  }

May  1 00:08:52.537: INFO: Error updating pvc ebs.csi.aws.com6sf76: PersistentVolumeClaim "ebs.csi.aws.com6sf76" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 110 lines ...
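
The repeated rejections above are the API server's PVC validation at work: after creation, everything in spec is immutable except resources.requests, and that exception only applies to bound claims (this claim still shows VolumeName: ""). Below is a minimal client-go sketch of the kind of resize update the suite keeps retrying; the kubeconfig path is a placeholder and the whole snippet is illustrative, not the e2e framework's own helper.

// Illustrative sketch (not the e2e framework code): try to grow a PVC's
// storage request with client-go and surface the API server's response.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func expandPVC(ctx context.Context, cs kubernetes.Interface, ns, name, newSize string) error {
	pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	// Only spec.resources.requests may change after creation, and only for a
	// bound claim; otherwise the API server answers with the
	// "spec: Forbidden: spec is immutable ..." error seen in this log.
	pvc.Spec.Resources.Requests[corev1.ResourceStorage] = resource.MustParse(newSize)
	_, err = cs.CoreV1().PersistentVolumeClaims(ns).Update(ctx, pvc, metav1.UpdateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := expandPVC(context.TODO(), cs, "volume-expand-6416", "ebs.csi.aws.com6sf76", "2Gi"); err != nil {
		fmt.Println("resize rejected:", err)
	}
}

Against an unbound claim, or a StorageClass that has not opted into expansion, this Update call returns exactly the Forbidden error logged above, which is what the "Expanding non-expandable pvc" step is asserting.
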
May  1 00:08:47.501: INFO: Creating resource for dynamic PV
May  1 00:08:47.501: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(ebs.csi.aws.com) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-5969-e2e-sc54g2n
STEP: creating a claim
May  1 00:08:47.564: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod to format volume volume-prep-provisioning-5969
May  1 00:08:47.752: INFO: Waiting up to 5m0s for pod "volume-prep-provisioning-5969" in namespace "provisioning-5969" to be "Succeeded or Failed"
May  1 00:08:47.814: INFO: Pod "volume-prep-provisioning-5969": Phase="Pending", Reason="", readiness=false. Elapsed: 61.595905ms
May  1 00:08:49.876: INFO: Pod "volume-prep-provisioning-5969": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123798734s
May  1 00:08:51.938: INFO: Pod "volume-prep-provisioning-5969": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185685234s
May  1 00:08:54.001: INFO: Pod "volume-prep-provisioning-5969": Phase="Pending", Reason="", readiness=false. Elapsed: 6.248779539s
May  1 00:08:56.064: INFO: Pod "volume-prep-provisioning-5969": Phase="Pending", Reason="", readiness=false. Elapsed: 8.311998607s
May  1 00:08:58.128: INFO: Pod "volume-prep-provisioning-5969": Phase="Pending", Reason="", readiness=false. Elapsed: 10.375646861s
May  1 00:09:00.190: INFO: Pod "volume-prep-provisioning-5969": Phase="Pending", Reason="", readiness=false. Elapsed: 12.437405013s
May  1 00:09:02.252: INFO: Pod "volume-prep-provisioning-5969": Phase="Pending", Reason="", readiness=false. Elapsed: 14.500114525s
May  1 00:09:04.314: INFO: Pod "volume-prep-provisioning-5969": Phase="Pending", Reason="", readiness=false. Elapsed: 16.561947958s
May  1 00:09:06.376: INFO: Pod "volume-prep-provisioning-5969": Phase="Pending", Reason="", readiness=false. Elapsed: 18.623874729s
May  1 00:09:08.439: INFO: Pod "volume-prep-provisioning-5969": Phase="Pending", Reason="", readiness=false. Elapsed: 20.686979921s
May  1 00:09:10.502: INFO: Pod "volume-prep-provisioning-5969": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.749241709s
STEP: Saw pod success
May  1 00:09:10.502: INFO: Pod "volume-prep-provisioning-5969" satisfied condition "Succeeded or Failed"
May  1 00:09:10.502: INFO: Deleting pod "volume-prep-provisioning-5969" in namespace "provisioning-5969"
May  1 00:09:10.572: INFO: Wait up to 5m0s for pod "volume-prep-provisioning-5969" to be fully deleted
STEP: Creating pod pod-subpath-test-dynamicpv-77f6
STEP: Checking for subpath error in container status
May  1 00:09:28.818: INFO: Deleting pod "pod-subpath-test-dynamicpv-77f6" in namespace "provisioning-5969"
May  1 00:09:28.887: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-77f6" to be fully deleted
STEP: Deleting pod
May  1 00:09:28.949: INFO: Deleting pod "pod-subpath-test-dynamicpv-77f6" in namespace "provisioning-5969"
STEP: Deleting pvc
May  1 00:09:29.132: INFO: Deleting PersistentVolumeClaim "ebs.csi.aws.com5hml8"
... skipping 499 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: ebs.csi.aws.com]
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240

    Driver ebs.csi.aws.com doesn't support ntfs -- skipping

    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
... skipping 174 lines ...
May  1 00:09:44.942: INFO: Creating resource for dynamic PV
May  1 00:09:44.942: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(ebs.csi.aws.com) supported size:{ 1Mi} 
STEP: creating a StorageClass snapshotting-6375-e2e-scnl8lx
STEP: creating a claim
May  1 00:09:45.008: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: [init] starting a pod to use the claim
May  1 00:09:45.197: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-clpx2" in namespace "snapshotting-6375" to be "Succeeded or Failed"
May  1 00:09:45.258: INFO: Pod "pvc-snapshottable-tester-clpx2": Phase="Pending", Reason="", readiness=false. Elapsed: 61.06468ms
May  1 00:09:47.320: INFO: Pod "pvc-snapshottable-tester-clpx2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123446787s
May  1 00:09:49.382: INFO: Pod "pvc-snapshottable-tester-clpx2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185472707s
May  1 00:09:51.446: INFO: Pod "pvc-snapshottable-tester-clpx2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.249204146s
May  1 00:09:53.509: INFO: Pod "pvc-snapshottable-tester-clpx2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.312359719s
May  1 00:09:55.571: INFO: Pod "pvc-snapshottable-tester-clpx2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.374395121s
May  1 00:09:57.638: INFO: Pod "pvc-snapshottable-tester-clpx2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.441378153s
May  1 00:09:59.701: INFO: Pod "pvc-snapshottable-tester-clpx2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.504108664s
STEP: Saw pod success
May  1 00:09:59.701: INFO: Pod "pvc-snapshottable-tester-clpx2" satisfied condition "Succeeded or Failed"
May  1 00:09:59.831: INFO: Pod pvc-snapshottable-tester-clpx2 has the following logs: 
May  1 00:09:59.831: INFO: Deleting pod "pvc-snapshottable-tester-clpx2" in namespace "snapshotting-6375"
May  1 00:09:59.902: INFO: Wait up to 5m0s for pod "pvc-snapshottable-tester-clpx2" to be fully deleted
May  1 00:09:59.963: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [ebs.csi.aws.comp9xsj] to have phase Bound
May  1 00:10:00.025: INFO: PersistentVolumeClaim ebs.csi.aws.comp9xsj found and phase=Bound (62.2066ms)
STEP: [init] checking the claim
... skipping 21 lines ...
[It] should check snapshot fields, check restore correctly works after modifying source data, check deletion
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
STEP: checking the snapshot
STEP: checking the SnapshotContent
STEP: Modifying source data test
STEP: modifying the data in the source PVC
May  1 00:10:23.431: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-g57gw" in namespace "snapshotting-6375" to be "Succeeded or Failed"
May  1 00:10:23.493: INFO: Pod "pvc-snapshottable-data-tester-g57gw": Phase="Pending", Reason="", readiness=false. Elapsed: 61.665058ms
May  1 00:10:25.566: INFO: Pod "pvc-snapshottable-data-tester-g57gw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134407188s
May  1 00:10:27.628: INFO: Pod "pvc-snapshottable-data-tester-g57gw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.196380503s
May  1 00:10:29.690: INFO: Pod "pvc-snapshottable-data-tester-g57gw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.258765152s
May  1 00:10:31.753: INFO: Pod "pvc-snapshottable-data-tester-g57gw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.321301958s
May  1 00:10:33.818: INFO: Pod "pvc-snapshottable-data-tester-g57gw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.386373938s
STEP: Saw pod success
May  1 00:10:33.818: INFO: Pod "pvc-snapshottable-data-tester-g57gw" satisfied condition "Succeeded or Failed"
May  1 00:10:33.956: INFO: Pod pvc-snapshottable-data-tester-g57gw has the following logs: 
May  1 00:10:33.956: INFO: Deleting pod "pvc-snapshottable-data-tester-g57gw" in namespace "snapshotting-6375"
May  1 00:10:34.025: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-g57gw" to be fully deleted
STEP: creating a pvc from the snapshot
STEP: starting a pod to use the claim
May  1 00:10:44.340: INFO: Running '/usr/local/bin/kubectl --server=https://api-test-cluster-17373-k8-dj5e2d-1793600245.us-west-2.elb.amazonaws.com --kubeconfig=/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-17373.k8s.local.kubeconfig --namespace=snapshotting-6375 exec restored-pvc-tester-jn2jm --namespace=snapshotting-6375 -- cat /mnt/test/data'
... skipping 32 lines ...
May  1 00:11:09.805: INFO: volumesnapshotcontents snapcontent-b207220a-7f43-40db-a919-1a734a35de09 has been found and is not deleted
May  1 00:11:10.867: INFO: volumesnapshotcontents snapcontent-b207220a-7f43-40db-a919-1a734a35de09 has been found and is not deleted
May  1 00:11:11.930: INFO: volumesnapshotcontents snapcontent-b207220a-7f43-40db-a919-1a734a35de09 has been found and is not deleted
May  1 00:11:12.993: INFO: volumesnapshotcontents snapcontent-b207220a-7f43-40db-a919-1a734a35de09 has been found and is not deleted
May  1 00:11:14.056: INFO: volumesnapshotcontents snapcontent-b207220a-7f43-40db-a919-1a734a35de09 has been found and is not deleted
May  1 00:11:15.119: INFO: volumesnapshotcontents snapcontent-b207220a-7f43-40db-a919-1a734a35de09 has been found and is not deleted
May  1 00:11:16.120: INFO: WaitUntil failed after reaching the timeout 30s
[AfterEach] volume snapshot controller
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:221
May  1 00:11:16.183: INFO: Pod restored-pvc-tester-jn2jm has the following logs: 
May  1 00:11:16.184: INFO: Deleting pod "restored-pvc-tester-jn2jm" in namespace "snapshotting-6375"
May  1 00:11:16.247: INFO: Wait up to 5m0s for pod "restored-pvc-tester-jn2jm" to be fully deleted
May  1 00:11:54.372: INFO: deleting claim "snapshotting-6375"/"pvc-b82c5"
... skipping 52 lines ...

    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
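
For orientation, the step that fails in this suite ("creating a pvc from the snapshot", then reading /mnt/test/data from the restored pod) is a plain PVC whose dataSource points at the VolumeSnapshot. A hedged sketch of such a restore claim, with illustrative names rather than the ones generated in this run:

// Illustrative sketch: a PVC restored from a VolumeSnapshot via dataSource.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func restoredPVC(ns, snapshotName, storageClass string) *corev1.PersistentVolumeClaim {
	apiGroup := "snapshot.storage.k8s.io"
	return &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "restored-pvc-", Namespace: ns},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &storageClass,
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("1Gi")},
			},
			// The restore is expressed entirely through DataSource: the CSI
			// driver provisions the new volume from the snapshot's content.
			DataSource: &corev1.TypedLocalObjectReference{
				APIGroup: &apiGroup,
				Kind:     "VolumeSnapshot",
				Name:     snapshotName,
			},
		},
	}
}

func main() {
	pvc := restoredPVC("snapshotting-6375", "example-snapshot", "example-sc") // illustrative names
	fmt.Println(pvc.Spec.DataSource.Kind, "->", pvc.Spec.DataSource.Name)
}

The provisioner then asks ebs.csi.aws.com to create the new volume from the snapshot content; the failing cat /mnt/test/data check is verifying that the restored volume actually carries the source data.
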
S
------------------------------
External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath 
  should fail if subpath file is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256

[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  1 00:11:31.280: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-17373.k8s.local.kubeconfig
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if subpath file is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256
May  1 00:11:31.584: INFO: Creating resource for dynamic PV
May  1 00:11:31.584: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(ebs.csi.aws.com) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-6614-e2e-scr2drb
STEP: creating a claim
May  1 00:11:31.646: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-zj2w
STEP: Checking for subpath error in container status
May  1 00:11:47.956: INFO: Deleting pod "pod-subpath-test-dynamicpv-zj2w" in namespace "provisioning-6614"
May  1 00:11:48.018: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-zj2w" to be fully deleted
STEP: Deleting pod
May  1 00:11:54.139: INFO: Deleting pod "pod-subpath-test-dynamicpv-zj2w" in namespace "provisioning-6614"
STEP: Deleting pvc
May  1 00:11:54.324: INFO: Deleting PersistentVolumeClaim "ebs.csi.aws.comk66x9"
... skipping 11 lines ...

• [SLOW TEST:38.545 seconds]
External Storage [Driver: ebs.csi.aws.com]
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath file is outside the volume [Slow][LinuxOnly]
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256
------------------------------
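
As background for the subPath cases in this part of the log: each one mounts a single dynamically provisioned volume into a test container through volumeMounts[].subPath, and the negative variants ("outside the volume", "backstepping") expect kubelet to refuse the mount and report the error in container status instead of starting the pod. A small illustrative sketch of the wiring (names are made up for the example):

// Illustrative sketch: a container that mounts one PVC-backed volume under a
// subPath. The negative e2e cases use a subPath that escapes the volume root
// and expect kubelet to reject it.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func subPathContainer(volumeName string) corev1.Container {
	return corev1.Container{
		Name:    "subpath-test",
		Image:   "busybox",
		Command: []string{"sh", "-c", "cat /test-volume/data"},
		VolumeMounts: []corev1.VolumeMount{{
			Name:      volumeName,
			MountPath: "/test-volume",
			// Only this directory of the volume is visible to the container;
			// a value that steps outside the volume is what the negative
			// cases probe and must fail.
			SubPath: "provisioning",
		}},
	}
}

func main() {
	c := subPathContainer("test-volume")
	fmt.Println(c.VolumeMounts[0].SubPath)
}

The "Checking for subpath error in container status" steps above are the assertions on that reported error.
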
External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand 
  Verify if offline PVC expansion works
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174

... skipping 313 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: ebs.csi.aws.com]
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Pre-provisioned PV (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240

    Driver "ebs.csi.aws.com" does not support volume type "PreprovisionedPV" - skipping

    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 77 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: ebs.csi.aws.com]
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Inline-volume (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267

    Driver "ebs.csi.aws.com" does not support volume type "InlineVolume" - skipping

    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 16 lines ...
May  1 00:12:02.757: INFO: Creating resource for dynamic PV
May  1 00:12:02.757: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(ebs.csi.aws.com) supported size:{ 1Mi} 
STEP: creating a StorageClass snapshotting-4657-e2e-sczwc9l
STEP: creating a claim
May  1 00:12:02.819: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: [init] starting a pod to use the claim
May  1 00:12:03.008: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-ndkj5" in namespace "snapshotting-4657" to be "Succeeded or Failed"
May  1 00:12:03.069: INFO: Pod "pvc-snapshottable-tester-ndkj5": Phase="Pending", Reason="", readiness=false. Elapsed: 61.529618ms
May  1 00:12:05.131: INFO: Pod "pvc-snapshottable-tester-ndkj5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123830114s
May  1 00:12:07.193: INFO: Pod "pvc-snapshottable-tester-ndkj5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185885556s
May  1 00:12:09.256: INFO: Pod "pvc-snapshottable-tester-ndkj5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.248864155s
May  1 00:12:11.319: INFO: Pod "pvc-snapshottable-tester-ndkj5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.310992129s
May  1 00:12:13.382: INFO: Pod "pvc-snapshottable-tester-ndkj5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.37393997s
May  1 00:12:15.443: INFO: Pod "pvc-snapshottable-tester-ndkj5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.435748591s
May  1 00:12:17.506: INFO: Pod "pvc-snapshottable-tester-ndkj5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.498513545s
STEP: Saw pod success
May  1 00:12:17.506: INFO: Pod "pvc-snapshottable-tester-ndkj5" satisfied condition "Succeeded or Failed"
May  1 00:12:17.637: INFO: Pod pvc-snapshottable-tester-ndkj5 has the following logs: 
May  1 00:12:17.638: INFO: Deleting pod "pvc-snapshottable-tester-ndkj5" in namespace "snapshotting-4657"
May  1 00:12:17.704: INFO: Wait up to 5m0s for pod "pvc-snapshottable-tester-ndkj5" to be fully deleted
May  1 00:12:17.765: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [ebs.csi.aws.comv5h2n] to have phase Bound
May  1 00:12:17.827: INFO: PersistentVolumeClaim ebs.csi.aws.comv5h2n found and phase=Bound (61.807732ms)
STEP: [init] checking the claim
... skipping 28 lines ...
[It] should check snapshot fields, check restore correctly works after modifying source data, check deletion
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
STEP: checking the snapshot
STEP: checking the SnapshotContent
STEP: Modifying source data test
STEP: modifying the data in the source PVC
May  1 00:12:55.647: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-j9xzb" in namespace "snapshotting-4657" to be "Succeeded or Failed"
May  1 00:12:55.708: INFO: Pod "pvc-snapshottable-data-tester-j9xzb": Phase="Pending", Reason="", readiness=false. Elapsed: 61.497225ms
May  1 00:12:57.771: INFO: Pod "pvc-snapshottable-data-tester-j9xzb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124082122s
May  1 00:12:59.835: INFO: Pod "pvc-snapshottable-data-tester-j9xzb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.187749994s
May  1 00:13:01.898: INFO: Pod "pvc-snapshottable-data-tester-j9xzb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.250823274s
May  1 00:13:03.960: INFO: Pod "pvc-snapshottable-data-tester-j9xzb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.3126361s
May  1 00:13:06.022: INFO: Pod "pvc-snapshottable-data-tester-j9xzb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.375463008s
STEP: Saw pod success
May  1 00:13:06.023: INFO: Pod "pvc-snapshottable-data-tester-j9xzb" satisfied condition "Succeeded or Failed"
May  1 00:13:06.154: INFO: Pod pvc-snapshottable-data-tester-j9xzb has the following logs: 
May  1 00:13:06.154: INFO: Deleting pod "pvc-snapshottable-data-tester-j9xzb" in namespace "snapshotting-4657"
May  1 00:13:06.223: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-j9xzb" to be fully deleted
STEP: creating a pvc from the snapshot
STEP: starting a pod to use the claim
May  1 00:13:20.535: INFO: Running '/usr/local/bin/kubectl --server=https://api-test-cluster-17373-k8-dj5e2d-1793600245.us-west-2.elb.amazonaws.com --kubeconfig=/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-17373.k8s.local.kubeconfig --namespace=snapshotting-4657 exec restored-pvc-tester-r974n --namespace=snapshotting-4657 -- cat /mnt/test/data'
... skipping 56 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: ebs.csi.aws.com]
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail to use a volume in a pod with mismatched mode [Slow] [BeforeEach]
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296

    Driver "ebs.csi.aws.com" does not support volume type "PreprovisionedPV" - skipping

    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 56 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: ebs.csi.aws.com]
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (immediate binding)] topology
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

    Driver "ebs.csi.aws.com" does not support topology - skipping

    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:92
------------------------------
... skipping 582 lines ...
May  1 00:13:31.607: INFO: Creating resource for dynamic PV
May  1 00:13:31.607: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(ebs.csi.aws.com) supported size:{ 1Mi} 
STEP: creating a StorageClass snapshotting-5719-e2e-scjp5w8
STEP: creating a claim
May  1 00:13:31.674: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: [init] starting a pod to use the claim
May  1 00:13:31.874: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-9z78x" in namespace "snapshotting-5719" to be "Succeeded or Failed"
May  1 00:13:31.938: INFO: Pod "pvc-snapshottable-tester-9z78x": Phase="Pending", Reason="", readiness=false. Elapsed: 64.704696ms
May  1 00:13:34.004: INFO: Pod "pvc-snapshottable-tester-9z78x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129871639s
May  1 00:13:36.069: INFO: Pod "pvc-snapshottable-tester-9z78x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.195011264s
May  1 00:13:38.134: INFO: Pod "pvc-snapshottable-tester-9z78x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.259958297s
May  1 00:13:40.200: INFO: Pod "pvc-snapshottable-tester-9z78x": Phase="Pending", Reason="", readiness=false. Elapsed: 8.32628401s
May  1 00:13:42.265: INFO: Pod "pvc-snapshottable-tester-9z78x": Phase="Pending", Reason="", readiness=false. Elapsed: 10.391067243s
May  1 00:13:44.331: INFO: Pod "pvc-snapshottable-tester-9z78x": Phase="Pending", Reason="", readiness=false. Elapsed: 12.45683029s
May  1 00:13:46.396: INFO: Pod "pvc-snapshottable-tester-9z78x": Phase="Pending", Reason="", readiness=false. Elapsed: 14.522452332s
May  1 00:13:48.461: INFO: Pod "pvc-snapshottable-tester-9z78x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.587650667s
STEP: Saw pod success
May  1 00:13:48.461: INFO: Pod "pvc-snapshottable-tester-9z78x" satisfied condition "Succeeded or Failed"
May  1 00:13:48.593: INFO: Pod pvc-snapshottable-tester-9z78x has the following logs: 
May  1 00:13:48.593: INFO: Deleting pod "pvc-snapshottable-tester-9z78x" in namespace "snapshotting-5719"
May  1 00:13:48.665: INFO: Wait up to 5m0s for pod "pvc-snapshottable-tester-9z78x" to be fully deleted
May  1 00:13:48.729: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [ebs.csi.aws.comvmnbk] to have phase Bound
May  1 00:13:48.794: INFO: PersistentVolumeClaim ebs.csi.aws.comvmnbk found and phase=Bound (64.386087ms)
STEP: [init] checking the claim
... skipping 46 lines ...
[It] should check snapshot fields, check restore correctly works after modifying source data, check deletion
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:243
STEP: checking the snapshot
STEP: checking the SnapshotContent
STEP: Modifying source data test
STEP: modifying the data in the source PVC
May  1 00:14:27.384: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-rjkts" in namespace "snapshotting-5719" to be "Succeeded or Failed"
May  1 00:14:27.454: INFO: Pod "pvc-snapshottable-data-tester-rjkts": Phase="Pending", Reason="", readiness=false. Elapsed: 70.282624ms
May  1 00:14:29.519: INFO: Pod "pvc-snapshottable-data-tester-rjkts": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13498499s
May  1 00:14:31.585: INFO: Pod "pvc-snapshottable-data-tester-rjkts": Phase="Pending", Reason="", readiness=false. Elapsed: 4.200823s
May  1 00:14:33.650: INFO: Pod "pvc-snapshottable-data-tester-rjkts": Phase="Pending", Reason="", readiness=false. Elapsed: 6.266023595s
May  1 00:14:35.715: INFO: Pod "pvc-snapshottable-data-tester-rjkts": Phase="Pending", Reason="", readiness=false. Elapsed: 8.330960556s
May  1 00:14:37.781: INFO: Pod "pvc-snapshottable-data-tester-rjkts": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.397166035s
STEP: Saw pod success
May  1 00:14:37.781: INFO: Pod "pvc-snapshottable-data-tester-rjkts" satisfied condition "Succeeded or Failed"
May  1 00:14:37.912: INFO: Pod pvc-snapshottable-data-tester-rjkts has the following logs: 
May  1 00:14:37.912: INFO: Deleting pod "pvc-snapshottable-data-tester-rjkts" in namespace "snapshotting-5719"
May  1 00:14:37.982: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-rjkts" to be fully deleted
STEP: creating a pvc from the snapshot
STEP: starting a pod to use the claim
May  1 00:15:04.306: INFO: Running '/usr/local/bin/kubectl --server=https://api-test-cluster-17373-k8-dj5e2d-1793600245.us-west-2.elb.amazonaws.com --kubeconfig=/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-17373.k8s.local.kubeconfig --namespace=snapshotting-5719 exec restored-pvc-tester-4fwj2 --namespace=snapshotting-5719 -- cat /mnt/test/data'
... skipping 64 lines ...

    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
SS
------------------------------
External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode 
  should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296

[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  1 00:15:51.539: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-17373.k8s.local.kubeconfig
STEP: Building a namespace api object, basename volumemode
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296
May  1 00:15:51.863: INFO: Creating resource for dynamic PV
May  1 00:15:51.863: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(ebs.csi.aws.com) supported size:{ 1Mi} 
STEP: creating a StorageClass volumemode-2775-e2e-scfwvtb
STEP: creating a claim
STEP: Creating pod
STEP: Waiting for the pod to fail
May  1 00:15:58.324: INFO: Deleting pod "pod-b989052e-9873-45db-aafe-593049840b6c" in namespace "volumemode-2775"
May  1 00:15:58.390: INFO: Wait up to 5m0s for pod "pod-b989052e-9873-45db-aafe-593049840b6c" to be fully deleted
STEP: Deleting pvc
May  1 00:16:06.648: INFO: Deleting PersistentVolumeClaim "ebs.csi.aws.commxp9d"
May  1 00:16:06.714: INFO: Waiting up to 5m0s for PersistentVolume pvc-55520613-db5e-477a-a6aa-57bcfbe0f310 to get deleted
May  1 00:16:06.778: INFO: PersistentVolume pvc-55520613-db5e-477a-a6aa-57bcfbe0f310 found and phase=Released (64.719134ms)
... skipping 8 lines ...

• [SLOW TEST:25.566 seconds]
External Storage [Driver: ebs.csi.aws.com]
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail to use a volume in a pod with mismatched mode [Slow]
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296
------------------------------
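
The mismatched-mode case that just passed provisions a claim in one volumeMode and then consumes it the other way, then waits for the pod to fail. The two consumption styles differ only in how the container references the volume; a hedged sketch with illustrative names:

// Illustrative sketch: the two ways a pod consumes a PVC, keyed to the claim's
// volumeMode. Using volumeDevices against a Filesystem-mode claim (or
// volumeMounts against a Block-mode claim) is the mismatch these cases expect
// to fail.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func consumers() (fsContainer, blockContainer corev1.Container) {
	fsContainer = corev1.Container{
		Name:  "fs-user",
		Image: "busybox",
		// Filesystem volumeMode: kubelet mounts the formatted volume at a path.
		VolumeMounts: []corev1.VolumeMount{{Name: "data", MountPath: "/mnt/data"}},
	}
	blockContainer = corev1.Container{
		Name:  "block-user",
		Image: "busybox",
		// Block volumeMode: the raw device is exposed inside the container.
		VolumeDevices: []corev1.VolumeDevice{{Name: "data", DevicePath: "/dev/xvda"}},
	}
	return fsContainer, blockContainer
}

func main() {
	fs, blk := consumers()
	fmt.Println(fs.VolumeMounts[0].MountPath, blk.VolumeDevices[0].DevicePath)
}
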
SSS
------------------------------
External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volumes 
  should store data
... skipping 139 lines ...
May  1 00:16:17.434: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(ebs.csi.aws.com) supported size:{ 1Gi} 
STEP: creating a StorageClass volume-expand-2628-e2e-sckk7nx
STEP: creating a claim
May  1 00:16:17.500: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Expanding non-expandable pvc
May  1 00:16:17.631: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
May  1 00:16:17.761: INFO: Error updating pvc ebs.csi.aws.comw9lnc: PersistentVolumeClaim "ebs.csi.aws.comw9lnc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-2628-e2e-sckk7nx",
  	... // 2 identical fields
  }

May  1 00:16:19.893: INFO: Error updating pvc ebs.csi.aws.comw9lnc: PersistentVolumeClaim "ebs.csi.aws.comw9lnc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-2628-e2e-sckk7nx",
  	... // 2 identical fields
  }

May  1 00:16:21.892: INFO: Error updating pvc ebs.csi.aws.comw9lnc: PersistentVolumeClaim "ebs.csi.aws.comw9lnc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-2628-e2e-sckk7nx",
  	... // 2 identical fields
  }

May  1 00:16:23.892: INFO: Error updating pvc ebs.csi.aws.comw9lnc: PersistentVolumeClaim "ebs.csi.aws.comw9lnc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-2628-e2e-sckk7nx",
  	... // 2 identical fields
  }

May  1 00:16:25.905: INFO: Error updating pvc ebs.csi.aws.comw9lnc: PersistentVolumeClaim "ebs.csi.aws.comw9lnc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-2628-e2e-sckk7nx",
  	... // 2 identical fields
  }

May  1 00:16:27.894: INFO: Error updating pvc ebs.csi.aws.comw9lnc: PersistentVolumeClaim "ebs.csi.aws.comw9lnc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-2628-e2e-sckk7nx",
  	... // 2 identical fields
  }

May  1 00:16:29.892: INFO: Error updating pvc ebs.csi.aws.comw9lnc: PersistentVolumeClaim "ebs.csi.aws.comw9lnc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-2628-e2e-sckk7nx",
  	... // 2 identical fields
  }

May  1 00:16:31.892: INFO: Error updating pvc ebs.csi.aws.comw9lnc: PersistentVolumeClaim "ebs.csi.aws.comw9lnc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-2628-e2e-sckk7nx",
  	... // 2 identical fields
  }

May  1 00:16:33.908: INFO: Error updating pvc ebs.csi.aws.comw9lnc: PersistentVolumeClaim "ebs.csi.aws.comw9lnc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-2628-e2e-sckk7nx",
  	... // 2 identical fields
  }

May  1 00:16:35.895: INFO: Error updating pvc ebs.csi.aws.comw9lnc: PersistentVolumeClaim "ebs.csi.aws.comw9lnc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-2628-e2e-sckk7nx",
  	... // 2 identical fields
  }

May  1 00:16:37.896: INFO: Error updating pvc ebs.csi.aws.comw9lnc: PersistentVolumeClaim "ebs.csi.aws.comw9lnc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-2628-e2e-sckk7nx",
  	... // 2 identical fields
  }

May  1 00:16:39.892: INFO: Error updating pvc ebs.csi.aws.comw9lnc: PersistentVolumeClaim "ebs.csi.aws.comw9lnc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-2628-e2e-sckk7nx",
  	... // 2 identical fields
  }

May  1 00:16:41.892: INFO: Error updating pvc ebs.csi.aws.comw9lnc: PersistentVolumeClaim "ebs.csi.aws.comw9lnc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-2628-e2e-sckk7nx",
  	... // 2 identical fields
  }

May  1 00:16:43.892: INFO: Error updating pvc ebs.csi.aws.comw9lnc: PersistentVolumeClaim "ebs.csi.aws.comw9lnc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-2628-e2e-sckk7nx",
  	... // 2 identical fields
  }

May  1 00:16:45.891: INFO: Error updating pvc ebs.csi.aws.comw9lnc: PersistentVolumeClaim "ebs.csi.aws.comw9lnc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-2628-e2e-sckk7nx",
  	... // 2 identical fields
  }

May  1 00:16:47.894: INFO: Error updating pvc ebs.csi.aws.comw9lnc: PersistentVolumeClaim "ebs.csi.aws.comw9lnc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-2628-e2e-sckk7nx",
  	... // 2 identical fields
  }

May  1 00:16:48.025: INFO: Error updating pvc ebs.csi.aws.comw9lnc: PersistentVolumeClaim "ebs.csi.aws.comw9lnc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 53 lines ...
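
This second "Expanding non-expandable pvc" sequence (claim ebs.csi.aws.comw9lnc) is rejected for the same reason as the earlier one. For a resize to be accepted, the claim must be bound and its StorageClass must opt in via allowVolumeExpansion; a minimal illustrative sketch of such a class (the name is made up):

// Illustrative sketch: a StorageClass that permits PVC expansion. Without
// AllowVolumeExpansion=true (and a bound claim), resize attempts like the
// ones in this log are rejected.
package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func expandableClass() *storagev1.StorageClass {
	allowExpansion := true
	bindingMode := storagev1.VolumeBindingWaitForFirstConsumer
	return &storagev1.StorageClass{
		ObjectMeta:           metav1.ObjectMeta{Name: "ebs-sc-expandable"},
		Provisioner:          "ebs.csi.aws.com",
		AllowVolumeExpansion: &allowExpansion,
		VolumeBindingMode:    &bindingMode,
	}
}

func main() {
	sc := expandableClass()
	fmt.Println(sc.Name, *sc.AllowVolumeExpansion)
}

With that flag set and a bound claim, the same resources.requests update is admitted and the external-resizer drives the CSI expansion.
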

S [SKIPPING] in Spec Setup (BeforeEach) [0.004 seconds]
External Storage [Driver: ebs.csi.aws.com]
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267

    Driver ebs.csi.aws.com doesn't support ntfs -- skipping

    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
... skipping 88 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: ebs.csi.aws.com]
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Pre-provisioned PV (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256

    Driver "ebs.csi.aws.com" does not support volume type "PreprovisionedPV" - skipping

    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
... skipping 258 lines ...
May  1 00:16:41.356: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(ebs.csi.aws.com) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-8983-e2e-sc8z96v
STEP: creating a claim
May  1 00:16:41.418: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-5ds4
STEP: Creating a pod to test subpath
May  1 00:16:41.602: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-5ds4" in namespace "provisioning-8983" to be "Succeeded or Failed"
May  1 00:16:41.663: INFO: Pod "pod-subpath-test-dynamicpv-5ds4": Phase="Pending", Reason="", readiness=false. Elapsed: 60.547154ms
May  1 00:16:43.724: INFO: Pod "pod-subpath-test-dynamicpv-5ds4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121612897s
May  1 00:16:45.785: INFO: Pod "pod-subpath-test-dynamicpv-5ds4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.183128558s
May  1 00:16:47.847: INFO: Pod "pod-subpath-test-dynamicpv-5ds4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.244630324s
May  1 00:16:49.909: INFO: Pod "pod-subpath-test-dynamicpv-5ds4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.306479294s
May  1 00:16:51.971: INFO: Pod "pod-subpath-test-dynamicpv-5ds4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.368611203s
May  1 00:16:54.036: INFO: Pod "pod-subpath-test-dynamicpv-5ds4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.433740094s
May  1 00:16:56.098: INFO: Pod "pod-subpath-test-dynamicpv-5ds4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.495672264s
May  1 00:16:58.160: INFO: Pod "pod-subpath-test-dynamicpv-5ds4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.55794283s
STEP: Saw pod success
May  1 00:16:58.160: INFO: Pod "pod-subpath-test-dynamicpv-5ds4" satisfied condition "Succeeded or Failed"
May  1 00:16:58.221: INFO: Trying to get logs from node ip-172-20-43-157.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-5ds4 container test-container-volume-dynamicpv-5ds4: <nil>
STEP: delete the pod
May  1 00:16:58.384: INFO: Waiting for pod pod-subpath-test-dynamicpv-5ds4 to disappear
May  1 00:16:58.444: INFO: Pod pod-subpath-test-dynamicpv-5ds4 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-5ds4
May  1 00:16:58.444: INFO: Deleting pod "pod-subpath-test-dynamicpv-5ds4" in namespace "provisioning-8983"
... skipping 272 lines ...
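
The long runs of 'Pod ... Phase="Pending" ... Elapsed: ...' lines throughout this log come from a simple phase-polling loop: check the pod every couple of seconds until it is Succeeded or Failed, or the stated timeout (5m0s / 15m0s) expires. A rough illustrative sketch of that pattern, not the framework's actual helper, using a fake clientset only so it compiles standalone:

// Illustrative sketch of the "Waiting up to ... to be 'Succeeded or Failed'"
// pattern: poll the pod phase until it reaches a terminal state or times out.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/fake"
)

func waitForPodTerminal(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil // the log's "Saw pod success"
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %s/%s failed", ns, name)
		default:
			return false, nil // still Pending/Running; keep polling
		}
	})
}

func main() {
	cs := fake.NewSimpleClientset() // fake client just to make the sketch runnable
	err := waitForPodTerminal(cs, "provisioning-8983", "pod-subpath-test-dynamicpv-5ds4", 5*time.Minute)
	fmt.Println("wait result:", err)
}
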

    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:255
------------------------------
SSS
------------------------------
External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath 
  should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278

[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  1 00:16:48.379: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-17373.k8s.local.kubeconfig
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278
May  1 00:16:48.700: INFO: Creating resource for dynamic PV
May  1 00:16:48.700: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(ebs.csi.aws.com) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-4058-e2e-scqw2vc
STEP: creating a claim
May  1 00:16:48.766: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-n6r4
STEP: Checking for subpath error in container status
May  1 00:17:11.115: INFO: Deleting pod "pod-subpath-test-dynamicpv-n6r4" in namespace "provisioning-4058"
May  1 00:17:11.181: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-n6r4" to be fully deleted
STEP: Deleting pod
May  1 00:17:15.312: INFO: Deleting pod "pod-subpath-test-dynamicpv-n6r4" in namespace "provisioning-4058"
STEP: Deleting pvc
May  1 00:17:15.506: INFO: Deleting PersistentVolumeClaim "ebs.csi.aws.comrrp54"
... skipping 15 lines ...

• [SLOW TEST:62.936 seconds]
External Storage [Driver: ebs.csi.aws.com]
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
... skipping 103 lines ...
May  1 00:17:46.869: INFO: In creating storage class object and pvc objects for driver - sc: &StorageClass{ObjectMeta:{provisioning-4132-e2e-scnhkdb      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Provisioner:ebs.csi.aws.com,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-4132    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-4132-e2e-scnhkdb,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}, src-pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-4132    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-4132-e2e-scnhkdb,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Creating a StorageClass
STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-4132    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-4132-e2e-scnhkdb,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: creating a pod referring to the class=&StorageClass{ObjectMeta:{provisioning-4132-e2e-scnhkdb    53de9862-9256-418f-b645-3641ad906b97 6889 0 2021-05-01 00:17:46 +0000 UTC <nil> <nil> map[] map[] [] []  [{e2e-kubernetes.test Update storage.k8s.io/v1 2021-05-01 00:17:46 +0000 UTC FieldsV1 {"f:mountOptions":{},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:ebs.csi.aws.com,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[dirsync],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},} claim=&PersistentVolumeClaim{ObjectMeta:{pvc-nkx58 pvc- provisioning-4132  3d914845-a9d8-4d28-86b7-ece5be57c1ae 6890 0 2021-05-01 00:17:47 +0000 UTC <nil> <nil> map[] map[] [] [kubernetes.io/pvc-protection]  [{e2e-kubernetes.test Update v1 2021-05-01 00:17:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}},"f:status":{"f:phase":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-4132-e2e-scnhkdb,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Deleting pod pod-b02853b1-3ee2-49d1-a113-e913bba65492 in namespace provisioning-4132
STEP: checking the created volume is writable on node {Name: Selector:map[] Affinity:nil}
May  1 00:18:09.600: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-writer-m8g49" in namespace "provisioning-4132" to be "Succeeded or Failed"
May  1 00:18:09.665: INFO: Pod "pvc-volume-tester-writer-m8g49": Phase="Pending", Reason="", readiness=false. Elapsed: 65.41215ms
May  1 00:18:11.732: INFO: Pod "pvc-volume-tester-writer-m8g49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.132018009s
STEP: Saw pod success
May  1 00:18:11.732: INFO: Pod "pvc-volume-tester-writer-m8g49" satisfied condition "Succeeded or Failed"
May  1 00:18:11.865: INFO: Pod pvc-volume-tester-writer-m8g49 has the following logs: 
May  1 00:18:11.866: INFO: Deleting pod "pvc-volume-tester-writer-m8g49" in namespace "provisioning-4132"
May  1 00:18:11.937: INFO: Wait up to 5m0s for pod "pvc-volume-tester-writer-m8g49" to be fully deleted
STEP: checking the created volume has the correct mount options, is readable and retains data on the same node "ip-172-20-43-157.us-west-2.compute.internal"
May  1 00:18:12.202: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-reader-p6dzm" in namespace "provisioning-4132" to be "Succeeded or Failed"
May  1 00:18:12.267: INFO: Pod "pvc-volume-tester-reader-p6dzm": Phase="Pending", Reason="", readiness=false. Elapsed: 65.454981ms
May  1 00:18:14.334: INFO: Pod "pvc-volume-tester-reader-p6dzm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13215274s
May  1 00:18:16.401: INFO: Pod "pvc-volume-tester-reader-p6dzm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.198873408s
STEP: Saw pod success
May  1 00:18:16.401: INFO: Pod "pvc-volume-tester-reader-p6dzm" satisfied condition "Succeeded or Failed"
May  1 00:18:16.534: INFO: Pod pvc-volume-tester-reader-p6dzm has the following logs: hello world

May  1 00:18:16.534: INFO: Deleting pod "pvc-volume-tester-reader-p6dzm" in namespace "provisioning-4132"
May  1 00:18:16.606: INFO: Wait up to 5m0s for pod "pvc-volume-tester-reader-p6dzm" to be fully deleted
May  1 00:18:16.671: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-nkx58] to have phase Bound
May  1 00:18:16.737: INFO: PersistentVolumeClaim pvc-nkx58 found and phase=Bound (65.73064ms)
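The trace above ends with the framework polling the claim until it reports phase Bound. A minimal Go sketch of that kind of wait loop using client-go (an illustration of the pattern, assuming a recent context-aware API; this is not the e2e framework's own helper):

package sketch

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPVCBound polls the named PersistentVolumeClaim until it reaches
// phase Bound or the timeout expires, logging each observed phase.
func waitForPVCBound(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("PersistentVolumeClaim %s found and phase=%s\n", name, pvc.Status.Phase)
		return pvc.Status.Phase == v1.ClaimBound, nil
	})
}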
... skipping 206 lines ...
May  1 00:18:10.743: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(ebs.csi.aws.com) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-7019-e2e-scslbhx
STEP: creating a claim
May  1 00:18:10.807: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-x7g7
STEP: Creating a pod to test subpath
May  1 00:18:10.997: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-x7g7" in namespace "provisioning-7019" to be "Succeeded or Failed"
May  1 00:18:11.058: INFO: Pod "pod-subpath-test-dynamicpv-x7g7": Phase="Pending", Reason="", readiness=false. Elapsed: 61.204738ms
May  1 00:18:13.121: INFO: Pod "pod-subpath-test-dynamicpv-x7g7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124345871s
May  1 00:18:15.184: INFO: Pod "pod-subpath-test-dynamicpv-x7g7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.187611233s
May  1 00:18:17.247: INFO: Pod "pod-subpath-test-dynamicpv-x7g7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.250063979s
May  1 00:18:19.314: INFO: Pod "pod-subpath-test-dynamicpv-x7g7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.316981321s
May  1 00:18:21.376: INFO: Pod "pod-subpath-test-dynamicpv-x7g7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.379524003s
May  1 00:18:23.439: INFO: Pod "pod-subpath-test-dynamicpv-x7g7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.442271683s
May  1 00:18:25.501: INFO: Pod "pod-subpath-test-dynamicpv-x7g7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.504421663s
May  1 00:18:27.563: INFO: Pod "pod-subpath-test-dynamicpv-x7g7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.566507542s
STEP: Saw pod success
May  1 00:18:27.563: INFO: Pod "pod-subpath-test-dynamicpv-x7g7" satisfied condition "Succeeded or Failed"
May  1 00:18:27.625: INFO: Trying to get logs from node ip-172-20-43-157.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-x7g7 container test-container-subpath-dynamicpv-x7g7: <nil>
STEP: delete the pod
May  1 00:18:27.758: INFO: Waiting for pod pod-subpath-test-dynamicpv-x7g7 to disappear
May  1 00:18:27.820: INFO: Pod pod-subpath-test-dynamicpv-x7g7 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-x7g7
May  1 00:18:27.820: INFO: Deleting pod "pod-subpath-test-dynamicpv-x7g7" in namespace "provisioning-7019"
STEP: Creating pod pod-subpath-test-dynamicpv-x7g7
STEP: Creating a pod to test subpath
May  1 00:18:27.947: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-x7g7" in namespace "provisioning-7019" to be "Succeeded or Failed"
May  1 00:18:28.011: INFO: Pod "pod-subpath-test-dynamicpv-x7g7": Phase="Pending", Reason="", readiness=false. Elapsed: 63.117874ms
May  1 00:18:30.074: INFO: Pod "pod-subpath-test-dynamicpv-x7g7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126571931s
May  1 00:18:32.136: INFO: Pod "pod-subpath-test-dynamicpv-x7g7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.188663125s
May  1 00:18:34.199: INFO: Pod "pod-subpath-test-dynamicpv-x7g7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.251618168s
STEP: Saw pod success
May  1 00:18:34.199: INFO: Pod "pod-subpath-test-dynamicpv-x7g7" satisfied condition "Succeeded or Failed"
May  1 00:18:34.261: INFO: Trying to get logs from node ip-172-20-43-157.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-x7g7 container test-container-subpath-dynamicpv-x7g7: <nil>
STEP: delete the pod
May  1 00:18:34.392: INFO: Waiting for pod pod-subpath-test-dynamicpv-x7g7 to disappear
May  1 00:18:34.453: INFO: Pod pod-subpath-test-dynamicpv-x7g7 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-x7g7
May  1 00:18:34.453: INFO: Deleting pod "pod-subpath-test-dynamicpv-x7g7" in namespace "provisioning-7019"
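The repeated Phase="Pending" lines above are the same polling pattern applied to a pod: wait until it reports Succeeded, and treat Failed as a hard error rather than polling out the timeout. A compact sketch of that loop (again an illustration, not the framework's own helper):

package sketch

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodSucceeded polls a pod until its phase is Succeeded; a Failed
// phase aborts the wait with an error instead of waiting for the timeout.
func waitForPodSucceeded(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case v1.PodSucceeded:
			return true, nil
		case v1.PodFailed:
			return false, fmt.Errorf("pod %s/%s entered phase Failed", ns, name)
		default:
			return false, nil
		}
	})
}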
... skipping 34 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: ebs.csi.aws.com]
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278

    Driver ebs.csi.aws.com doesn't support ntfs -- skipping

    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
... skipping 112 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: ebs.csi.aws.com]
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256

    Driver ebs.csi.aws.com doesn't support ntfs -- skipping

    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
... skipping 17 lines ...
May  1 00:19:10.768: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(ebs.csi.aws.com) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-2137-e2e-sc8v9n6
STEP: creating a claim
May  1 00:19:10.831: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-59w2
STEP: Creating a pod to test subpath
May  1 00:19:11.019: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-59w2" in namespace "provisioning-2137" to be "Succeeded or Failed"
May  1 00:19:11.080: INFO: Pod "pod-subpath-test-dynamicpv-59w2": Phase="Pending", Reason="", readiness=false. Elapsed: 61.347318ms
May  1 00:19:13.143: INFO: Pod "pod-subpath-test-dynamicpv-59w2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124378564s
May  1 00:19:15.206: INFO: Pod "pod-subpath-test-dynamicpv-59w2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.187584288s
May  1 00:19:17.268: INFO: Pod "pod-subpath-test-dynamicpv-59w2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.249289077s
May  1 00:19:19.331: INFO: Pod "pod-subpath-test-dynamicpv-59w2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.311863792s
May  1 00:19:21.393: INFO: Pod "pod-subpath-test-dynamicpv-59w2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.374591457s
May  1 00:19:23.456: INFO: Pod "pod-subpath-test-dynamicpv-59w2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.436959258s
May  1 00:19:25.518: INFO: Pod "pod-subpath-test-dynamicpv-59w2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.499557945s
May  1 00:19:27.581: INFO: Pod "pod-subpath-test-dynamicpv-59w2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.562343979s
STEP: Saw pod success
May  1 00:19:27.581: INFO: Pod "pod-subpath-test-dynamicpv-59w2" satisfied condition "Succeeded or Failed"
May  1 00:19:27.643: INFO: Trying to get logs from node ip-172-20-43-157.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-59w2 container test-container-subpath-dynamicpv-59w2: <nil>
STEP: delete the pod
May  1 00:19:27.789: INFO: Waiting for pod pod-subpath-test-dynamicpv-59w2 to disappear
May  1 00:19:27.852: INFO: Pod pod-subpath-test-dynamicpv-59w2 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-59w2
May  1 00:19:27.852: INFO: Deleting pod "pod-subpath-test-dynamicpv-59w2" in namespace "provisioning-2137"
... skipping 505 lines ...
May  1 00:19:43.933: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(ebs.csi.aws.com) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-1037-e2e-sczgt8x
STEP: creating a claim
May  1 00:19:43.996: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-xcg9
STEP: Creating a pod to test subpath
May  1 00:19:44.186: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-xcg9" in namespace "provisioning-1037" to be "Succeeded or Failed"
May  1 00:19:44.247: INFO: Pod "pod-subpath-test-dynamicpv-xcg9": Phase="Pending", Reason="", readiness=false. Elapsed: 61.043512ms
May  1 00:19:46.309: INFO: Pod "pod-subpath-test-dynamicpv-xcg9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123101955s
May  1 00:19:48.372: INFO: Pod "pod-subpath-test-dynamicpv-xcg9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.186487051s
May  1 00:19:50.435: INFO: Pod "pod-subpath-test-dynamicpv-xcg9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.249135344s
May  1 00:19:52.497: INFO: Pod "pod-subpath-test-dynamicpv-xcg9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.310944009s
May  1 00:19:54.559: INFO: Pod "pod-subpath-test-dynamicpv-xcg9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.37341852s
STEP: Saw pod success
May  1 00:19:54.559: INFO: Pod "pod-subpath-test-dynamicpv-xcg9" satisfied condition "Succeeded or Failed"
May  1 00:19:54.621: INFO: Trying to get logs from node ip-172-20-43-157.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-xcg9 container test-container-volume-dynamicpv-xcg9: <nil>
STEP: delete the pod
May  1 00:19:54.759: INFO: Waiting for pod pod-subpath-test-dynamicpv-xcg9 to disappear
May  1 00:19:54.821: INFO: Pod pod-subpath-test-dynamicpv-xcg9 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-xcg9
May  1 00:19:54.821: INFO: Deleting pod "pod-subpath-test-dynamicpv-xcg9" in namespace "provisioning-1037"
... skipping 85 lines ...
  [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should concurrently access the single read-only volume from pods on the same node
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:337
------------------------------
External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath 
  should fail if subpath directory is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240

[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  1 00:20:10.592: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-17373.k8s.local.kubeconfig
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if subpath directory is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240
May  1 00:20:10.899: INFO: Creating resource for dynamic PV
May  1 00:20:10.899: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(ebs.csi.aws.com) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-9857-e2e-sc8lphh
STEP: creating a claim
May  1 00:20:10.961: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-b7gc
STEP: Checking for subpath error in container status
May  1 00:20:27.275: INFO: Deleting pod "pod-subpath-test-dynamicpv-b7gc" in namespace "provisioning-9857"
May  1 00:20:27.337: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-b7gc" to be fully deleted
STEP: Deleting pod
May  1 00:20:35.460: INFO: Deleting pod "pod-subpath-test-dynamicpv-b7gc" in namespace "provisioning-9857"
STEP: Deleting pvc
May  1 00:20:35.645: INFO: Deleting PersistentVolumeClaim "ebs.csi.aws.comlz2j9"
... skipping 9 lines ...

• [SLOW TEST:30.428 seconds]
External Storage [Driver: ebs.csi.aws.com]
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/external/external.go:175
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath directory is outside the volume [Slow][LinuxOnly]
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240
------------------------------
External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volumes 
  should store data
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

... skipping 207 lines ...
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:200
------------------------------


Summarizing 1 Failure:

[Fail] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  [It] should check snapshot fields, check restore correctly works after modifying source data, check deletion 
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/util.go:602

Ran 57 of 485 Specs in 1001.900 seconds
FAIL! -- 56 Passed | 1 Failed | 0 Pending | 428 Skipped


Ginkgo ran 1 suite in 18m7.007403622s
Test Suite Failed
+ TEST_PASSED=1
+ set -e
+ set +x
###
## TEST_PASSED: 1
#
###
## Printing pod ebs-csi-controller-5fd589c646-cszp5 ebs-plugin container logs
#
I0501 00:02:56.952404       1 driver.go:72] Driver: ebs.csi.aws.com Version: v1.0.0
I0501 00:02:56.952483       1 controller.go:80] [Debug] Retrieving region from metadata service
W0501 00:03:00.202874       1 metadata.go:188] Failed to parse the outpost arn: 
I0501 00:03:00.203700       1 driver.go:142] Listening for connections on address: &net.UnixAddr{Name:"/var/lib/csi/sockets/pluginproxy/csi.sock", Net:"unix"}
I0501 00:03:01.008677       1 controller.go:360] ControllerGetCapabilities: called with args {XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:03:01.588119       1 controller.go:360] ControllerGetCapabilities: called with args {XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:03:01.588755       1 controller.go:360] ControllerGetCapabilities: called with args {XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:03:04.291234       1 controller.go:360] ControllerGetCapabilities: called with args {XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:03:06.270214       1 controller.go:360] ControllerGetCapabilities: called with args {XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
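The ebs-plugin log opens with a burst of ControllerGetCapabilities calls, typically the CSI sidecars probing which controller RPCs the service implements. A generic sketch of such a handler using the CSI Go bindings; the capability list here is illustrative, not a statement of what the v1.0.0 driver actually advertises:

package sketch

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

type controllerService struct{}

// ControllerGetCapabilities answers the sidecars' capability probe with the
// controller RPCs this service implements (illustrative selection).
func (d *controllerService) ControllerGetCapabilities(ctx context.Context, req *csi.ControllerGetCapabilitiesRequest) (*csi.ControllerGetCapabilitiesResponse, error) {
	rpcTypes := []csi.ControllerServiceCapability_RPC_Type{
		csi.ControllerServiceCapability_RPC_CREATE_DELETE_VOLUME,
		csi.ControllerServiceCapability_RPC_PUBLISH_UNPUBLISH_VOLUME,
		csi.ControllerServiceCapability_RPC_CREATE_DELETE_SNAPSHOT,
		csi.ControllerServiceCapability_RPC_EXPAND_VOLUME,
	}
	caps := make([]*csi.ControllerServiceCapability, 0, len(rpcTypes))
	for _, t := range rpcTypes {
		caps = append(caps, &csi.ControllerServiceCapability{
			Type: &csi.ControllerServiceCapability_Rpc{
				Rpc: &csi.ControllerServiceCapability_RPC{Type: t},
			},
		})
	}
	return &csi.ControllerGetCapabilitiesResponse{Capabilities: caps}, nil
}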
... skipping 138 lines ...
I0501 00:05:36.648336       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-0d16c50f811ceeeb9 attached to node i-0bb146f105471fb17 through device /dev/xvdbc
I0501 00:05:38.455370       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbb -> volume vol-02cf2c6d3564299d7
I0501 00:05:38.455401       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-02cf2c6d3564299d7 attached to node i-0bb146f105471fb17 through device /dev/xvdbb
I0501 00:05:40.411217       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-077425548893ba300 NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:05:40.874954       1 cloud.go:588] Waiting for volume "vol-077425548893ba300" state: actual=detaching, desired=detached
I0501 00:05:41.951255       1 cloud.go:588] Waiting for volume "vol-077425548893ba300" state: actual=detaching, desired=detached
W0501 00:05:43.597036       1 cloud.go:529] Ignoring error from describe volume for volume "vol-083ee975c861f126d"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I0501 00:05:43.834000       1 cloud.go:588] Waiting for volume "vol-077425548893ba300" state: actual=detaching, desired=detached
I0501 00:05:47.138475       1 controller.go:354] [Debug] ControllerUnpublishVolume: volume vol-077425548893ba300 detached from node i-0bb146f105471fb17
I0501 00:05:48.729607       1 controller.go:286] ControllerPublishVolume: called with args {VolumeId:vol-0dd1a0138c53cd965 NodeId:i-0bb146f105471fb17 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1619827380999-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:05:49.213816       1 cloud.go:415] [Debug] AttachVolume volume="vol-0dd1a0138c53cd965" instance="i-0bb146f105471fb17" request returned {
  AttachTime: 2021-05-01 00:05:49.203 +0000 UTC,
  Device: "/dev/xvdba",
... skipping 21 lines ...
  State: "attaching",
  VolumeId: "vol-04e22b10f1f4cebb6"
}
I0501 00:05:55.063920       1 cloud.go:588] Waiting for volume "vol-04e22b10f1f4cebb6" state: actual=attaching, desired=attached
I0501 00:05:56.218645       1 cloud.go:588] Waiting for volume "vol-04e22b10f1f4cebb6" state: actual=attaching, desired=attached
I0501 00:05:56.680713       1 controller.go:286] ControllerPublishVolume: called with args {VolumeId:vol-024be26c903962561 NodeId:i-0bb146f105471fb17 VolumeCapability:block:<> access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1619827380999-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
E0501 00:05:57.019800       1 manager.go:44] Error releasing device: release on device "/dev/xvdbc" assigned to different volume: "vol-0d16c50f811ceeeb9" vs "vol-024be26c903962561"
I0501 00:05:57.019831       1 controller.go:354] [Debug] ControllerUnpublishVolume: volume vol-0d16c50f811ceeeb9 detached from node i-0bb146f105471fb17
I0501 00:05:57.085819       1 cloud.go:588] Waiting for volume "vol-02cf2c6d3564299d7" state: actual=detaching, desired=detached
I0501 00:05:57.212742       1 cloud.go:415] [Debug] AttachVolume volume="vol-024be26c903962561" instance="i-0bb146f105471fb17" request returned {
  AttachTime: 2021-05-01 00:05:57.202 +0000 UTC,
  Device: "/dev/xvdbc",
  InstanceId: "i-0bb146f105471fb17",
... skipping 5 lines ...
I0501 00:05:58.436702       1 cloud.go:588] Waiting for volume "vol-024be26c903962561" state: actual=attaching, desired=attached
I0501 00:06:00.347084       1 cloud.go:588] Waiting for volume "vol-024be26c903962561" state: actual=attaching, desired=attached
I0501 00:06:01.459603       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbd -> volume vol-04e22b10f1f4cebb6
I0501 00:06:01.459636       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-04e22b10f1f4cebb6 attached to node i-0bb146f105471fb17 through device /dev/xvdbd
I0501 00:06:01.472889       1 controller.go:286] ControllerPublishVolume: called with args {VolumeId:vol-04e22b10f1f4cebb6 NodeId:i-0bb146f105471fb17 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1619827380999-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:06:01.733686       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-04e22b10f1f4cebb6 attached to node i-0bb146f105471fb17 through device /dev/xvdbd
W0501 00:06:02.493131       1 cloud.go:529] Ignoring error from describe volume for volume "vol-083ee975c861f126d"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I0501 00:06:02.981459       1 controller.go:354] [Debug] ControllerUnpublishVolume: volume vol-02cf2c6d3564299d7 detached from node i-0bb146f105471fb17
I0501 00:06:03.680221       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbc -> volume vol-024be26c903962561
I0501 00:06:03.680256       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-024be26c903962561 attached to node i-0bb146f105471fb17 through device /dev/xvdbc
I0501 00:06:03.687960       1 controller.go:286] ControllerPublishVolume: called with args {VolumeId:vol-024be26c903962561 NodeId:i-0bb146f105471fb17 VolumeCapability:block:<> access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1619827380999-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:06:03.935712       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-024be26c903962561 attached to node i-0bb146f105471fb17 through device /dev/xvdbc
I0501 00:06:10.435924       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-04e22b10f1f4cebb6 NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
... skipping 52 lines ...
  InstanceId: "i-0bb146f105471fb17",
  State: "attaching",
  VolumeId: "vol-09a1f07d810c92993"
}
I0501 00:06:35.141982       1 cloud.go:588] Waiting for volume "vol-09a1f07d810c92993" state: actual=attaching, desired=attached
I0501 00:06:36.237648       1 cloud.go:588] Waiting for volume "vol-09a1f07d810c92993" state: actual=attaching, desired=attached
W0501 00:06:36.505740       1 cloud.go:529] Ignoring error from describe volume for volume "vol-083ee975c861f126d"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I0501 00:06:38.142404       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbd -> volume vol-09a1f07d810c92993
I0501 00:06:38.142435       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-09a1f07d810c92993 attached to node i-0bb146f105471fb17 through device /dev/xvdbd
I0501 00:06:40.424852       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-0dd1a0138c53cd965 NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:06:40.990335       1 cloud.go:588] Waiting for volume "vol-0dd1a0138c53cd965" state: actual=detaching, desired=detached
I0501 00:06:42.074703       1 cloud.go:588] Waiting for volume "vol-0dd1a0138c53cd965" state: actual=detaching, desired=detached
I0501 00:06:44.301068       1 cloud.go:588] Waiting for volume "vol-0dd1a0138c53cd965" state: actual=detaching, desired=detached
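The cloud.go:588 lines above are the controller polling EC2 until a volume's attachment reaches the desired state. A minimal sketch of that loop with aws-sdk-go v1 (an illustration of the pattern, not the driver's cloud package):

package sketch

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/aws/aws-sdk-go/service/ec2/ec2iface"
)

// waitForAttachmentState polls DescribeVolumes until the volume's attachment
// state matches the desired value ("attached" or "detached").
func waitForAttachmentState(svc ec2iface.EC2API, volumeID, desired string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		out, err := svc.DescribeVolumes(&ec2.DescribeVolumesInput{
			VolumeIds: []*string{aws.String(volumeID)},
		})
		if err != nil {
			return err
		}
		// No attachment entry on the volume means it is fully detached.
		actual := "detached"
		if len(out.Volumes) > 0 && len(out.Volumes[0].Attachments) > 0 {
			actual = aws.StringValue(out.Volumes[0].Attachments[0].State)
		}
		if actual == desired {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("volume %s still %s, wanted %s", volumeID, actual, desired)
		}
		fmt.Printf("Waiting for volume %q state: actual=%s, desired=%s\n", volumeID, actual, desired)
		time.Sleep(1 * time.Second)
	}
}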
... skipping 99 lines ...
  InstanceId: "i-0bb146f105471fb17",
  State: "attaching",
  VolumeId: "vol-0e0f33949f3512cfd"
}
I0501 00:07:36.631674       1 cloud.go:588] Waiting for volume "vol-0e0f33949f3512cfd" state: actual=attaching, desired=attached
I0501 00:07:37.253275       1 controller.go:354] [Debug] ControllerUnpublishVolume: volume vol-06377a23494ddf564 detached from node i-0bb146f105471fb17
W0501 00:07:37.728105       1 cloud.go:529] Ignoring error from describe volume for volume "vol-083ee975c861f126d"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I0501 00:07:37.741566       1 cloud.go:588] Waiting for volume "vol-0e0f33949f3512cfd" state: actual=attaching, desired=attached
I0501 00:07:37.930321       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbc -> volume vol-09c4d731370e9d325
I0501 00:07:37.930351       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-09c4d731370e9d325 attached to node i-0bb146f105471fb17 through device /dev/xvdbc
I0501 00:07:37.936653       1 controller.go:286] ControllerPublishVolume: called with args {VolumeId:vol-09c4d731370e9d325 NodeId:i-0bb146f105471fb17 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1619827380999-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:07:38.216258       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-09c4d731370e9d325 attached to node i-0bb146f105471fb17 through device /dev/xvdbc
I0501 00:07:39.636422       1 cloud.go:588] Waiting for volume "vol-0e0f33949f3512cfd" state: actual=attaching, desired=attached
... skipping 29 lines ...
  AttachTime: 2021-05-01 00:08:04.739 +0000 UTC,
  Device: "/dev/xvdba",
  InstanceId: "i-0bb146f105471fb17",
  State: "attaching",
  VolumeId: "vol-0356ba99ff5b54822"
}
E0501 00:08:04.755421       1 manager.go:44] Error releasing device: release on device "/dev/xvdba" assigned to different volume: "vol-0afc9022e5627846f" vs "vol-0356ba99ff5b54822"
I0501 00:08:04.755490       1 controller.go:354] [Debug] ControllerUnpublishVolume: volume vol-0afc9022e5627846f detached from node i-0bb146f105471fb17
I0501 00:08:04.873790       1 cloud.go:588] Waiting for volume "vol-0356ba99ff5b54822" state: actual=attaching, desired=attached
I0501 00:08:05.961426       1 cloud.go:588] Waiting for volume "vol-0356ba99ff5b54822" state: actual=attaching, desired=attached
I0501 00:08:07.873602       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-0356ba99ff5b54822
I0501 00:08:07.873633       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-0356ba99ff5b54822 attached to node i-0bb146f105471fb17 through device /dev/xvdba
I0501 00:08:10.469649       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-09c4d731370e9d325 NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
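The "Releasing in-process attachment entry" messages, and the occasional "Error releasing device ... assigned to different volume" lines above, point at in-memory bookkeeping of which EBS device name is held by which volume. A bare-bones sketch of that idea (hypothetical types, not the driver's actual manager.go):

package sketch

import (
	"fmt"
	"sync"
)

type deviceManager struct {
	mu      sync.Mutex
	inUse   map[string]string // device name -> volume ID
	devices []string          // candidate device names, e.g. /dev/xvdba, /dev/xvdbb, ...
}

// assign reserves the first free device name for the volume.
func (m *deviceManager) assign(volumeID string) (string, error) {
	m.mu.Lock()
	defer m.mu.Unlock()
	for _, d := range m.devices {
		if _, busy := m.inUse[d]; !busy {
			m.inUse[d] = volumeID
			return d, nil
		}
	}
	return "", fmt.Errorf("no devices available for volume %s", volumeID)
}

// release frees a device, but refuses if it is now assigned to another volume.
func (m *deviceManager) release(device, volumeID string) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	if owner, ok := m.inUse[device]; ok && owner != volumeID {
		return fmt.Errorf("release on device %q assigned to different volume: %q vs %q", device, owner, volumeID)
	}
	delete(m.inUse, device)
	return nil
}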
... skipping 109 lines ...
I0501 00:09:23.955921       1 cloud.go:588] Waiting for volume "vol-0d3e1b8aea95cb842" state: actual=detaching, desired=detached
I0501 00:09:24.205164       1 cloud.go:588] Waiting for volume "vol-077c1ba362c5edb25" state: actual=detaching, desired=detached
I0501 00:09:27.346765       1 controller.go:354] [Debug] ControllerUnpublishVolume: volume vol-0d3e1b8aea95cb842 detached from node i-0bb146f105471fb17
I0501 00:09:27.356870       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-0d3e1b8aea95cb842 NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
W0501 00:09:27.456410       1 cloud.go:462] DetachDisk called on non-attached volume: vol-0d3e1b8aea95cb842
I0501 00:09:27.551229       1 cloud.go:588] Waiting for volume "vol-077c1ba362c5edb25" state: actual=detaching, desired=detached
W0501 00:09:27.928030       1 cloud.go:529] Ignoring error from describe volume for volume "vol-083ee975c861f126d"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I0501 00:09:30.592571       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-02774fa64daa6928c NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:09:30.598370       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-0b01f1d8783b80e54 NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:09:31.043832       1 cloud.go:588] Waiting for volume "vol-02774fa64daa6928c" state: actual=detaching, desired=detached
I0501 00:09:31.256587       1 cloud.go:588] Waiting for volume "vol-0b01f1d8783b80e54" state: actual=detaching, desired=detached
I0501 00:09:32.137036       1 cloud.go:588] Waiting for volume "vol-02774fa64daa6928c" state: actual=detaching, desired=detached
I0501 00:09:32.369643       1 cloud.go:588] Waiting for volume "vol-0b01f1d8783b80e54" state: actual=detaching, desired=detached
... skipping 308 lines ...
I0501 00:11:36.481633       1 cloud.go:588] Waiting for volume "vol-0120610707d6f5163" state: actual=attaching, desired=attached
I0501 00:11:37.274893       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-0d674488568bdaf40
I0501 00:11:37.274926       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-0d674488568bdaf40 attached to node i-0bb146f105471fb17 through device /dev/xvdba
I0501 00:11:37.587061       1 cloud.go:588] Waiting for volume "vol-0120610707d6f5163" state: actual=attaching, desired=attached
I0501 00:11:39.507709       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbb -> volume vol-0120610707d6f5163
I0501 00:11:39.507740       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-0120610707d6f5163 attached to node i-0bb146f105471fb17 through device /dev/xvdbb
W0501 00:11:43.892618       1 cloud.go:529] Ignoring error from describe volume for volume "vol-02d8303e7637fce6d"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I0501 00:11:50.715657       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-0453eb211fb3fea45 NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:11:50.728146       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-0d674488568bdaf40 NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:11:50.731149       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-0120610707d6f5163 NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:11:51.068159       1 cloud.go:588] Waiting for volume "vol-0453eb211fb3fea45" state: actual=detaching, desired=detached
I0501 00:11:51.083399       1 cloud.go:588] Waiting for volume "vol-0120610707d6f5163" state: actual=detaching, desired=detached
I0501 00:11:51.090066       1 cloud.go:588] Waiting for volume "vol-0d674488568bdaf40" state: actual=detaching, desired=detached
... skipping 15 lines ...
  State: "attaching",
  VolumeId: "vol-0fd6715ad49ac9248"
}
I0501 00:11:59.521515       1 cloud.go:588] Waiting for volume "vol-0fd6715ad49ac9248" state: actual=attaching, desired=attached
I0501 00:12:00.628327       1 cloud.go:588] Waiting for volume "vol-0fd6715ad49ac9248" state: actual=attaching, desired=attached
I0501 00:12:02.535552       1 cloud.go:588] Waiting for volume "vol-0fd6715ad49ac9248" state: actual=attaching, desired=attached
W0501 00:12:02.788627       1 cloud.go:529] Ignoring error from describe volume for volume "vol-02d8303e7637fce6d"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I0501 00:12:03.326408       1 cloud.go:588] Waiting for volume "vol-0120610707d6f5163" state: actual=detaching, desired=detached
I0501 00:12:03.365337       1 controller.go:354] [Debug] ControllerUnpublishVolume: volume vol-0d674488568bdaf40 detached from node i-0bb146f105471fb17
I0501 00:12:05.739846       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-0120610707d6f5163 NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
W0501 00:12:05.789825       1 cloud.go:462] DetachDisk called on non-attached volume: vol-0120610707d6f5163
I0501 00:12:05.883948       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbc -> volume vol-0fd6715ad49ac9248
I0501 00:12:05.883973       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-0fd6715ad49ac9248 attached to node i-0bb146f105471fb17 through device /dev/xvdbc
... skipping 12 lines ...
I0501 00:12:09.073480       1 controller.go:414] ControllerExpandVolume: called with args {VolumeId:vol-0fd6715ad49ac9248 CapacityRange:required_bytes:2147483648  Secrets:map[] VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:12:09.226266       1 cloud.go:1038] expanding volume "vol-0fd6715ad49ac9248" to size 2
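In the expansion above, required_bytes:2147483648 becomes "size 2" because the requested byte count is rounded up to whole GiB before the EC2 modify call. A tiny sketch of that conversion (function name is illustrative):

package main

import "fmt"

const giB = int64(1024 * 1024 * 1024)

// roundUpGiB converts a byte count to GiB, rounding any remainder up.
func roundUpGiB(bytes int64) int64 {
	return (bytes + giB - 1) / giB
}

func main() {
	fmt.Println(roundUpGiB(2147483648)) // prints 2
}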
I0501 00:12:10.631924       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-069b260eaed1775f0
I0501 00:12:10.631954       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-069b260eaed1775f0 attached to node i-0bb146f105471fb17 through device /dev/xvdba
I0501 00:12:10.637344       1 controller.go:286] ControllerPublishVolume: called with args {VolumeId:vol-069b260eaed1775f0 NodeId:i-0bb146f105471fb17 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1619827380999-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:12:10.900998       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-069b260eaed1775f0 attached to node i-0bb146f105471fb17 through device /dev/xvdba
W0501 00:12:13.824305       1 cloud.go:529] Ignoring error from describe volume for volume "vol-0120610707d6f5163"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I0501 00:12:14.461352       1 controller.go:286] ControllerPublishVolume: called with args {VolumeId:vol-0477d66cd72e288e2 NodeId:i-0bb146f105471fb17 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1619827380999-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:12:15.060848       1 controller.go:286] ControllerPublishVolume: called with args {VolumeId:vol-07752b6ed65c5279f NodeId:i-0bb146f105471fb17 VolumeCapability:block:<> access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1619827380999-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:12:15.106745       1 cloud.go:415] [Debug] AttachVolume volume="vol-0477d66cd72e288e2" instance="i-0bb146f105471fb17" request returned {
  AttachTime: 2021-05-01 00:12:15.078 +0000 UTC,
  Device: "/dev/xvdbb",
  InstanceId: "i-0bb146f105471fb17",
... skipping 80 lines ...
I0501 00:12:31.252877       1 cloud.go:588] Waiting for volume "vol-0477d66cd72e288e2" state: actual=detaching, desired=detached
I0501 00:12:31.673775       1 controller.go:484] CreateSnapshot: called with args source_volume_id:"vol-069b260eaed1775f0" name:"snapshot-cbc11a63-a313-4a5d-9c68-f34990712f1b" 
I0501 00:12:31.709645       1 controller.go:503] Snapshot snapshot-cbc11a63-a313-4a5d-9c68-f34990712f1b of volume vol-069b260eaed1775f0 already exists; nothing to do
I0501 00:12:32.272486       1 controller.go:484] CreateSnapshot: called with args source_volume_id:"vol-069b260eaed1775f0" name:"snapshot-cbc11a63-a313-4a5d-9c68-f34990712f1b" 
I0501 00:12:32.311224       1 controller.go:503] Snapshot snapshot-cbc11a63-a313-4a5d-9c68-f34990712f1b of volume vol-069b260eaed1775f0 already exists; nothing to do
I0501 00:12:32.331697       1 cloud.go:588] Waiting for volume "vol-0477d66cd72e288e2" state: actual=detaching, desired=detached
W0501 00:12:32.720600       1 cloud.go:529] Ignoring error from describe volume for volume "vol-0120610707d6f5163"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I0501 00:12:32.885563       1 controller.go:484] CreateSnapshot: called with args source_volume_id:"vol-069b260eaed1775f0" name:"snapshot-cbc11a63-a313-4a5d-9c68-f34990712f1b" 
I0501 00:12:32.915876       1 controller.go:503] Snapshot snapshot-cbc11a63-a313-4a5d-9c68-f34990712f1b of volume vol-069b260eaed1775f0 already exists; nothing to do
I0501 00:12:33.473551       1 controller.go:484] CreateSnapshot: called with args source_volume_id:"vol-069b260eaed1775f0" name:"snapshot-cbc11a63-a313-4a5d-9c68-f34990712f1b" 
I0501 00:12:33.506340       1 controller.go:503] Snapshot snapshot-cbc11a63-a313-4a5d-9c68-f34990712f1b of volume vol-069b260eaed1775f0 already exists; nothing to do
I0501 00:12:34.072357       1 controller.go:484] CreateSnapshot: called with args source_volume_id:"vol-069b260eaed1775f0" name:"snapshot-cbc11a63-a313-4a5d-9c68-f34990712f1b" 
I0501 00:12:34.110802       1 controller.go:503] Snapshot snapshot-cbc11a63-a313-4a5d-9c68-f34990712f1b of volume vol-069b260eaed1775f0 already exists; nothing to do
... skipping 6 lines ...
I0501 00:12:35.272195       1 controller.go:484] CreateSnapshot: called with args source_volume_id:"vol-069b260eaed1775f0" name:"snapshot-cbc11a63-a313-4a5d-9c68-f34990712f1b" 
I0501 00:12:35.308939       1 controller.go:503] Snapshot snapshot-cbc11a63-a313-4a5d-9c68-f34990712f1b of volume vol-069b260eaed1775f0 already exists; nothing to do
I0501 00:12:35.873312       1 controller.go:484] CreateSnapshot: called with args source_volume_id:"vol-069b260eaed1775f0" name:"snapshot-cbc11a63-a313-4a5d-9c68-f34990712f1b" 
I0501 00:12:35.913873       1 controller.go:503] Snapshot snapshot-cbc11a63-a313-4a5d-9c68-f34990712f1b of volume vol-069b260eaed1775f0 already exists; nothing to do
I0501 00:12:36.472047       1 controller.go:484] CreateSnapshot: called with args source_volume_id:"vol-069b260eaed1775f0" name:"snapshot-cbc11a63-a313-4a5d-9c68-f34990712f1b" 
I0501 00:12:36.507852       1 controller.go:503] Snapshot snapshot-cbc11a63-a313-4a5d-9c68-f34990712f1b of volume vol-069b260eaed1775f0 already exists; nothing to do
W0501 00:12:36.801308       1 cloud.go:529] Ignoring error from describe volume for volume "vol-02d8303e7637fce6d"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I0501 00:12:37.071878       1 controller.go:484] CreateSnapshot: called with args source_volume_id:"vol-069b260eaed1775f0" name:"snapshot-cbc11a63-a313-4a5d-9c68-f34990712f1b" 
I0501 00:12:37.105622       1 controller.go:503] Snapshot snapshot-cbc11a63-a313-4a5d-9c68-f34990712f1b of volume vol-069b260eaed1775f0 already exists; nothing to do
I0501 00:12:37.560551       1 controller.go:354] [Debug] ControllerUnpublishVolume: volume vol-0477d66cd72e288e2 detached from node i-0bb146f105471fb17
I0501 00:12:37.655300       1 controller.go:286] ControllerPublishVolume: called with args {VolumeId:vol-0477d66cd72e288e2 NodeId:i-0bb146f105471fb17 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1619827380999-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:12:37.671259       1 controller.go:484] CreateSnapshot: called with args source_volume_id:"vol-069b260eaed1775f0" name:"snapshot-cbc11a63-a313-4a5d-9c68-f34990712f1b" 
I0501 00:12:37.720934       1 controller.go:503] Snapshot snapshot-cbc11a63-a313-4a5d-9c68-f34990712f1b of volume vol-069b260eaed1775f0 already exists; nothing to do
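The long run of "already exists; nothing to do" responses shows CreateSnapshot being handled idempotently: the controller looks the snapshot up by its CSI name and only creates it when missing, so the snapshotter sidecar's retries are harmless. A sketch of that shape with hypothetical lookup helpers (not the driver's cloud API):

package sketch

import (
	"context"
	"errors"
)

type snapshot struct {
	ID             string
	SourceVolumeID string
}

var errNotFound = errors.New("snapshot not found")

// Hypothetical stand-ins for calls to the cloud provider.
func getSnapshotByName(ctx context.Context, name string) (*snapshot, error) { return nil, errNotFound }
func createSnapshot(ctx context.Context, volumeID, name string) (*snapshot, error) {
	return &snapshot{ID: "snap-123", SourceVolumeID: volumeID}, nil
}

// ensureSnapshot returns the existing snapshot for (volumeID, name) if one
// already exists, otherwise creates it; repeated calls are therefore no-ops.
func ensureSnapshot(ctx context.Context, volumeID, name string) (*snapshot, error) {
	if snap, err := getSnapshotByName(ctx, name); err == nil {
		if snap.SourceVolumeID != volumeID {
			return nil, errors.New("snapshot name already used for a different source volume")
		}
		return snap, nil // already exists; nothing to do
	} else if !errors.Is(err, errNotFound) {
		return nil, err
	}
	return createSnapshot(ctx, volumeID, name)
}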
... skipping 42 lines ...
  State: "attaching",
  VolumeId: "vol-0be85935ec20084ec"
}
I0501 00:12:46.071915       1 controller.go:484] CreateSnapshot: called with args source_volume_id:"vol-069b260eaed1775f0" name:"snapshot-cbc11a63-a313-4a5d-9c68-f34990712f1b" 
I0501 00:12:46.080786       1 cloud.go:588] Waiting for volume "vol-0be85935ec20084ec" state: actual=attaching, desired=attached
I0501 00:12:46.116573       1 controller.go:503] Snapshot snapshot-cbc11a63-a313-4a5d-9c68-f34990712f1b of volume vol-069b260eaed1775f0 already exists; nothing to do
W0501 00:12:46.287633       1 cloud.go:529] Ignoring error from describe volume for volume "vol-083ee975c861f126d"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I0501 00:12:46.671072       1 controller.go:484] CreateSnapshot: called with args source_volume_id:"vol-069b260eaed1775f0" name:"snapshot-cbc11a63-a313-4a5d-9c68-f34990712f1b" 
I0501 00:12:46.707530       1 controller.go:503] Snapshot snapshot-cbc11a63-a313-4a5d-9c68-f34990712f1b of volume vol-069b260eaed1775f0 already exists; nothing to do
I0501 00:12:47.160132       1 cloud.go:588] Waiting for volume "vol-0be85935ec20084ec" state: actual=attaching, desired=attached
I0501 00:12:47.272227       1 controller.go:484] CreateSnapshot: called with args source_volume_id:"vol-069b260eaed1775f0" name:"snapshot-cbc11a63-a313-4a5d-9c68-f34990712f1b" 
I0501 00:12:47.310880       1 controller.go:503] Snapshot snapshot-cbc11a63-a313-4a5d-9c68-f34990712f1b of volume vol-069b260eaed1775f0 already exists; nothing to do
I0501 00:12:47.871649       1 controller.go:484] CreateSnapshot: called with args source_volume_id:"vol-069b260eaed1775f0" name:"snapshot-cbc11a63-a313-4a5d-9c68-f34990712f1b" 
... skipping 33 lines ...
I0501 00:12:57.395893       1 cloud.go:588] Waiting for volume "vol-069b260eaed1775f0" state: actual=attaching, desired=attached
I0501 00:12:57.463325       1 controller.go:354] [Debug] ControllerUnpublishVolume: volume vol-0477d66cd72e288e2 detached from node i-0bb146f105471fb17
I0501 00:12:57.476088       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-0477d66cd72e288e2 NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
W0501 00:12:57.519625       1 cloud.go:462] DetachDisk called on non-attached volume: vol-0477d66cd72e288e2
I0501 00:12:59.338437       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbd -> volume vol-069b260eaed1775f0
I0501 00:12:59.338468       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-069b260eaed1775f0 attached to node i-0bb146f105471fb17 through device /dev/xvdbd
W0501 00:13:06.733209       1 cloud.go:529] Ignoring error from describe volume for volume "vol-0120610707d6f5163"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I0501 00:13:08.889624       1 controller.go:286] ControllerPublishVolume: called with args {VolumeId:vol-0ab888a31985d3179 NodeId:i-0bb146f105471fb17 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1619827380999-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:13:08.891277       1 controller.go:286] ControllerPublishVolume: called with args {VolumeId:vol-01408ce586d6f4d5d NodeId:i-0bb146f105471fb17 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1619827380999-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:13:09.464180       1 cloud.go:415] [Debug] AttachVolume volume="vol-0ab888a31985d3179" instance="i-0bb146f105471fb17" request returned {
  AttachTime: 2021-05-01 00:13:09.453 +0000 UTC,
  Device: "/dev/xvdba",
  InstanceId: "i-0bb146f105471fb17",
... skipping 51 lines ...
I0501 00:13:34.263621       1 cloud.go:588] Waiting for volume "vol-01408ce586d6f4d5d" state: actual=detaching, desired=detached
I0501 00:13:37.583183       1 controller.go:354] [Debug] ControllerUnpublishVolume: volume vol-01408ce586d6f4d5d detached from node i-0bb146f105471fb17
I0501 00:13:37.604390       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-01408ce586d6f4d5d NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:13:37.618318       1 cloud.go:588] Waiting for volume "vol-0ab888a31985d3179" state: actual=detaching, desired=detached
W0501 00:13:37.653853       1 cloud.go:462] DetachDisk called on non-attached volume: vol-01408ce586d6f4d5d
I0501 00:13:37.763199       1 controller.go:286] ControllerPublishVolume: called with args {VolumeId:vol-01408ce586d6f4d5d NodeId:i-0bb146f105471fb17 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1619827380999-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
W0501 00:13:38.023589       1 cloud.go:529] Ignoring error from describe volume for volume "vol-02d8303e7637fce6d"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I0501 00:13:38.236874       1 cloud.go:415] [Debug] AttachVolume volume="vol-01408ce586d6f4d5d" instance="i-0bb146f105471fb17" request returned {
  AttachTime: 2021-05-01 00:13:38.226 +0000 UTC,
  Device: "/dev/xvdbc",
  InstanceId: "i-0bb146f105471fb17",
  State: "attaching",
  VolumeId: "vol-01408ce586d6f4d5d"
... skipping 107 lines ...
I0501 00:14:06.297718       1 controller.go:503] Snapshot snapshot-7389cc42-2406-453c-a899-b1b9a8e8558b of volume vol-0431c19b07f2b5910 already exists; nothing to do
I0501 00:14:06.865294       1 controller.go:484] CreateSnapshot: called with args source_volume_id:"vol-0431c19b07f2b5910" name:"snapshot-7389cc42-2406-453c-a899-b1b9a8e8558b" 
I0501 00:14:06.895200       1 controller.go:503] Snapshot snapshot-7389cc42-2406-453c-a899-b1b9a8e8558b of volume vol-0431c19b07f2b5910 already exists; nothing to do
I0501 00:14:07.465218       1 controller.go:484] CreateSnapshot: called with args source_volume_id:"vol-0431c19b07f2b5910" name:"snapshot-7389cc42-2406-453c-a899-b1b9a8e8558b" 
I0501 00:14:07.502487       1 controller.go:503] Snapshot snapshot-7389cc42-2406-453c-a899-b1b9a8e8558b of volume vol-0431c19b07f2b5910 already exists; nothing to do
I0501 00:14:07.634810       1 controller.go:354] [Debug] ControllerUnpublishVolume: volume vol-03c0c58842c64ed53 detached from node i-0bb146f105471fb17
W0501 00:14:07.955576       1 cloud.go:529] Ignoring error from describe volume for volume "vol-0120610707d6f5163"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I0501 00:14:08.065295       1 controller.go:484] CreateSnapshot: called with args source_volume_id:"vol-0431c19b07f2b5910" name:"snapshot-7389cc42-2406-453c-a899-b1b9a8e8558b" 
I0501 00:14:08.102784       1 controller.go:503] Snapshot snapshot-7389cc42-2406-453c-a899-b1b9a8e8558b of volume vol-0431c19b07f2b5910 already exists; nothing to do
I0501 00:14:08.665975       1 controller.go:484] CreateSnapshot: called with args source_volume_id:"vol-0431c19b07f2b5910" name:"snapshot-7389cc42-2406-453c-a899-b1b9a8e8558b" 
I0501 00:14:08.701453       1 controller.go:503] Snapshot snapshot-7389cc42-2406-453c-a899-b1b9a8e8558b of volume vol-0431c19b07f2b5910 already exists; nothing to do
I0501 00:14:09.272658       1 controller.go:484] CreateSnapshot: called with args source_volume_id:"vol-0431c19b07f2b5910" name:"snapshot-7389cc42-2406-453c-a899-b1b9a8e8558b" 
I0501 00:14:09.310172       1 controller.go:503] Snapshot snapshot-7389cc42-2406-453c-a899-b1b9a8e8558b of volume vol-0431c19b07f2b5910 already exists; nothing to do
... skipping 119 lines ...
I0501 00:14:30.169470       1 cloud.go:588] Waiting for volume "vol-0142e4536a9aaab5b" state: actual=attaching, desired=attached
I0501 00:14:31.147363       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-0431c19b07f2b5910
I0501 00:14:31.147393       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-0431c19b07f2b5910 attached to node i-0bb146f105471fb17 through device /dev/xvdba
I0501 00:14:31.245176       1 cloud.go:588] Waiting for volume "vol-0142e4536a9aaab5b" state: actual=attaching, desired=attached
I0501 00:14:33.167591       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbb -> volume vol-0142e4536a9aaab5b
I0501 00:14:33.167624       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-0142e4536a9aaab5b attached to node i-0bb146f105471fb17 through device /dev/xvdbb
W0501 00:14:34.207149       1 cloud.go:529] Ignoring error from describe volume for volume "vol-0ab888a31985d3179"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I0501 00:14:40.785396       1 controller.go:286] ControllerPublishVolume: called with args {VolumeId:vol-07b187ca552d58e9f NodeId:i-0bb146f105471fb17 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1619827380999-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:14:40.786685       1 controller.go:286] ControllerPublishVolume: called with args {VolumeId:vol-057288b0195698f7b NodeId:i-0bb146f105471fb17 VolumeCapability:block:<> access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1619827380999-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:14:40.981415       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-0aba1e65730b3ba25 NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:14:40.984697       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-072665c0a3769c2eb NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:14:40.989344       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-0431c19b07f2b5910 NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:14:41.259639       1 cloud.go:415] [Debug] AttachVolume volume="vol-07b187ca552d58e9f" instance="i-0bb146f105471fb17" request returned {
... skipping 42 lines ...
I0501 00:14:48.096360       1 cloud.go:588] Waiting for volume "vol-0431c19b07f2b5910" state: actual=detaching, desired=detached
I0501 00:14:48.869051       1 cloud.go:588] Waiting for volume "vol-086513f43f86930cd" state: actual=attaching, desired=attached
I0501 00:14:50.927080       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-0142e4536a9aaab5b NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:14:51.403172       1 cloud.go:588] Waiting for volume "vol-0142e4536a9aaab5b" state: actual=detaching, desired=detached
I0501 00:14:52.225974       1 cloud.go:588] Waiting for volume "vol-086513f43f86930cd" state: actual=attaching, desired=attached
I0501 00:14:52.482678       1 cloud.go:588] Waiting for volume "vol-0142e4536a9aaab5b" state: actual=detaching, desired=detached
W0501 00:14:53.103174       1 cloud.go:529] Ignoring error from describe volume for volume "vol-0ab888a31985d3179"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I0501 00:14:53.575774       1 controller.go:354] [Debug] ControllerUnpublishVolume: volume vol-0aba1e65730b3ba25 detached from node i-0bb146f105471fb17
I0501 00:14:53.586867       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-0aba1e65730b3ba25 NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:14:53.591495       1 controller.go:354] [Debug] ControllerUnpublishVolume: volume vol-072665c0a3769c2eb detached from node i-0bb146f105471fb17
I0501 00:14:53.637202       1 controller.go:286] ControllerPublishVolume: called with args {VolumeId:vol-072665c0a3769c2eb NodeId:i-0bb146f105471fb17 VolumeCapability:block:<> access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1619827380999-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
W0501 00:14:53.679735       1 cloud.go:462] DetachDisk called on non-attached volume: vol-0aba1e65730b3ba25
I0501 00:14:53.854348       1 controller.go:286] ControllerPublishVolume: called with args {VolumeId:vol-0aba1e65730b3ba25 NodeId:i-0bb146f105471fb17 VolumeCapability:block:<> access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1619827380999-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
... skipping 27 lines ...
I0501 00:14:58.151910       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-086513f43f86930cd attached to node i-0bb146f105471fb17 through device /dev/xvdbg
I0501 00:15:00.592262       1 cloud.go:588] Waiting for volume "vol-072665c0a3769c2eb" state: actual=attaching, desired=attached
I0501 00:15:00.874846       1 cloud.go:588] Waiting for volume "vol-0aba1e65730b3ba25" state: actual=attaching, desired=attached
I0501 00:15:03.594624       1 controller.go:354] [Debug] ControllerUnpublishVolume: volume vol-0142e4536a9aaab5b detached from node i-0bb146f105471fb17
I0501 00:15:03.608615       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-0142e4536a9aaab5b NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
W0501 00:15:03.663678       1 cloud.go:462] DetachDisk called on non-attached volume: vol-0142e4536a9aaab5b
W0501 00:15:04.515883       1 cloud.go:529] Ignoring error from describe volume for volume "vol-0431c19b07f2b5910"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I0501 00:15:05.121611       1 controller.go:532] DeleteSnapshot: called with args snapshot_id:"snap-0a94679e333706f94" 
I0501 00:15:06.540621       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbd -> volume vol-072665c0a3769c2eb
I0501 00:15:06.540653       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-072665c0a3769c2eb attached to node i-0bb146f105471fb17 through device /dev/xvdbd
I0501 00:15:06.549099       1 controller.go:286] ControllerPublishVolume: called with args {VolumeId:vol-072665c0a3769c2eb NodeId:i-0bb146f105471fb17 VolumeCapability:block:<> access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1619827380999-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:15:06.858455       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbe -> volume vol-0aba1e65730b3ba25
I0501 00:15:06.858481       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-0aba1e65730b3ba25 attached to node i-0bb146f105471fb17 through device /dev/xvdbe
... skipping 28 lines ...
I0501 00:15:22.282824       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbb -> volume vol-07909263cb03714b3
I0501 00:15:22.282853       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-07909263cb03714b3 attached to node i-0bb146f105471fb17 through device /dev/xvdbb
I0501 00:15:22.288156       1 controller.go:286] ControllerPublishVolume: called with args {VolumeId:vol-07909263cb03714b3 NodeId:i-0bb146f105471fb17 VolumeCapability:block:<> access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1619827380999-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:15:22.466775       1 cloud.go:588] Waiting for volume "vol-057288b0195698f7b" state: actual=detaching, desired=detached
I0501 00:15:22.525147       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-07909263cb03714b3 attached to node i-0bb146f105471fb17 through device /dev/xvdbb
I0501 00:15:22.596151       1 cloud.go:588] Waiting for volume "vol-07b187ca552d58e9f" state: actual=detaching, desired=detached
W0501 00:15:23.411892       1 cloud.go:529] Ignoring error from describe volume for volume "vol-0431c19b07f2b5910"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I0501 00:15:24.338604       1 cloud.go:588] Waiting for volume "vol-057288b0195698f7b" state: actual=detaching, desired=detached
I0501 00:15:24.474095       1 cloud.go:588] Waiting for volume "vol-07b187ca552d58e9f" state: actual=detaching, desired=detached
W0501 00:15:27.115798       1 cloud.go:529] Ignoring error from describe volume for volume "vol-0ab888a31985d3179"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I0501 00:15:27.647924       1 controller.go:354] [Debug] ControllerUnpublishVolume: volume vol-057288b0195698f7b detached from node i-0bb146f105471fb17
I0501 00:15:27.665022       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-057288b0195698f7b NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
W0501 00:15:27.720604       1 cloud.go:462] DetachDisk called on non-attached volume: vol-057288b0195698f7b
I0501 00:15:27.817286       1 cloud.go:588] Waiting for volume "vol-07b187ca552d58e9f" state: actual=detaching, desired=detached
W0501 00:15:28.223502       1 cloud.go:529] Ignoring error from describe volume for volume "vol-02d8303e7637fce6d"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I0501 00:15:31.006708       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-0aba1e65730b3ba25 NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:15:31.009210       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-072665c0a3769c2eb NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:15:31.407911       1 cloud.go:588] Waiting for volume "vol-072665c0a3769c2eb" state: actual=detaching, desired=detached
I0501 00:15:31.555106       1 cloud.go:588] Waiting for volume "vol-0aba1e65730b3ba25" state: actual=detaching, desired=detached
I0501 00:15:32.489061       1 cloud.go:588] Waiting for volume "vol-072665c0a3769c2eb" state: actual=detaching, desired=detached
I0501 00:15:32.639005       1 cloud.go:588] Waiting for volume "vol-0aba1e65730b3ba25" state: actual=detaching, desired=detached
... skipping 78 lines ...
  InstanceId: "i-0bb146f105471fb17",
  State: "attaching",
  VolumeId: "vol-0dbe707a6330f285a"
}
I0501 00:15:56.783363       1 cloud.go:588] Waiting for volume "vol-0ded04262bf60cbdd" state: actual=attaching, desired=attached
I0501 00:15:56.841043       1 cloud.go:588] Waiting for volume "vol-0dbe707a6330f285a" state: actual=attaching, desired=attached
W0501 00:15:57.424514       1 cloud.go:529] Ignoring error from describe volume for volume "vol-0431c19b07f2b5910"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I0501 00:15:57.962544       1 cloud.go:588] Waiting for volume "vol-0dbe707a6330f285a" state: actual=attaching, desired=attached
W0501 00:15:58.155522       1 cloud.go:529] Ignoring error from describe volume for volume "vol-0120610707d6f5163"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I0501 00:15:58.540454       1 cloud.go:588] Waiting for volume "vol-0f9bf3857291cd9da" state: actual=attaching, desired=attached
I0501 00:15:58.672890       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbb -> volume vol-0ded04262bf60cbdd
I0501 00:15:58.672917       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-0ded04262bf60cbdd attached to node i-0bb146f105471fb17 through device /dev/xvdbb
I0501 00:15:58.768624       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbc -> volume vol-0cdd3edf3a5146044
I0501 00:15:58.768652       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-0cdd3edf3a5146044 attached to node i-0bb146f105471fb17 through device /dev/xvdbc
I0501 00:15:58.774172       1 controller.go:286] ControllerPublishVolume: called with args {VolumeId:vol-0cdd3edf3a5146044 NodeId:i-0bb146f105471fb17 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1619827380999-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
... skipping 4 lines ...
I0501 00:16:01.893054       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbd -> volume vol-0f9bf3857291cd9da
I0501 00:16:01.893089       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-0f9bf3857291cd9da attached to node i-0bb146f105471fb17 through device /dev/xvdbd
I0501 00:16:03.188213       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbe -> volume vol-0dbe707a6330f285a
I0501 00:16:03.188245       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-0dbe707a6330f285a attached to node i-0bb146f105471fb17 through device /dev/xvdbe
I0501 00:16:03.427750       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-0dbe707a6330f285a NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:16:03.777600       1 cloud.go:588] Waiting for volume "vol-0dbe707a6330f285a" state: actual=detaching, desired=detached
W0501 00:16:04.419131       1 cloud.go:529] Ignoring error from describe volume for volume "vol-07909263cb03714b3"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I0501 00:16:04.858205       1 cloud.go:588] Waiting for volume "vol-0dbe707a6330f285a" state: actual=detaching, desired=detached
I0501 00:16:06.740270       1 cloud.go:588] Waiting for volume "vol-0dbe707a6330f285a" state: actual=detaching, desired=detached
I0501 00:16:10.059057       1 controller.go:354] [Debug] ControllerUnpublishVolume: volume vol-0dbe707a6330f285a detached from node i-0bb146f105471fb17
I0501 00:16:11.053434       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-0cdd3edf3a5146044 NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:16:11.536777       1 cloud.go:588] Waiting for volume "vol-0cdd3edf3a5146044" state: actual=attached, desired=detached
I0501 00:16:12.628255       1 cloud.go:588] Waiting for volume "vol-0cdd3edf3a5146044" state: actual=detaching, desired=detached
... skipping 10 lines ...
  VolumeId: "vol-0cdd3edf3a5146044"
}
I0501 00:16:18.611393       1 cloud.go:588] Waiting for volume "vol-0cdd3edf3a5146044" state: actual=attaching, desired=attached
I0501 00:16:19.734761       1 cloud.go:588] Waiting for volume "vol-0cdd3edf3a5146044" state: actual=attaching, desired=attached
I0501 00:16:21.647357       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbc -> volume vol-0cdd3edf3a5146044
I0501 00:16:21.647387       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-0cdd3edf3a5146044 attached to node i-0bb146f105471fb17 through device /dev/xvdbc
W0501 00:16:23.315253       1 cloud.go:529] Ignoring error from describe volume for volume "vol-07909263cb03714b3"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
W0501 00:16:28.338205       1 cloud.go:529] Ignoring error from describe volume for volume "vol-0ab888a31985d3179"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I0501 00:16:31.115278       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-0cdd3edf3a5146044 NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:16:31.473641       1 cloud.go:588] Waiting for volume "vol-0cdd3edf3a5146044" state: actual=attached, desired=detached
I0501 00:16:32.680594       1 cloud.go:588] Waiting for volume "vol-0cdd3edf3a5146044" state: actual=detaching, desired=detached
I0501 00:16:34.570672       1 cloud.go:588] Waiting for volume "vol-0cdd3edf3a5146044" state: actual=detaching, desired=detached
I0501 00:16:37.867079       1 controller.go:354] [Debug] ControllerUnpublishVolume: volume vol-0cdd3edf3a5146044 detached from node i-0bb146f105471fb17
I0501 00:16:41.076261       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-0f9bf3857291cd9da NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
... skipping 37 lines ...
  Device: "/dev/xvdbd",
  InstanceId: "i-0bb146f105471fb17",
  State: "attaching",
  VolumeId: "vol-066bb609019b89c68"
}
I0501 00:16:53.698975       1 cloud.go:588] Waiting for volume "vol-066bb609019b89c68" state: actual=attaching, desired=attached
E0501 00:16:53.796938       1 manager.go:44] Error releasing device: release on device "/dev/xvdbb" assigned to different volume: "vol-0ded04262bf60cbdd" vs "vol-07909263cb03714b3"
I0501 00:16:53.796964       1 controller.go:354] [Debug] ControllerUnpublishVolume: volume vol-0ded04262bf60cbdd detached from node i-0bb146f105471fb17
I0501 00:16:54.106280       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbb -> volume vol-07909263cb03714b3
I0501 00:16:54.106347       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-07909263cb03714b3 attached to node i-0bb146f105471fb17 through device /dev/xvdbb
I0501 00:16:54.773389       1 cloud.go:588] Waiting for volume "vol-066bb609019b89c68" state: actual=attaching, desired=attached
I0501 00:16:56.678930       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbd -> volume vol-066bb609019b89c68
I0501 00:16:56.678965       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-066bb609019b89c68 attached to node i-0bb146f105471fb17 through device /dev/xvdbd
W0501 00:16:57.327826       1 cloud.go:529] Ignoring error from describe volume for volume "vol-07909263cb03714b3"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
W0501 00:16:58.647088       1 cloud.go:529] Ignoring error from describe volume for volume "vol-0431c19b07f2b5910"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I0501 00:17:01.108644       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-0c552932006b6bd0c NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:17:01.695809       1 cloud.go:588] Waiting for volume "vol-0c552932006b6bd0c" state: actual=detaching, desired=detached
I0501 00:17:02.779976       1 cloud.go:588] Waiting for volume "vol-0c552932006b6bd0c" state: actual=detaching, desired=detached
I0501 00:17:04.656932       1 cloud.go:588] Waiting for volume "vol-0c552932006b6bd0c" state: actual=detaching, desired=detached
I0501 00:17:07.964334       1 controller.go:354] [Debug] ControllerUnpublishVolume: volume vol-0c552932006b6bd0c detached from node i-0bb146f105471fb17
I0501 00:17:07.974790       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-0c552932006b6bd0c NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
... skipping 63 lines ...
I0501 00:17:41.540846       1 cloud.go:588] Waiting for volume "vol-048312e6ec91302fd" state: actual=detaching, desired=detached
I0501 00:17:41.541040       1 cloud.go:588] Waiting for volume "vol-09a50a2ccf8c89799" state: actual=detaching, desired=detached
I0501 00:17:41.542611       1 cloud.go:588] Waiting for volume "vol-0e286fc7052af6dc8" state: actual=detaching, desired=detached
I0501 00:17:42.620728       1 cloud.go:588] Waiting for volume "vol-09a50a2ccf8c89799" state: actual=detaching, desired=detached
I0501 00:17:42.643676       1 cloud.go:588] Waiting for volume "vol-0e286fc7052af6dc8" state: actual=detaching, desired=detached
I0501 00:17:42.746700       1 cloud.go:588] Waiting for volume "vol-048312e6ec91302fd" state: actual=detaching, desired=detached
W0501 00:17:44.385177       1 cloud.go:529] Ignoring error from describe volume for volume "vol-07909263cb03714b3"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I0501 00:17:44.485793       1 cloud.go:588] Waiting for volume "vol-09a50a2ccf8c89799" state: actual=detaching, desired=detached
I0501 00:17:44.512941       1 cloud.go:588] Waiting for volume "vol-0e286fc7052af6dc8" state: actual=detaching, desired=detached
I0501 00:17:44.613644       1 cloud.go:588] Waiting for volume "vol-048312e6ec91302fd" state: actual=detaching, desired=detached
I0501 00:17:47.852334       1 controller.go:354] [Debug] ControllerUnpublishVolume: volume vol-09a50a2ccf8c89799 detached from node i-0bb146f105471fb17
I0501 00:17:47.871854       1 cloud.go:588] Waiting for volume "vol-0e286fc7052af6dc8" state: actual=detaching, desired=detached
I0501 00:17:47.882039       1 controller.go:286] ControllerPublishVolume: called with args {VolumeId:vol-09a50a2ccf8c89799 NodeId:i-0bb146f105471fb17 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1619827380999-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
... skipping 36 lines ...
  VolumeId: "vol-08b3c8c57eb784667"
}
I0501 00:17:56.625546       1 cloud.go:588] Waiting for volume "vol-08b3c8c57eb784667" state: actual=attaching, desired=attached
I0501 00:17:57.705973       1 cloud.go:588] Waiting for volume "vol-08b3c8c57eb784667" state: actual=attaching, desired=attached
I0501 00:17:58.232630       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbb -> volume vol-05446de23d619ae00
I0501 00:17:58.232660       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-05446de23d619ae00 attached to node i-0bb146f105471fb17 through device /dev/xvdbb
W0501 00:17:58.550153       1 cloud.go:529] Ignoring error from describe volume for volume "vol-07909263cb03714b3"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I0501 00:17:59.605165       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbc -> volume vol-08b3c8c57eb784667
I0501 00:17:59.605192       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-08b3c8c57eb784667 attached to node i-0bb146f105471fb17 through device /dev/xvdbc
I0501 00:17:59.610994       1 controller.go:286] ControllerPublishVolume: called with args {VolumeId:vol-08b3c8c57eb784667 NodeId:i-0bb146f105471fb17 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1619827380999-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:17:59.889847       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-08b3c8c57eb784667 attached to node i-0bb146f105471fb17 through device /dev/xvdbc
W0501 00:18:03.281216       1 cloud.go:529] Ignoring error from describe volume for volume "vol-07909263cb03714b3"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
W0501 00:18:04.315660       1 cloud.go:529] Ignoring error from describe volume for volume "vol-0e286fc7052af6dc8"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I0501 00:18:15.081541       1 controller.go:286] ControllerPublishVolume: called with args {VolumeId:vol-0772f592c3b7b84f6 NodeId:i-0bb146f105471fb17 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1619827380999-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:18:15.619566       1 cloud.go:415] [Debug] AttachVolume volume="vol-0772f592c3b7b84f6" instance="i-0bb146f105471fb17" request returned {
  AttachTime: 2021-05-01 00:18:15.607 +0000 UTC,
  Device: "/dev/xvdbd",
  InstanceId: "i-0bb146f105471fb17",
  State: "attaching",
  VolumeId: "vol-0772f592c3b7b84f6"
}
I0501 00:18:15.780171       1 cloud.go:588] Waiting for volume "vol-0772f592c3b7b84f6" state: actual=attaching, desired=attached
I0501 00:18:16.849752       1 cloud.go:588] Waiting for volume "vol-0772f592c3b7b84f6" state: actual=attaching, desired=attached
W0501 00:18:18.538176       1 cloud.go:529] Ignoring error from describe volume for volume "vol-0ab888a31985d3179"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I0501 00:18:18.758787       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbd -> volume vol-0772f592c3b7b84f6
I0501 00:18:18.758814       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-0772f592c3b7b84f6 attached to node i-0bb146f105471fb17 through device /dev/xvdbd
I0501 00:18:18.769748       1 controller.go:286] ControllerPublishVolume: called with args {VolumeId:vol-0772f592c3b7b84f6 NodeId:i-0bb146f105471fb17 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1619827380999-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:18:19.031170       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-0772f592c3b7b84f6 attached to node i-0bb146f105471fb17 through device /dev/xvdbd
I0501 00:18:21.210067       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-08b3c8c57eb784667 NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:18:21.220140       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-05446de23d619ae00 NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:18:21.623266       1 cloud.go:588] Waiting for volume "vol-08b3c8c57eb784667" state: actual=detaching, desired=detached
I0501 00:18:21.651554       1 cloud.go:588] Waiting for volume "vol-05446de23d619ae00" state: actual=detaching, desired=detached
I0501 00:18:22.704660       1 cloud.go:588] Waiting for volume "vol-08b3c8c57eb784667" state: actual=detaching, desired=detached
I0501 00:18:22.736746       1 cloud.go:588] Waiting for volume "vol-05446de23d619ae00" state: actual=detaching, desired=detached
W0501 00:18:23.211835       1 cloud.go:529] Ignoring error from describe volume for volume "vol-0e286fc7052af6dc8"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I0501 00:18:24.592471       1 cloud.go:588] Waiting for volume "vol-08b3c8c57eb784667" state: actual=detaching, desired=detached
I0501 00:18:24.698168       1 cloud.go:588] Waiting for volume "vol-05446de23d619ae00" state: actual=detaching, desired=detached
I0501 00:18:27.916388       1 controller.go:354] [Debug] ControllerUnpublishVolume: volume vol-08b3c8c57eb784667 detached from node i-0bb146f105471fb17
I0501 00:18:27.930307       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-08b3c8c57eb784667 NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
W0501 00:18:27.986347       1 cloud.go:462] DetachDisk called on non-attached volume: vol-08b3c8c57eb784667
I0501 00:18:28.057784       1 cloud.go:588] Waiting for volume "vol-05446de23d619ae00" state: actual=detaching, desired=detached
... skipping 9 lines ...
I0501 00:18:29.850230       1 cloud.go:588] Waiting for volume "vol-08b3c8c57eb784667" state: actual=attaching, desired=attached
I0501 00:18:31.772198       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbc -> volume vol-08b3c8c57eb784667
I0501 00:18:31.772232       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-08b3c8c57eb784667 attached to node i-0bb146f105471fb17 through device /dev/xvdbc
I0501 00:18:33.955731       1 controller.go:354] [Debug] ControllerUnpublishVolume: volume vol-05446de23d619ae00 detached from node i-0bb146f105471fb17
I0501 00:18:33.969149       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-05446de23d619ae00 NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
W0501 00:18:34.014255       1 cloud.go:462] DetachDisk called on non-attached volume: vol-05446de23d619ae00
W0501 00:18:37.293881       1 cloud.go:529] Ignoring error from describe volume for volume "vol-07909263cb03714b3"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I0501 00:18:41.175488       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-08b3c8c57eb784667 NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:18:41.184826       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-0772f592c3b7b84f6 NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:18:41.562964       1 cloud.go:588] Waiting for volume "vol-08b3c8c57eb784667" state: actual=detaching, desired=detached
I0501 00:18:41.657114       1 cloud.go:588] Waiting for volume "vol-0772f592c3b7b84f6" state: actual=detaching, desired=detached
I0501 00:18:42.654492       1 cloud.go:588] Waiting for volume "vol-08b3c8c57eb784667" state: actual=detaching, desired=detached
I0501 00:18:42.736930       1 cloud.go:588] Waiting for volume "vol-0772f592c3b7b84f6" state: actual=detaching, desired=detached
W0501 00:18:43.334739       1 cloud.go:529] Ignoring error from describe volume for volume "vol-083ee975c861f126d"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I0501 00:18:44.584480       1 cloud.go:588] Waiting for volume "vol-08b3c8c57eb784667" state: actual=detaching, desired=detached
I0501 00:18:44.669688       1 cloud.go:588] Waiting for volume "vol-0772f592c3b7b84f6" state: actual=detaching, desired=detached
W0501 00:18:46.583175       1 cloud.go:529] Ignoring error from describe volume for volume "vol-02d8303e7637fce6d"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I0501 00:18:47.216281       1 controller.go:286] ControllerPublishVolume: called with args {VolumeId:vol-01366979ca3875f10 NodeId:i-0bb146f105471fb17 VolumeCapability:block:<> access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1619827380999-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:18:47.740445       1 cloud.go:415] [Debug] AttachVolume volume="vol-01366979ca3875f10" instance="i-0bb146f105471fb17" request returned {
  AttachTime: 2021-05-01 00:18:47.728 +0000 UTC,
  Device: "/dev/xvdbb",
  InstanceId: "i-0bb146f105471fb17",
  State: "attaching",
  VolumeId: "vol-01366979ca3875f10"
}
I0501 00:18:47.870265       1 cloud.go:588] Waiting for volume "vol-01366979ca3875f10" state: actual=attaching, desired=attached
I0501 00:18:47.894378       1 controller.go:354] [Debug] ControllerUnpublishVolume: volume vol-08b3c8c57eb784667 detached from node i-0bb146f105471fb17
I0501 00:18:47.994126       1 cloud.go:588] Waiting for volume "vol-0772f592c3b7b84f6" state: actual=detaching, desired=detached
W0501 00:18:48.847205       1 cloud.go:529] Ignoring error from describe volume for volume "vol-0431c19b07f2b5910"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I0501 00:18:48.948590       1 cloud.go:588] Waiting for volume "vol-01366979ca3875f10" state: actual=attaching, desired=attached
I0501 00:18:50.614041       1 controller.go:286] ControllerPublishVolume: called with args {VolumeId:vol-0e286fc7052af6dc8 NodeId:i-0bb146f105471fb17 VolumeCapability:block:<> access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1619827380999-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:18:50.865431       1 cloud.go:588] Waiting for volume "vol-01366979ca3875f10" state: actual=attaching, desired=attached
I0501 00:18:51.117036       1 cloud.go:415] [Debug] AttachVolume volume="vol-0e286fc7052af6dc8" instance="i-0bb146f105471fb17" request returned {
  AttachTime: 2021-05-01 00:18:51.105 +0000 UTC,
  Device: "/dev/xvdbc",
... skipping 5 lines ...
I0501 00:18:52.326820       1 cloud.go:588] Waiting for volume "vol-0e286fc7052af6dc8" state: actual=attaching, desired=attached
I0501 00:18:53.928933       1 controller.go:354] [Debug] ControllerUnpublishVolume: volume vol-0772f592c3b7b84f6 detached from node i-0bb146f105471fb17
I0501 00:18:54.218691       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbb -> volume vol-01366979ca3875f10
I0501 00:18:54.218796       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-01366979ca3875f10 attached to node i-0bb146f105471fb17 through device /dev/xvdbb
I0501 00:18:54.219524       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbc -> volume vol-0e286fc7052af6dc8
I0501 00:18:54.219735       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-0e286fc7052af6dc8 attached to node i-0bb146f105471fb17 through device /dev/xvdbc
W0501 00:18:57.224431       1 cloud.go:529] Ignoring error from describe volume for volume "vol-0e286fc7052af6dc8"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I0501 00:18:59.952389       1 controller.go:286] ControllerPublishVolume: called with args {VolumeId:vol-08d4fb6f207e5b30a NodeId:i-0bb146f105471fb17 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1619827380999-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:19:00.430011       1 cloud.go:415] [Debug] AttachVolume volume="vol-08d4fb6f207e5b30a" instance="i-0bb146f105471fb17" request returned {
  AttachTime: 2021-05-01 00:19:00.421 +0000 UTC,
  Device: "/dev/xvdbd",
  InstanceId: "i-0bb146f105471fb17",
  State: "attaching",
... skipping 9 lines ...
  Device: "/dev/xvdbe",
  InstanceId: "i-0bb146f105471fb17",
  State: "attaching",
  VolumeId: "vol-039866d220de5cbbe"
}
I0501 00:19:15.861233       1 cloud.go:588] Waiting for volume "vol-039866d220de5cbbe" state: actual=attaching, desired=attached
W0501 00:19:16.515224       1 cloud.go:529] Ignoring error from describe volume for volume "vol-0120610707d6f5163"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I0501 00:19:16.949227       1 cloud.go:588] Waiting for volume "vol-039866d220de5cbbe" state: actual=attaching, desired=attached
I0501 00:19:18.861565       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbe -> volume vol-039866d220de5cbbe
I0501 00:19:18.861595       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-039866d220de5cbbe attached to node i-0bb146f105471fb17 through device /dev/xvdbe
I0501 00:19:18.872295       1 controller.go:286] ControllerPublishVolume: called with args {VolumeId:vol-039866d220de5cbbe NodeId:i-0bb146f105471fb17 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1619827380999-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:19:19.113182       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-039866d220de5cbbe attached to node i-0bb146f105471fb17 through device /dev/xvdbe
I0501 00:19:21.322030       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-0e286fc7052af6dc8 NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
... skipping 19 lines ...
I0501 00:19:33.996602       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-01366979ca3875f10 NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
W0501 00:19:34.040410       1 cloud.go:462] DetachDisk called on non-attached volume: vol-01366979ca3875f10
I0501 00:19:34.905673       1 cloud.go:588] Waiting for volume "vol-039866d220de5cbbe" state: actual=detaching, desired=detached
I0501 00:19:36.329252       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-0e286fc7052af6dc8 NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
W0501 00:19:36.377562       1 cloud.go:462] DetachDisk called on non-attached volume: vol-0e286fc7052af6dc8
I0501 00:19:38.218396       1 controller.go:354] [Debug] ControllerUnpublishVolume: volume vol-039866d220de5cbbe detached from node i-0bb146f105471fb17
W0501 00:19:38.516205       1 cloud.go:529] Ignoring error from describe volume for volume "vol-07909263cb03714b3"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I0501 00:19:41.304301       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-08d4fb6f207e5b30a NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:19:41.700290       1 cloud.go:588] Waiting for volume "vol-08d4fb6f207e5b30a" state: actual=detaching, desired=detached
I0501 00:19:42.783645       1 cloud.go:588] Waiting for volume "vol-08d4fb6f207e5b30a" state: actual=detaching, desired=detached
W0501 00:19:44.492157       1 cloud.go:529] Ignoring error from describe volume for volume "vol-0e286fc7052af6dc8"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I0501 00:19:44.670835       1 cloud.go:588] Waiting for volume "vol-08d4fb6f207e5b30a" state: actual=detaching, desired=detached
I0501 00:19:44.910370       1 controller.go:286] ControllerPublishVolume: called with args {VolumeId:vol-01205156d388185d2 NodeId:i-0bb146f105471fb17 VolumeCapability:block:<> access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1619827380999-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:19:45.358209       1 cloud.go:415] [Debug] AttachVolume volume="vol-01205156d388185d2" instance="i-0bb146f105471fb17" request returned {
  AttachTime: 2021-05-01 00:19:45.349 +0000 UTC,
  Device: "/dev/xvdba",
  InstanceId: "i-0bb146f105471fb17",
... skipping 11 lines ...
  AttachTime: 2021-05-01 00:19:48.703 +0000 UTC,
  Device: "/dev/xvdbb",
  InstanceId: "i-0bb146f105471fb17",
  State: "attaching",
  VolumeId: "vol-0cc82de24b62188bf"
}
W0501 00:19:48.750200       1 cloud.go:529] Ignoring error from describe volume for volume "vol-07909263cb03714b3"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I0501 00:19:48.796592       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-01205156d388185d2 attached to node i-0bb146f105471fb17 through device /dev/xvdba
I0501 00:19:48.818413       1 cloud.go:588] Waiting for volume "vol-0cc82de24b62188bf" state: actual=attaching, desired=attached
I0501 00:19:49.903968       1 cloud.go:588] Waiting for volume "vol-0cc82de24b62188bf" state: actual=attaching, desired=attached
I0501 00:19:50.850811       1 controller.go:286] ControllerPublishVolume: called with args {VolumeId:vol-0f377776b66f1b33f NodeId:i-0bb146f105471fb17 VolumeCapability:block:<> access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1619827380999-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:19:51.481744       1 cloud.go:415] [Debug] AttachVolume volume="vol-0f377776b66f1b33f" instance="i-0bb146f105471fb17" request returned {
  AttachTime: 2021-05-01 00:19:51.471 +0000 UTC,
... skipping 17 lines ...
  InstanceId: "i-0bb146f105471fb17",
  State: "attaching",
  VolumeId: "vol-03a15daf7f9949fe8"
}
I0501 00:19:57.083950       1 cloud.go:588] Waiting for volume "vol-03a15daf7f9949fe8" state: actual=attaching, desired=attached
I0501 00:19:58.178459       1 cloud.go:588] Waiting for volume "vol-03a15daf7f9949fe8" state: actual=attaching, desired=attached
W0501 00:19:58.446638       1 cloud.go:529] Ignoring error from describe volume for volume "vol-0e286fc7052af6dc8"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I0501 00:20:00.134910       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbd -> volume vol-03a15daf7f9949fe8
I0501 00:20:00.134941       1 controller.go:330] [Debug] ControllerPublishVolume: volume vol-03a15daf7f9949fe8 attached to node i-0bb146f105471fb17 through device /dev/xvdbd
I0501 00:20:01.329898       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-01205156d388185d2 NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:20:01.348492       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-0cc82de24b62188bf NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:20:01.728151       1 cloud.go:588] Waiting for volume "vol-0cc82de24b62188bf" state: actual=detaching, desired=detached
I0501 00:20:01.877547       1 cloud.go:588] Waiting for volume "vol-01205156d388185d2" state: actual=detaching, desired=detached
I0501 00:20:02.807456       1 cloud.go:588] Waiting for volume "vol-0cc82de24b62188bf" state: actual=detaching, desired=detached
I0501 00:20:02.961355       1 cloud.go:588] Waiting for volume "vol-01205156d388185d2" state: actual=detaching, desired=detached
W0501 00:20:03.388175       1 cloud.go:529] Ignoring error from describe volume for volume "vol-0e286fc7052af6dc8"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I0501 00:20:04.719371       1 cloud.go:588] Waiting for volume "vol-0cc82de24b62188bf" state: actual=detaching, desired=detached
I0501 00:20:04.831160       1 cloud.go:588] Waiting for volume "vol-01205156d388185d2" state: actual=detaching, desired=detached
I0501 00:20:08.020997       1 controller.go:354] [Debug] ControllerUnpublishVolume: volume vol-0cc82de24b62188bf detached from node i-0bb146f105471fb17
I0501 00:20:08.190989       1 cloud.go:588] Waiting for volume "vol-01205156d388185d2" state: actual=detaching, desired=detached
I0501 00:20:14.095800       1 controller.go:354] [Debug] ControllerUnpublishVolume: volume vol-01205156d388185d2 detached from node i-0bb146f105471fb17
I0501 00:20:14.107822       1 controller.go:337] ControllerUnpublishVolume: called with args {VolumeId:vol-01205156d388185d2 NodeId:i-0bb146f105471fb17 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
... skipping 81 lines ...
I0501 00:20:35.604115       1 controller.go:484] CreateSnapshot: called with args source_volume_id:"vol-03a15daf7f9949fe8" name:"snapshot-9f29dce7-55ad-459d-a166-7403ccc6b866" 
I0501 00:20:35.703428       1 controller.go:503] Snapshot snapshot-9f29dce7-55ad-459d-a166-7403ccc6b866 of volume vol-03a15daf7f9949fe8 already exists; nothing to do
I0501 00:20:36.204064       1 controller.go:484] CreateSnapshot: called with args source_volume_id:"vol-03a15daf7f9949fe8" name:"snapshot-9f29dce7-55ad-459d-a166-7403ccc6b866" 
I0501 00:20:36.239998       1 controller.go:503] Snapshot snapshot-9f29dce7-55ad-459d-a166-7403ccc6b866 of volume vol-03a15daf7f9949fe8 already exists; nothing to do
I0501 00:20:36.804484       1 controller.go:484] CreateSnapshot: called with args source_volume_id:"vol-03a15daf7f9949fe8" name:"snapshot-9f29dce7-55ad-459d-a166-7403ccc6b866" 
I0501 00:20:36.838940       1 controller.go:503] Snapshot snapshot-9f29dce7-55ad-459d-a166-7403ccc6b866 of volume vol-03a15daf7f9949fe8 already exists; nothing to do
W0501 00:20:37.400829       1 cloud.go:529] Ignoring error from describe volume for volume "vol-0e286fc7052af6dc8"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I0501 00:20:37.404268       1 controller.go:484] CreateSnapshot: called with args source_volume_id:"vol-03a15daf7f9949fe8" name:"snapshot-9f29dce7-55ad-459d-a166-7403ccc6b866" 
I0501 00:20:37.438741       1 controller.go:503] Snapshot snapshot-9f29dce7-55ad-459d-a166-7403ccc6b866 of volume vol-03a15daf7f9949fe8 already exists; nothing to do
I0501 00:20:38.010910       1 controller.go:484] CreateSnapshot: called with args source_volume_id:"vol-03a15daf7f9949fe8" name:"snapshot-9f29dce7-55ad-459d-a166-7403ccc6b866" 
I0501 00:20:38.048660       1 controller.go:503] Snapshot snapshot-9f29dce7-55ad-459d-a166-7403ccc6b866 of volume vol-03a15daf7f9949fe8 already exists; nothing to do
I0501 00:20:38.110481       1 cloud.go:588] Waiting for volume "vol-01205156d388185d2" state: actual=detaching, desired=detached
I0501 00:20:38.113414       1 controller.go:354] [Debug] ControllerUnpublishVolume: volume vol-025a06fb07aeb7e90 detached from node i-0bb146f105471fb17
... skipping 30 lines ...
I0501 00:21:20.634598       1 controller.go:532] DeleteSnapshot: called with args snapshot_id:"snap-06c80aabb9ceedb29" 
###
## Printing pod ebs-csi-controller-5fd589c646-xs5nn ebs-plugin container logs
#
I0501 00:02:57.037862       1 driver.go:72] Driver: ebs.csi.aws.com Version: v1.0.0
I0501 00:02:57.037914       1 controller.go:80] [Debug] Retrieving region from metadata service
W0501 00:03:00.345890       1 metadata.go:188] Failed to parse the outpost arn: 
I0501 00:03:00.346210       1 driver.go:142] Listening for connections on address: &net.UnixAddr{Name:"/var/lib/csi/sockets/pluginproxy/csi.sock", Net:"unix"}
I0501 00:03:00.998865       1 controller.go:360] ControllerGetCapabilities: called with args {XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:03:02.787143       1 controller.go:360] ControllerGetCapabilities: called with args {XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:03:02.787895       1 controller.go:360] ControllerGetCapabilities: called with args {XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:03:04.814106       1 controller.go:360] ControllerGetCapabilities: called with args {XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:03:06.671593       1 controller.go:360] ControllerGetCapabilities: called with args {XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
... skipping 101 lines ...
I0501 00:12:55.703639       1 controller.go:414] ControllerExpandVolume: called with args {VolumeId:vol-0be85935ec20084ec CapacityRange:required_bytes:2147483648  Secrets:map[] VolumeCapability:block:<> access_mode:<mode:SINGLE_NODE_WRITER >  XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:12:55.841788       1 cloud.go:1038] expanding volume "vol-0be85935ec20084ec" to size 2
I0501 00:13:03.647076       1 controller.go:268] DeleteVolume: called with args: {VolumeId:vol-0477d66cd72e288e2 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:13:04.826002       1 controller.go:101] CreateVolume: called with args {Name:pvc-2d089300-9a7d-4cf8-88ed-28b4fc22a74e CapacityRange:required_bytes:1073741824  VolumeCapabilities:[mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > ] Parameters:map[] Secrets:map[] VolumeContentSource:<nil> AccessibilityRequirements:requisite:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"us-west-2a" > segments:<key:"topology.kubernetes.io/zone" value:"us-west-2a" > > preferred:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"us-west-2a" > segments:<key:"topology.kubernetes.io/zone" value:"us-west-2a" > >  XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:13:04.826867       1 controller.go:101] CreateVolume: called with args {Name:pvc-a9bf95a9-5527-441f-8e31-24112e35690c CapacityRange:required_bytes:1073741824  VolumeCapabilities:[mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > ] Parameters:map[] Secrets:map[] VolumeContentSource:<nil> AccessibilityRequirements:requisite:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"us-west-2a" > segments:<key:"topology.kubernetes.io/zone" value:"us-west-2a" > > preferred:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"us-west-2a" > segments:<key:"topology.kubernetes.io/zone" value:"us-west-2a" > >  XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:13:06.406361       1 controller.go:101] CreateVolume: called with args {Name:pvc-613b8208-a573-495a-9fa8-b48e0985997f CapacityRange:required_bytes:1073741824  VolumeCapabilities:[mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > ] Parameters:map[] Secrets:map[] VolumeContentSource:snapshot:<snapshot_id:"snap-02efbdd67b6636688" >  AccessibilityRequirements:requisite:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"us-west-2a" > segments:<key:"topology.kubernetes.io/zone" value:"us-west-2a" > > preferred:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"us-west-2a" > segments:<key:"topology.kubernetes.io/zone" value:"us-west-2a" > >  XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
E0501 00:13:06.689364       1 driver.go:119] GRPC error: rpc error: code = Internal desc = Could not resize volume "vol-0be85935ec20084ec": error describing modifications in volume "vol-0be85935ec20084ec": RequestCanceled: request context canceled
caused by: context deadline exceeded
I0501 00:13:08.221425       1 inflight.go:69] Node Service: volume="name:\"pvc-2d089300-9a7d-4cf8-88ed-28b4fc22a74e\" capacity_range:<required_bytes:1073741824 > volume_capabilities:<mount:<fs_type:\"ext4\" > access_mode:<mode:SINGLE_NODE_WRITER > > accessibility_requirements:<requisite:<segments:<key:\"topology.ebs.csi.aws.com/zone\" value:\"us-west-2a\" > segments:<key:\"topology.kubernetes.io/zone\" value:\"us-west-2a\" > > preferred:<segments:<key:\"topology.ebs.csi.aws.com/zone\" value:\"us-west-2a\" > segments:<key:\"topology.kubernetes.io/zone\" value:\"us-west-2a\" > > > " operation finished
I0501 00:13:08.280158       1 inflight.go:69] Node Service: volume="name:\"pvc-a9bf95a9-5527-441f-8e31-24112e35690c\" capacity_range:<required_bytes:1073741824 > volume_capabilities:<mount:<fs_type:\"ext4\" > access_mode:<mode:SINGLE_NODE_WRITER > > accessibility_requirements:<requisite:<segments:<key:\"topology.ebs.csi.aws.com/zone\" value:\"us-west-2a\" > segments:<key:\"topology.kubernetes.io/zone\" value:\"us-west-2a\" > > preferred:<segments:<key:\"topology.ebs.csi.aws.com/zone\" value:\"us-west-2a\" > segments:<key:\"topology.kubernetes.io/zone\" value:\"us-west-2a\" > > > " operation finished
I0501 00:13:09.998408       1 inflight.go:69] Node Service: volume="name:\"pvc-613b8208-a573-495a-9fa8-b48e0985997f\" capacity_range:<required_bytes:1073741824 > volume_capabilities:<mount:<fs_type:\"ext4\" > access_mode:<mode:SINGLE_NODE_WRITER > > volume_content_source:<snapshot:<snapshot_id:\"snap-02efbdd67b6636688\" > > accessibility_requirements:<requisite:<segments:<key:\"topology.ebs.csi.aws.com/zone\" value:\"us-west-2a\" > segments:<key:\"topology.kubernetes.io/zone\" value:\"us-west-2a\" > > preferred:<segments:<key:\"topology.ebs.csi.aws.com/zone\" value:\"us-west-2a\" > segments:<key:\"topology.kubernetes.io/zone\" value:\"us-west-2a\" > > > " operation finished
I0501 00:13:30.799262       1 controller.go:268] DeleteVolume: called with args: {VolumeId:vol-0fd6715ad49ac9248 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:13:31.857284       1 controller.go:101] CreateVolume: called with args {Name:pvc-62af5356-50b6-4768-bacc-4ded8bf72966 CapacityRange:required_bytes:1073741824  VolumeCapabilities:[mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > ] Parameters:map[] Secrets:map[] VolumeContentSource:<nil> AccessibilityRequirements:requisite:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"us-west-2a" > segments:<key:"topology.kubernetes.io/zone" value:"us-west-2a" > > preferred:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"us-west-2a" > segments:<key:"topology.kubernetes.io/zone" value:"us-west-2a" > >  XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
... skipping 93 lines ...
I0501 00:21:20.436680       1 controller.go:268] DeleteVolume: called with args: {VolumeId:vol-03a15daf7f9949fe8 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
###
## Printing pod ebs-csi-node-859xr ebs-plugin container logs
#
I0501 00:02:56.911381       1 driver.go:72] Driver: ebs.csi.aws.com Version: v1.0.0
I0501 00:02:56.911446       1 node.go:85] [Debug] Retrieving node info from metadata service
W0501 00:02:57.072960       1 metadata.go:188] Failed to parse the outpost arn: 
I0501 00:02:57.073354       1 driver.go:142] Listening for connections on address: &net.UnixAddr{Name:"/csi/csi.sock", Net:"unix"}
I0501 00:02:59.705294       1 node.go:514] NodeGetInfo: called with args {XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:05:05.513297       1 node.go:498] NodeGetCapabilities: called with args {XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:05:05.521963       1 node.go:105] NodeStageVolume: called with args {VolumeId:vol-083ee975c861f126d PublishContext:map[devicePath:/dev/xvdbb] StagingTargetPath:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-8d095661-e31b-4efa-80a6-fa267046fff3/globalmount VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1619827380999-8081-ebs.csi.aws.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0501 00:05:05.522059       1 node.go:179] NodeStageVolume: find device path /dev/xvdbb -> /dev/xvdbb
I0501 00:05:05.522420       1 node.go:213] NodeStageVolume: formatting /dev/xvdbb and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-8d095661-e31b-4efa-80a6-fa267046fff3/globalmount with fstype ext4
... skipping 2333 lines ...
I0501 00:21:03.356581       1 inflight.go:69] Node Service: volume="vol-0219f457c1dc63798" operation finished
###
## Printing pod ebs-csi-node-kwntr ebs-plugin container logs
#
I0501 00:02:57.015640       1 driver.go:72] Driver: ebs.csi.aws.com Version: v1.0.0
I0501 00:02:57.015689       1 node.go:85] [Debug] Retrieving node info from metadata service
W0501 00:02:57.170917       1 metadata.go:188] Failed to parse the outpost arn: 
I0501 00:02:57.171237       1 driver.go:142] Listening for connections on address: &net.UnixAddr{Name:"/csi/csi.sock", Net:"unix"}
I0501 00:02:59.026145       1 node.go:514] NodeGetInfo: called with args {XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
###
## Printing pod ebs-csi-node-rmklq ebs-plugin container logs
#
I0501 00:02:56.986645       1 driver.go:72] Driver: ebs.csi.aws.com Version: v1.0.0
I0501 00:02:56.986692       1 node.go:85] [Debug] Retrieving node info from metadata service
W0501 00:02:57.108343       1 metadata.go:188] Failed to parse the outpost arn: 
I0501 00:02:57.108721       1 driver.go:142] Listening for connections on address: &net.UnixAddr{Name:"/csi/csi.sock", Net:"unix"}
I0501 00:03:00.788577       1 node.go:514] NodeGetInfo: called with args {XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
###
## Printing pod ebs-snapshot-controller-0 ebs-plugin container logs
#
error: container ebs-plugin is not valid for pod ebs-snapshot-controller-0
###
## Cleaning
#
###
## Removing driver
#
... skipping 539 lines ...

Deleted cluster: "test-cluster-17373.k8s.local"
###
## OVERALL_TEST_PASSED: 1
#
###
## FAIL!
#
make: *** [Makefile:104: test-e2e-external] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...