Result: FAILURE
Tests: 0 failed / 66 succeeded
Started: 2022-09-22 11:02
Elapsed: 29m27s
Revision: master

No Test Failures!


66 Passed Tests
473 Skipped Tests

Error lines from build-log.txt

... skipping 195 lines ...
|        S    = O |
|         o    E =|
|        . o..* *.|
|       . *o=+.+. |
|        =++ o oo |
+----[SHA256]-----+
Error: cluster not found "test-cluster-16421.k8s.local"
###
## Creating cluster test-cluster-16421.k8s.local with /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-16421.k8s.local.kops.yaml (dry run)
#
I0922 11:04:08.350269    5737 create_cluster.go:843] Using SSH public key: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/id_rsa.pub
I0922 11:04:09.108383    5737 new_cluster.go:248] Inferred "aws" cloud provider from zone "us-west-2a"
I0922 11:04:09.108509    5737 new_cluster.go:1102]  Cloud Provider ID = aws
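
A minimal sketch, assuming the standard kops v1alpha2 API, of the kind of Cluster manifest a dry run like the one above writes out; only the cluster name, cloud provider, and zone come from this log, everything else is assumed:

apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  name: test-cluster-16421.k8s.local
spec:
  cloudProvider: aws          # inferred above from zone "us-west-2a"
  subnets:
  - name: us-west-2a
    zone: us-west-2a
    type: Public              # assumption; the generated file sets many more fields
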
... skipping 76 lines ...
## Validating cluster test-cluster-16421.k8s.local
#
Using cluster from kubectl context: test-cluster-16421.k8s.local

Validating cluster test-cluster-16421.k8s.local

W0922 11:05:21.079378    5884 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-16421-k8-me11gv-2077097677.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-16421-k8-me11gv-2077097677.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0922 11:05:31.127754    5884 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-16421-k8-me11gv-2077097677.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-16421-k8-me11gv-2077097677.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0922 11:05:41.188512    5884 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-16421-k8-me11gv-2077097677.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-16421-k8-me11gv-2077097677.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0922 11:05:51.243522    5884 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-16421-k8-me11gv-2077097677.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-16421-k8-me11gv-2077097677.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0922 11:06:01.324563    5884 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-16421-k8-me11gv-2077097677.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-16421-k8-me11gv-2077097677.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0922 11:06:11.370242    5884 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-16421-k8-me11gv-2077097677.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-16421-k8-me11gv-2077097677.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0922 11:06:21.410840    5884 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-16421-k8-me11gv-2077097677.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-16421-k8-me11gv-2077097677.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0922 11:06:31.471916    5884 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-16421-k8-me11gv-2077097677.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-16421-k8-me11gv-2077097677.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0922 11:06:41.519306    5884 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-16421-k8-me11gv-2077097677.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-16421-k8-me11gv-2077097677.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0922 11:06:51.575401    5884 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-16421-k8-me11gv-2077097677.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-16421-k8-me11gv-2077097677.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0922 11:07:01.629929    5884 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-16421-k8-me11gv-2077097677.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-16421-k8-me11gv-2077097677.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0922 11:07:11.955131    5884 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-16421-k8-me11gv-2077097677.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-16421-k8-me11gv-2077097677.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0922 11:07:22.030668    5884 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-16421-k8-me11gv-2077097677.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-16421-k8-me11gv-2077097677.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0922 11:07:32.079252    5884 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-16421-k8-me11gv-2077097677.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-16421-k8-me11gv-2077097677.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0922 11:07:42.132045    5884 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-16421-k8-me11gv-2077097677.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-16421-k8-me11gv-2077097677.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0922 11:07:52.181962    5884 validate_cluster.go:184] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-16421-k8-me11gv-2077097677.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-16421-k8-me11gv-2077097677.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c5.large	1	1	us-west-2a
nodes-us-west-2b	Node	c5.large	1	1	us-west-2b
nodes-us-west-2c	Node	c5.large	1	1	us-west-2c
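
The first row of that table corresponds roughly to the following kops InstanceGroup manifest (a sketch using the standard v1alpha2 field names; the values are taken from the table, nothing else is from the log):

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: master-us-west-2a
  labels:
    kops.k8s.io/cluster: test-cluster-16421.k8s.local
spec:
  role: Master
  machineType: t3.medium
  minSize: 1
  maxSize: 1
  subnets:
  - us-west-2a
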
... skipping 6 lines ...
KIND	NAME						MESSAGE
Machine	i-01cbacf93c4a2d3e2				machine "i-01cbacf93c4a2d3e2" has not yet joined cluster
Machine	i-0251f21195f10c1d8				machine "i-0251f21195f10c1d8" has not yet joined cluster
Machine	i-0c52567944a036864				machine "i-0c52567944a036864" has not yet joined cluster
Node	ip-172-20-60-128.us-west-2.compute.internal	node "ip-172-20-60-128.us-west-2.compute.internal" of role "master" is not ready

Validation Failed
W0922 11:08:05.226151    5884 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c5.large	1	1	us-west-2a
nodes-us-west-2b	Node	c5.large	1	1	us-west-2b
... skipping 7 lines ...
KIND	NAME						MESSAGE
Machine	i-01cbacf93c4a2d3e2				machine "i-01cbacf93c4a2d3e2" has not yet joined cluster
Machine	i-0251f21195f10c1d8				machine "i-0251f21195f10c1d8" has not yet joined cluster
Machine	i-0c52567944a036864				machine "i-0c52567944a036864" has not yet joined cluster
Node	ip-172-20-60-128.us-west-2.compute.internal	node "ip-172-20-60-128.us-west-2.compute.internal" of role "master" is not ready

Validation Failed
W0922 11:08:17.369994    5884 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c5.large	1	1	us-west-2a
nodes-us-west-2b	Node	c5.large	1	1	us-west-2b
... skipping 11 lines ...
Node	ip-172-20-60-128.us-west-2.compute.internal	node "ip-172-20-60-128.us-west-2.compute.internal" of role "master" is not ready
Pod	kube-system/coredns-6d467c5d59-859j6		system-cluster-critical pod "coredns-6d467c5d59-859j6" is pending
Pod	kube-system/coredns-autoscaler-5c7694cfcc-hdccf	system-cluster-critical pod "coredns-autoscaler-5c7694cfcc-hdccf" is pending
Pod	kube-system/dns-controller-7c956b559c-4l6x4	system-cluster-critical pod "dns-controller-7c956b559c-4l6x4" is pending
Pod	kube-system/kops-controller-tt9qg		system-cluster-critical pod "kops-controller-tt9qg" is pending

Validation Failed
W0922 11:08:29.431483    5884 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c5.large	1	1	us-west-2a
nodes-us-west-2b	Node	c5.large	1	1	us-west-2b
... skipping 12 lines ...
Node	ip-172-20-60-128.us-west-2.compute.internal	master "ip-172-20-60-128.us-west-2.compute.internal" is missing kube-controller-manager pod
Node	ip-172-20-60-128.us-west-2.compute.internal	master "ip-172-20-60-128.us-west-2.compute.internal" is missing kube-scheduler pod
Pod	kube-system/coredns-6d467c5d59-859j6		system-cluster-critical pod "coredns-6d467c5d59-859j6" is pending
Pod	kube-system/coredns-autoscaler-5c7694cfcc-hdccf	system-cluster-critical pod "coredns-autoscaler-5c7694cfcc-hdccf" is pending
Pod	kube-system/dns-controller-7c956b559c-4l6x4	system-cluster-critical pod "dns-controller-7c956b559c-4l6x4" is pending

Validation Failed
W0922 11:08:41.687349    5884 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c5.large	1	1	us-west-2a
nodes-us-west-2b	Node	c5.large	1	1	us-west-2b
... skipping 9 lines ...
Machine	i-0251f21195f10c1d8				machine "i-0251f21195f10c1d8" has not yet joined cluster
Machine	i-0c52567944a036864				machine "i-0c52567944a036864" has not yet joined cluster
Node	ip-172-20-60-128.us-west-2.compute.internal	master "ip-172-20-60-128.us-west-2.compute.internal" is missing kube-controller-manager pod
Pod	kube-system/coredns-6d467c5d59-859j6		system-cluster-critical pod "coredns-6d467c5d59-859j6" is pending
Pod	kube-system/coredns-autoscaler-5c7694cfcc-hdccf	system-cluster-critical pod "coredns-autoscaler-5c7694cfcc-hdccf" is pending

Validation Failed
W0922 11:08:54.042176    5884 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c5.large	1	1	us-west-2a
nodes-us-west-2b	Node	c5.large	1	1	us-west-2b
... skipping 9 lines ...
Machine	i-0251f21195f10c1d8				machine "i-0251f21195f10c1d8" has not yet joined cluster
Machine	i-0c52567944a036864				machine "i-0c52567944a036864" has not yet joined cluster
Node	ip-172-20-60-128.us-west-2.compute.internal	master "ip-172-20-60-128.us-west-2.compute.internal" is missing kube-controller-manager pod
Pod	kube-system/coredns-6d467c5d59-859j6		system-cluster-critical pod "coredns-6d467c5d59-859j6" is pending
Pod	kube-system/coredns-autoscaler-5c7694cfcc-hdccf	system-cluster-critical pod "coredns-autoscaler-5c7694cfcc-hdccf" is pending

Validation Failed
W0922 11:09:06.333774    5884 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c5.large	1	1	us-west-2a
nodes-us-west-2b	Node	c5.large	1	1	us-west-2b
... skipping 12 lines ...
Node	ip-172-20-43-83.us-west-2.compute.internal	node "ip-172-20-43-83.us-west-2.compute.internal" of role "node" is not ready
Node	ip-172-20-60-128.us-west-2.compute.internal	master "ip-172-20-60-128.us-west-2.compute.internal" is missing kube-controller-manager pod
Node	ip-172-20-75-29.us-west-2.compute.internal	node "ip-172-20-75-29.us-west-2.compute.internal" of role "node" is not ready
Pod	kube-system/coredns-6d467c5d59-859j6		system-cluster-critical pod "coredns-6d467c5d59-859j6" is pending
Pod	kube-system/coredns-autoscaler-5c7694cfcc-hdccf	system-cluster-critical pod "coredns-autoscaler-5c7694cfcc-hdccf" is pending

Validation Failed
W0922 11:09:19.988930    5884 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c5.large	1	1	us-west-2a
nodes-us-west-2b	Node	c5.large	1	1	us-west-2b
... skipping 322 lines ...

      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/multivolume.go:783
------------------------------
SSSS
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology 
  should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/topology.go:192

[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 11:13:01.486: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-16421.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename topology
Sep 22 11:13:02.106: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/topology.go:192
Sep 22 11:13:02.374: INFO: found topology map[topology.kubernetes.io/zone:us-west-2c]
Sep 22 11:13:02.374: INFO: found topology map[topology.kubernetes.io/zone:us-west-2a]
Sep 22 11:13:02.374: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Sep 22 11:13:02.374: INFO: Creating storage class object and pvc object for driver - sc: &StorageClass{ObjectMeta:{topology-7272wf4b2      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{{[{topology.kubernetes.io/zone [us-west-2c]}]},},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- topology-7272    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*topology-7272wf4b2,VolumeMode:nil,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Creating sc
... skipping 13 lines ...
Sep 22 11:13:03.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "topology-7272" for this suite.
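
For readability, the StorageClass and claim dumped in Go notation above amount to roughly the following YAML (a sketch reconstructed from the log; only fields present in the dump are shown). The test then creates a pod whose required topology conflicts with the single allowed zone and asserts that it never schedules:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-7272wf4b2
provisioner: kubernetes.io/aws-ebs
volumeBindingMode: Immediate
allowedTopologies:
- matchLabelExpressions:
  - key: topology.kubernetes.io/zone
    values: ["us-west-2c"]
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  generateName: pvc-
  namespace: topology-7272
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: topology-7272wf4b2
  resources:
    requests:
      storage: 1Gi
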

•SSSS
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode 
  should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/volumemode.go:297

[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 11:13:03.062: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-16421.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename volumemode
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/volumemode.go:297
Sep 22 11:13:03.403: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Sep 22 11:13:03.403: INFO: Creating resource for dynamic PV
Sep 22 11:13:03.403: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volumemode-5818gfpft
STEP: creating a claim
STEP: Creating pod
STEP: Waiting for the pod to fail
Sep 22 11:13:11.889: INFO: Deleting pod "pod-853303ca-7f58-41f3-8fb2-117f0a990e23" in namespace "volumemode-5818"
Sep 22 11:13:11.958: INFO: Wait up to 5m0s for pod "pod-853303ca-7f58-41f3-8fb2-117f0a990e23" to be fully deleted
STEP: Deleting pvc
Sep 22 11:13:20.223: INFO: Deleting PersistentVolumeClaim "aws7d2sf"
Sep 22 11:13:20.292: INFO: Waiting up to 5m0s for PersistentVolume pvc-088420ef-c580-4e54-a17e-23ca8f5920f2 to get deleted
Sep 22 11:13:20.357: INFO: PersistentVolume pvc-088420ef-c580-4e54-a17e-23ca8f5920f2 found and phase=Released (65.09056ms)
... skipping 9 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/framework/testsuite.go:50
      should fail to use a volume in a pod with mismatched mode [Slow]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/volumemode.go:297
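
In this pattern the claim is provisioned with volumeMode: Filesystem and the pod then consumes it the wrong way, so the pod is expected to fail. A minimal sketch of that shape, assuming the usual block-device mismatch (pod name, image, command, and devicePath are illustrative; the claim and namespace names are from the log):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: aws7d2sf
  namespace: volumemode-5818
spec:
  accessModes: ["ReadWriteOnce"]
  volumeMode: Filesystem
  storageClassName: volumemode-5818gfpft
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mismatched-mode-pod           # illustrative name
  namespace: volumemode-5818
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox                    # any minimal image; assumption
    command: ["sleep", "3600"]
    volumeDevices:                    # consuming a Filesystem-mode claim as a raw block device
    - name: data
      devicePath: /dev/e2e-test       # illustrative path
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: aws7d2sf
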
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/framework/testsuite.go:51
Sep 22 11:13:25.652: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 112 lines ...
Sep 22 11:13:02.220: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-62824fkpw
STEP: creating a claim
Sep 22 11:13:02.283: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-779z
STEP: Creating a pod to test subpath
Sep 22 11:13:02.477: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-779z" in namespace "provisioning-6282" to be "Succeeded or Failed"
Sep 22 11:13:02.542: INFO: Pod "pod-subpath-test-dynamicpv-779z": Phase="Pending", Reason="", readiness=false. Elapsed: 64.504828ms
Sep 22 11:13:04.610: INFO: Pod "pod-subpath-test-dynamicpv-779z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133378681s
Sep 22 11:13:06.678: INFO: Pod "pod-subpath-test-dynamicpv-779z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.201200389s
Sep 22 11:13:08.743: INFO: Pod "pod-subpath-test-dynamicpv-779z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.265971704s
Sep 22 11:13:10.807: INFO: Pod "pod-subpath-test-dynamicpv-779z": Phase="Pending", Reason="", readiness=false. Elapsed: 8.329512704s
Sep 22 11:13:12.870: INFO: Pod "pod-subpath-test-dynamicpv-779z": Phase="Pending", Reason="", readiness=false. Elapsed: 10.392899975s
... skipping 4 lines ...
Sep 22 11:13:23.193: INFO: Pod "pod-subpath-test-dynamicpv-779z": Phase="Pending", Reason="", readiness=false. Elapsed: 20.715921288s
Sep 22 11:13:25.265: INFO: Pod "pod-subpath-test-dynamicpv-779z": Phase="Pending", Reason="", readiness=false. Elapsed: 22.787420856s
Sep 22 11:13:27.327: INFO: Pod "pod-subpath-test-dynamicpv-779z": Phase="Pending", Reason="", readiness=false. Elapsed: 24.85030333s
Sep 22 11:13:29.398: INFO: Pod "pod-subpath-test-dynamicpv-779z": Phase="Pending", Reason="", readiness=false. Elapsed: 26.920761794s
Sep 22 11:13:31.462: INFO: Pod "pod-subpath-test-dynamicpv-779z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.984617193s
STEP: Saw pod success
Sep 22 11:13:31.462: INFO: Pod "pod-subpath-test-dynamicpv-779z" satisfied condition "Succeeded or Failed"
Sep 22 11:13:31.523: INFO: Trying to get logs from node ip-172-20-43-83.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-779z container test-container-volume-dynamicpv-779z: <nil>
STEP: delete the pod
Sep 22 11:13:31.671: INFO: Waiting for pod pod-subpath-test-dynamicpv-779z to disappear
Sep 22 11:13:31.732: INFO: Pod pod-subpath-test-dynamicpv-779z no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-779z
Sep 22 11:13:31.732: INFO: Deleting pod "pod-subpath-test-dynamicpv-779z" in namespace "provisioning-6282"
... skipping 44 lines ...
Sep 22 11:13:03.523: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-255r6vg2
STEP: creating a claim
Sep 22 11:13:03.593: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-lnjb
STEP: Creating a pod to test subpath
Sep 22 11:13:03.803: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-lnjb" in namespace "provisioning-255" to be "Succeeded or Failed"
Sep 22 11:13:03.874: INFO: Pod "pod-subpath-test-dynamicpv-lnjb": Phase="Pending", Reason="", readiness=false. Elapsed: 70.699376ms
Sep 22 11:13:05.943: INFO: Pod "pod-subpath-test-dynamicpv-lnjb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138913021s
Sep 22 11:13:08.015: INFO: Pod "pod-subpath-test-dynamicpv-lnjb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.211388606s
Sep 22 11:13:10.087: INFO: Pod "pod-subpath-test-dynamicpv-lnjb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.283099723s
Sep 22 11:13:12.155: INFO: Pod "pod-subpath-test-dynamicpv-lnjb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.351047478s
Sep 22 11:13:14.229: INFO: Pod "pod-subpath-test-dynamicpv-lnjb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.425010335s
... skipping 7 lines ...
Sep 22 11:13:30.833: INFO: Pod "pod-subpath-test-dynamicpv-lnjb": Phase="Pending", Reason="", readiness=false. Elapsed: 27.029058068s
Sep 22 11:13:32.906: INFO: Pod "pod-subpath-test-dynamicpv-lnjb": Phase="Pending", Reason="", readiness=false. Elapsed: 29.102274513s
Sep 22 11:13:34.979: INFO: Pod "pod-subpath-test-dynamicpv-lnjb": Phase="Pending", Reason="", readiness=false. Elapsed: 31.175551653s
Sep 22 11:13:37.047: INFO: Pod "pod-subpath-test-dynamicpv-lnjb": Phase="Pending", Reason="", readiness=false. Elapsed: 33.242944958s
Sep 22 11:13:39.115: INFO: Pod "pod-subpath-test-dynamicpv-lnjb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.311032467s
STEP: Saw pod success
Sep 22 11:13:39.115: INFO: Pod "pod-subpath-test-dynamicpv-lnjb" satisfied condition "Succeeded or Failed"
Sep 22 11:13:39.180: INFO: Trying to get logs from node ip-172-20-43-83.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-lnjb container test-container-subpath-dynamicpv-lnjb: <nil>
STEP: delete the pod
Sep 22 11:13:39.325: INFO: Waiting for pod pod-subpath-test-dynamicpv-lnjb to disappear
Sep 22 11:13:39.390: INFO: Pod pod-subpath-test-dynamicpv-lnjb no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-lnjb
Sep 22 11:13:39.390: INFO: Deleting pod "pod-subpath-test-dynamicpv-lnjb" in namespace "provisioning-255"
... skipping 78 lines ...
Sep 22 11:13:26.514: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi} 
STEP: creating a StorageClass volume-expand-7668p4t8z
STEP: creating a claim
Sep 22 11:13:26.580: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Expanding non-expandable pvc
Sep 22 11:13:26.712: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Sep 22 11:13:26.844: INFO: Error updating pvc awsxbspn: PersistentVolumeClaim "awsxbspn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7668p4t8z",
  	... // 2 identical fields
  }
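
What the test is attempting, sketched as YAML with values taken from the log: the claim below is created with 1Gi through the non-expandable StorageClass, and the suite then repeatedly tries to raise spec.resources.requests.storage to 2Gi. The API server rejects every attempt with the "spec is immutable" error above (the diff shows VolumeName still empty, i.e. the claim has not bound), which is the behaviour the test asserts:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: awsxbspn                      # claim name from the log
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: volume-expand-7668p4t8z
  resources:
    requests:
      storage: 1Gi                    # the update attempts change this to 2Gi and are refused
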

Sep 22 11:13:28.975: INFO: Error updating pvc awsxbspn: PersistentVolumeClaim "awsxbspn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7668p4t8z",
  	... // 2 identical fields
  }

Sep 22 11:13:30.982: INFO: Error updating pvc awsxbspn: PersistentVolumeClaim "awsxbspn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7668p4t8z",
  	... // 2 identical fields
  }

Sep 22 11:13:32.974: INFO: Error updating pvc awsxbspn: PersistentVolumeClaim "awsxbspn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7668p4t8z",
  	... // 2 identical fields
  }

Sep 22 11:13:34.979: INFO: Error updating pvc awsxbspn: PersistentVolumeClaim "awsxbspn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7668p4t8z",
  	... // 2 identical fields
  }

Sep 22 11:13:36.975: INFO: Error updating pvc awsxbspn: PersistentVolumeClaim "awsxbspn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7668p4t8z",
  	... // 2 identical fields
  }

Sep 22 11:13:38.976: INFO: Error updating pvc awsxbspn: PersistentVolumeClaim "awsxbspn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7668p4t8z",
  	... // 2 identical fields
  }

Sep 22 11:13:40.977: INFO: Error updating pvc awsxbspn: PersistentVolumeClaim "awsxbspn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7668p4t8z",
  	... // 2 identical fields
  }

Sep 22 11:13:42.976: INFO: Error updating pvc awsxbspn: PersistentVolumeClaim "awsxbspn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7668p4t8z",
  	... // 2 identical fields
  }

Sep 22 11:13:44.980: INFO: Error updating pvc awsxbspn: PersistentVolumeClaim "awsxbspn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7668p4t8z",
  	... // 2 identical fields
  }

Sep 22 11:13:46.986: INFO: Error updating pvc awsxbspn: PersistentVolumeClaim "awsxbspn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7668p4t8z",
  	... // 2 identical fields
  }

Sep 22 11:13:48.977: INFO: Error updating pvc awsxbspn: PersistentVolumeClaim "awsxbspn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7668p4t8z",
  	... // 2 identical fields
  }

Sep 22 11:13:50.975: INFO: Error updating pvc awsxbspn: PersistentVolumeClaim "awsxbspn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7668p4t8z",
  	... // 2 identical fields
  }

Sep 22 11:13:52.975: INFO: Error updating pvc awsxbspn: PersistentVolumeClaim "awsxbspn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7668p4t8z",
  	... // 2 identical fields
  }

Sep 22 11:13:54.976: INFO: Error updating pvc awsxbspn: PersistentVolumeClaim "awsxbspn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7668p4t8z",
  	... // 2 identical fields
  }

Sep 22 11:13:56.977: INFO: Error updating pvc awsxbspn: PersistentVolumeClaim "awsxbspn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7668p4t8z",
  	... // 2 identical fields
  }

Sep 22 11:13:57.111: INFO: Error updating pvc awsxbspn: PersistentVolumeClaim "awsxbspn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 118 lines ...
Sep 22 11:13:57.797: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-13059wlcx
STEP: creating a claim
Sep 22 11:13:57.863: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-vvmg
STEP: Creating a pod to test subpath
Sep 22 11:13:58.068: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-vvmg" in namespace "provisioning-1305" to be "Succeeded or Failed"
Sep 22 11:13:58.133: INFO: Pod "pod-subpath-test-dynamicpv-vvmg": Phase="Pending", Reason="", readiness=false. Elapsed: 65.067865ms
Sep 22 11:14:00.201: INFO: Pod "pod-subpath-test-dynamicpv-vvmg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133214769s
Sep 22 11:14:02.305: INFO: Pod "pod-subpath-test-dynamicpv-vvmg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.236782064s
Sep 22 11:14:04.379: INFO: Pod "pod-subpath-test-dynamicpv-vvmg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.311428232s
Sep 22 11:14:06.445: INFO: Pod "pod-subpath-test-dynamicpv-vvmg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.377479092s
Sep 22 11:14:08.511: INFO: Pod "pod-subpath-test-dynamicpv-vvmg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.443695894s
Sep 22 11:14:10.585: INFO: Pod "pod-subpath-test-dynamicpv-vvmg": Phase="Pending", Reason="", readiness=false. Elapsed: 12.517688462s
Sep 22 11:14:12.654: INFO: Pod "pod-subpath-test-dynamicpv-vvmg": Phase="Pending", Reason="", readiness=false. Elapsed: 14.5857316s
Sep 22 11:14:14.721: INFO: Pod "pod-subpath-test-dynamicpv-vvmg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.652832232s
STEP: Saw pod success
Sep 22 11:14:14.721: INFO: Pod "pod-subpath-test-dynamicpv-vvmg" satisfied condition "Succeeded or Failed"
Sep 22 11:14:14.787: INFO: Trying to get logs from node ip-172-20-43-83.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-vvmg container test-container-subpath-dynamicpv-vvmg: <nil>
STEP: delete the pod
Sep 22 11:14:14.932: INFO: Waiting for pod pod-subpath-test-dynamicpv-vvmg to disappear
Sep 22 11:14:14.996: INFO: Pod pod-subpath-test-dynamicpv-vvmg no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-vvmg
Sep 22 11:14:14.996: INFO: Deleting pod "pod-subpath-test-dynamicpv-vvmg" in namespace "provisioning-1305"
STEP: Creating pod pod-subpath-test-dynamicpv-vvmg
STEP: Creating a pod to test subpath
Sep 22 11:14:15.129: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-vvmg" in namespace "provisioning-1305" to be "Succeeded or Failed"
Sep 22 11:14:15.194: INFO: Pod "pod-subpath-test-dynamicpv-vvmg": Phase="Pending", Reason="", readiness=false. Elapsed: 65.76988ms
Sep 22 11:14:17.260: INFO: Pod "pod-subpath-test-dynamicpv-vvmg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130992968s
Sep 22 11:14:19.326: INFO: Pod "pod-subpath-test-dynamicpv-vvmg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.197263456s
Sep 22 11:14:21.392: INFO: Pod "pod-subpath-test-dynamicpv-vvmg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.263864071s
Sep 22 11:14:23.460: INFO: Pod "pod-subpath-test-dynamicpv-vvmg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.331319393s
Sep 22 11:14:25.526: INFO: Pod "pod-subpath-test-dynamicpv-vvmg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.397754185s
STEP: Saw pod success
Sep 22 11:14:25.526: INFO: Pod "pod-subpath-test-dynamicpv-vvmg" satisfied condition "Succeeded or Failed"
Sep 22 11:14:25.591: INFO: Trying to get logs from node ip-172-20-43-83.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-vvmg container test-container-subpath-dynamicpv-vvmg: <nil>
STEP: delete the pod
Sep 22 11:14:25.731: INFO: Waiting for pod pod-subpath-test-dynamicpv-vvmg to disappear
Sep 22 11:14:25.796: INFO: Pod pod-subpath-test-dynamicpv-vvmg no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-vvmg
Sep 22 11:14:25.796: INFO: Deleting pod "pod-subpath-test-dynamicpv-vvmg" in namespace "provisioning-1305"
... skipping 79 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/framework/testsuite.go:50
      should fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/subpath.go:256

      Distro debian doesn't support ntfs -- skipping

      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/framework/testsuite.go:127
------------------------------
... skipping 41 lines ...
Sep 22 11:13:47.871: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-95885cmqw
STEP: creating a claim
Sep 22 11:13:47.933: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-q45m
STEP: Creating a pod to test subpath
Sep 22 11:13:48.128: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-q45m" in namespace "provisioning-9588" to be "Succeeded or Failed"
Sep 22 11:13:48.190: INFO: Pod "pod-subpath-test-dynamicpv-q45m": Phase="Pending", Reason="", readiness=false. Elapsed: 61.924638ms
Sep 22 11:13:50.254: INFO: Pod "pod-subpath-test-dynamicpv-q45m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125818083s
Sep 22 11:13:52.317: INFO: Pod "pod-subpath-test-dynamicpv-q45m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.189259156s
Sep 22 11:13:54.382: INFO: Pod "pod-subpath-test-dynamicpv-q45m": Phase="Pending", Reason="", readiness=false. Elapsed: 6.253848609s
Sep 22 11:13:56.445: INFO: Pod "pod-subpath-test-dynamicpv-q45m": Phase="Pending", Reason="", readiness=false. Elapsed: 8.317139728s
Sep 22 11:13:58.508: INFO: Pod "pod-subpath-test-dynamicpv-q45m": Phase="Pending", Reason="", readiness=false. Elapsed: 10.379868396s
Sep 22 11:14:00.572: INFO: Pod "pod-subpath-test-dynamicpv-q45m": Phase="Pending", Reason="", readiness=false. Elapsed: 12.444561179s
Sep 22 11:14:02.638: INFO: Pod "pod-subpath-test-dynamicpv-q45m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.510155695s
STEP: Saw pod success
Sep 22 11:14:02.638: INFO: Pod "pod-subpath-test-dynamicpv-q45m" satisfied condition "Succeeded or Failed"
Sep 22 11:14:02.706: INFO: Trying to get logs from node ip-172-20-43-83.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-q45m container test-container-subpath-dynamicpv-q45m: <nil>
STEP: delete the pod
Sep 22 11:14:02.858: INFO: Waiting for pod pod-subpath-test-dynamicpv-q45m to disappear
Sep 22 11:14:02.944: INFO: Pod pod-subpath-test-dynamicpv-q45m no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-q45m
Sep 22 11:14:02.944: INFO: Deleting pod "pod-subpath-test-dynamicpv-q45m" in namespace "provisioning-9588"
... skipping 149 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/framework/testsuite.go:50
      should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/subpath.go:267

      Distro debian doesn't support ntfs -- skipping

      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/framework/testsuite.go:127
------------------------------
... skipping 43 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Inline-volume (default fs)] subPath
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/framework/testsuite.go:50
      should fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/subpath.go:240

      Driver supports dynamic provisioning, skipping InlineVolume pattern

      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/base.go:244
------------------------------
... skipping 391 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Inline-volume (default fs)] subPath
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/framework/testsuite.go:50
      should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/subpath.go:278

      Driver supports dynamic provisioning, skipping InlineVolume pattern

      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/base.go:244
------------------------------
... skipping 41 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Inline-volume (default fs)] subPath
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/framework/testsuite.go:50
      should fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/subpath.go:256

      Driver supports dynamic provisioning, skipping InlineVolume pattern

      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/base.go:244
------------------------------
... skipping 645 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/framework/testsuite.go:50
      should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/subpath.go:278

      Distro debian doesn't support ntfs -- skipping

      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/framework/testsuite.go:127
------------------------------
... skipping 88 lines ...

      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/volume_stress.go:89
------------------------------
SSS
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath 
  should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/subpath.go:278

[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 11:15:52.281: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-16421.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/subpath.go:278
Sep 22 11:15:52.596: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Sep 22 11:15:52.596: INFO: Creating resource for dynamic PV
Sep 22 11:15:52.596: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-9250vv4k8
STEP: creating a claim
Sep 22 11:15:52.658: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-h8t7
STEP: Checking for subpath error in container status
Sep 22 11:16:02.980: INFO: Deleting pod "pod-subpath-test-dynamicpv-h8t7" in namespace "provisioning-9250"
Sep 22 11:16:03.044: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-h8t7" to be fully deleted
STEP: Deleting pod
Sep 22 11:16:11.170: INFO: Deleting pod "pod-subpath-test-dynamicpv-h8t7" in namespace "provisioning-9250"
STEP: Deleting pvc
Sep 22 11:16:11.361: INFO: Deleting PersistentVolumeClaim "awsb2mh2"
... skipping 12 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Dynamic PV (default fs)] subPath
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/framework/testsuite.go:50
      should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/subpath.go:278
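
These subPath cases all revolve around the volumeMounts[].subPath field; the general shape is sketched below (image, command, and the subPath value are illustrative, the pod, claim, and namespace names are from the backstepping test above). The failing variants construct a subPath that resolves outside the volume and, as the log shows, assert that a subpath error turns up in the container status instead of the pod running:

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-dynamicpv-h8t7
  namespace: provisioning-9250
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                    # any minimal image; assumption
    command: ["sh", "-c", "ls /test-volume"]
    volumeMounts:
    - name: data
      mountPath: /test-volume
      subPath: provisioning           # illustrative; the failing tests point this outside the volume
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: awsb2mh2
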
------------------------------
S
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath 
  should fail if subpath file is outside the volume [Slow][LinuxOnly]
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/subpath.go:256

[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 11:16:21.833: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-16421.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if subpath file is outside the volume [Slow][LinuxOnly]
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/subpath.go:256
Sep 22 11:16:22.146: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Sep 22 11:16:22.146: INFO: Creating resource for dynamic PV
Sep 22 11:16:22.146: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-6853bz7z6
STEP: creating a claim
Sep 22 11:16:22.208: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-9wb8
STEP: Checking for subpath error in container status
Sep 22 11:16:34.523: INFO: Deleting pod "pod-subpath-test-dynamicpv-9wb8" in namespace "provisioning-6853"
Sep 22 11:16:34.595: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-9wb8" to be fully deleted
STEP: Deleting pod
Sep 22 11:16:40.722: INFO: Deleting pod "pod-subpath-test-dynamicpv-9wb8" in namespace "provisioning-6853"
STEP: Deleting pvc
Sep 22 11:16:40.908: INFO: Deleting PersistentVolumeClaim "aws6skr5"
... skipping 13 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Dynamic PV (default fs)] subPath
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/framework/testsuite.go:50
      should fail if subpath file is outside the volume [Slow][LinuxOnly]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/subpath.go:256
------------------------------
SSSSS
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes 
  should store data
... skipping 63 lines ...
Sep 22 11:16:57.341: INFO: Pod aws-client still exists
Sep 22 11:16:59.274: INFO: Waiting for pod aws-client to disappear
Sep 22 11:16:59.338: INFO: Pod aws-client still exists
Sep 22 11:17:01.276: INFO: Waiting for pod aws-client to disappear
Sep 22 11:17:01.339: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
Sep 22 11:17:01.499: INFO: Couldn't delete PD "aws://us-west-2a/vol-0a9abb840a311d6c7", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0a9abb840a311d6c7 is currently attached to i-01cbacf93c4a2d3e2
	status code: 400, request id: fb3e6de7-913b-47a7-9fe0-967f599c7f57
Sep 22 11:17:06.938: INFO: Successfully deleted PD "aws://us-west-2a/vol-0a9abb840a311d6c7".
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/framework/framework.go:186
Sep 22 11:17:06.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-4766" for this suite.
... skipping 239 lines ...
Sep 22 11:17:07.854: INFO: Creating resource for dynamic PV
Sep 22 11:17:07.854: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi} 
STEP: creating a StorageClass volume-expand-61072stqp
STEP: creating a claim
STEP: Expanding non-expandable pvc
Sep 22 11:17:08.045: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Sep 22 11:17:08.172: INFO: Error updating pvc aws5crsc: PersistentVolumeClaim "aws5crsc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-61072stqp",
  	... // 2 identical fields
  }

Sep 22 11:17:10.305: INFO: Error updating pvc aws5crsc: PersistentVolumeClaim "aws5crsc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-61072stqp",
  	... // 2 identical fields
  }

Sep 22 11:17:12.299: INFO: Error updating pvc aws5crsc: PersistentVolumeClaim "aws5crsc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-61072stqp",
  	... // 2 identical fields
  }

Sep 22 11:17:14.302: INFO: Error updating pvc aws5crsc: PersistentVolumeClaim "aws5crsc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-61072stqp",
  	... // 2 identical fields
  }

Sep 22 11:17:16.301: INFO: Error updating pvc aws5crsc: PersistentVolumeClaim "aws5crsc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-61072stqp",
  	... // 2 identical fields
  }

Sep 22 11:17:18.301: INFO: Error updating pvc aws5crsc: PersistentVolumeClaim "aws5crsc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-61072stqp",
  	... // 2 identical fields
  }

Sep 22 11:17:20.301: INFO: Error updating pvc aws5crsc: PersistentVolumeClaim "aws5crsc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-61072stqp",
  	... // 2 identical fields
  }

Sep 22 11:17:22.300: INFO: Error updating pvc aws5crsc: PersistentVolumeClaim "aws5crsc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-61072stqp",
  	... // 2 identical fields
  }

Sep 22 11:17:24.303: INFO: Error updating pvc aws5crsc: PersistentVolumeClaim "aws5crsc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-61072stqp",
  	... // 2 identical fields
  }

Sep 22 11:17:26.309: INFO: Error updating pvc aws5crsc: PersistentVolumeClaim "aws5crsc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-61072stqp",
  	... // 2 identical fields
  }

Sep 22 11:17:28.304: INFO: Error updating pvc aws5crsc: PersistentVolumeClaim "aws5crsc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-61072stqp",
  	... // 2 identical fields
  }

Sep 22 11:17:30.303: INFO: Error updating pvc aws5crsc: PersistentVolumeClaim "aws5crsc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-61072stqp",
  	... // 2 identical fields
  }

Sep 22 11:17:32.307: INFO: Error updating pvc aws5crsc: PersistentVolumeClaim "aws5crsc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-61072stqp",
  	... // 2 identical fields
  }

Sep 22 11:17:34.300: INFO: Error updating pvc aws5crsc: PersistentVolumeClaim "aws5crsc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-61072stqp",
  	... // 2 identical fields
  }

Sep 22 11:17:36.305: INFO: Error updating pvc aws5crsc: PersistentVolumeClaim "aws5crsc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-61072stqp",
  	... // 2 identical fields
  }

Sep 22 11:17:38.301: INFO: Error updating pvc aws5crsc: PersistentVolumeClaim "aws5crsc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-61072stqp",
  	... // 2 identical fields
  }

Sep 22 11:17:38.428: INFO: Error updating pvc aws5crsc: PersistentVolumeClaim "aws5crsc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 178 lines ...
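The repeated "spec is immutable after creation except resources.requests for bound claims" messages above are the volume-expand test retrying its PVC update about every 2s: the API server rejects every spec change while the claim is still unbound, and once bound accepts only a change of resources.requests, so the test keeps retrying until the claim binds. A minimal client-go sketch of the one update that is allowed (package and function names are illustrative, not the test framework's own helpers):

package pvcexpand

import (
	"context"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// expandPVC grows the storage request of a bound claim. Any other spec change
// would be rejected with the "spec is immutable ..." error seen in the log.
// The 2Gi target is illustrative only.
func expandPVC(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if pvc.Spec.Resources.Requests == nil {
		pvc.Spec.Resources.Requests = v1.ResourceList{}
	}
	// resources.requests is the only field the API server lets us change here.
	pvc.Spec.Resources.Requests[v1.ResourceStorage] = resource.MustParse("2Gi")
	_, err = cs.CoreV1().PersistentVolumeClaims(ns).Update(ctx, pvc, metav1.UpdateOptions{})
	return err
}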
Sep 22 11:16:56.757: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-805187dqg
STEP: creating a claim
Sep 22 11:16:56.821: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-kclb
STEP: Creating a pod to test subpath
Sep 22 11:16:57.015: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-kclb" in namespace "provisioning-8051" to be "Succeeded or Failed"
Sep 22 11:16:57.078: INFO: Pod "pod-subpath-test-dynamicpv-kclb": Phase="Pending", Reason="", readiness=false. Elapsed: 63.118427ms
Sep 22 11:16:59.142: INFO: Pod "pod-subpath-test-dynamicpv-kclb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126560023s
Sep 22 11:17:01.205: INFO: Pod "pod-subpath-test-dynamicpv-kclb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.190137485s
Sep 22 11:17:03.269: INFO: Pod "pod-subpath-test-dynamicpv-kclb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.253302238s
Sep 22 11:17:05.337: INFO: Pod "pod-subpath-test-dynamicpv-kclb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.321845447s
Sep 22 11:17:07.401: INFO: Pod "pod-subpath-test-dynamicpv-kclb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.386103986s
Sep 22 11:17:09.464: INFO: Pod "pod-subpath-test-dynamicpv-kclb": Phase="Pending", Reason="", readiness=false. Elapsed: 12.449039373s
Sep 22 11:17:11.528: INFO: Pod "pod-subpath-test-dynamicpv-kclb": Phase="Pending", Reason="", readiness=false. Elapsed: 14.513017258s
Sep 22 11:17:13.591: INFO: Pod "pod-subpath-test-dynamicpv-kclb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.576066874s
STEP: Saw pod success
Sep 22 11:17:13.591: INFO: Pod "pod-subpath-test-dynamicpv-kclb" satisfied condition "Succeeded or Failed"
Sep 22 11:17:13.654: INFO: Trying to get logs from node ip-172-20-43-83.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-kclb container test-container-subpath-dynamicpv-kclb: <nil>
STEP: delete the pod
Sep 22 11:17:13.792: INFO: Waiting for pod pod-subpath-test-dynamicpv-kclb to disappear
Sep 22 11:17:13.854: INFO: Pod pod-subpath-test-dynamicpv-kclb no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-kclb
Sep 22 11:17:13.854: INFO: Deleting pod "pod-subpath-test-dynamicpv-kclb" in namespace "provisioning-8051"
... skipping 102 lines ...
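The "Waiting up to 5m0s for pod ... to be "Succeeded or Failed"" lines above come from the e2e framework polling the pod phase roughly every two seconds until it reaches a terminal state. A rough client-go sketch of that polling loop, under the assumption that a simple phase poll is all that is needed (not the framework's actual wait helper):

package podwait

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodSucceededOrFailed polls the pod phase every 2s until the pod
// succeeds, fails, or the timeout (5m in the log above) expires.
func waitForPodSucceededOrFailed(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case v1.PodSucceeded:
			return true, nil
		case v1.PodFailed:
			return false, fmt.Errorf("pod %s/%s failed", ns, name)
		default:
			return false, nil // still Pending/Running, keep polling
		}
	})
}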
Sep 22 11:17:21.359: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-9642k65pj
STEP: creating a claim
Sep 22 11:17:21.427: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-pdlq
STEP: Creating a pod to test exec-volume-test
Sep 22 11:17:21.639: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-pdlq" in namespace "volume-9642" to be "Succeeded or Failed"
Sep 22 11:17:21.704: INFO: Pod "exec-volume-test-dynamicpv-pdlq": Phase="Pending", Reason="", readiness=false. Elapsed: 65.524994ms
Sep 22 11:17:23.772: INFO: Pod "exec-volume-test-dynamicpv-pdlq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133048719s
Sep 22 11:17:25.839: INFO: Pod "exec-volume-test-dynamicpv-pdlq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.199923523s
Sep 22 11:17:27.905: INFO: Pod "exec-volume-test-dynamicpv-pdlq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.26662666s
Sep 22 11:17:29.972: INFO: Pod "exec-volume-test-dynamicpv-pdlq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.333519738s
Sep 22 11:17:32.040: INFO: Pod "exec-volume-test-dynamicpv-pdlq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.40131379s
Sep 22 11:17:34.107: INFO: Pod "exec-volume-test-dynamicpv-pdlq": Phase="Pending", Reason="", readiness=false. Elapsed: 12.467859657s
Sep 22 11:17:36.173: INFO: Pod "exec-volume-test-dynamicpv-pdlq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.534402195s
STEP: Saw pod success
Sep 22 11:17:36.173: INFO: Pod "exec-volume-test-dynamicpv-pdlq" satisfied condition "Succeeded or Failed"
Sep 22 11:17:36.239: INFO: Trying to get logs from node ip-172-20-43-83.us-west-2.compute.internal pod exec-volume-test-dynamicpv-pdlq container exec-container-dynamicpv-pdlq: <nil>
STEP: delete the pod
Sep 22 11:17:36.395: INFO: Waiting for pod exec-volume-test-dynamicpv-pdlq to disappear
Sep 22 11:17:36.460: INFO: Pod exec-volume-test-dynamicpv-pdlq no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-pdlq
Sep 22 11:17:36.460: INFO: Deleting pod "exec-volume-test-dynamicpv-pdlq" in namespace "volume-9642"
... skipping 55 lines ...

      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/multivolume.go:467
------------------------------
S
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath 
  should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/subpath.go:267

[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 11:17:52.678: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-16421.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/subpath.go:267
Sep 22 11:17:53.005: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Sep 22 11:17:53.005: INFO: Creating resource for dynamic PV
Sep 22 11:17:53.005: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-7173vx289
STEP: creating a claim
Sep 22 11:17:53.071: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-mlfg
STEP: Checking for subpath error in container status
Sep 22 11:18:05.411: INFO: Deleting pod "pod-subpath-test-dynamicpv-mlfg" in namespace "provisioning-7173"
Sep 22 11:18:05.488: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-mlfg" to be fully deleted
STEP: Deleting pod
Sep 22 11:18:09.623: INFO: Deleting pod "pod-subpath-test-dynamicpv-mlfg" in namespace "provisioning-7173"
STEP: Deleting pvc
Sep 22 11:18:09.820: INFO: Deleting PersistentVolumeClaim "awsq6qxv"
... skipping 12 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Dynamic PV (default fs)] subPath
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/framework/testsuite.go:50
      should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/subpath.go:267
------------------------------
SSSSSS
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] 
  should concurrently access the single volume from pods on the same node
... skipping 232 lines ...
      should access to two volumes with the same volume mode and retain data across pod recreation on the same node
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/multivolume.go:135
------------------------------
SSS
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology 
  should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/topology.go:192

[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 11:18:48.966: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-16421.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/topology.go:192
Sep 22 11:18:49.360: INFO: found topology map[topology.kubernetes.io/zone:us-west-2c]
Sep 22 11:18:49.360: INFO: found topology map[topology.kubernetes.io/zone:us-west-2a]
Sep 22 11:18:49.360: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Sep 22 11:18:49.360: INFO: Creating storage class object and pvc object for driver - sc: &StorageClass{ObjectMeta:{topology-849fz88d      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{{[{topology.kubernetes.io/zone [us-west-2a]}]},},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- topology-849    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*topology-849fz88d,VolumeMode:nil,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Creating sc
... skipping 351 lines ...
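The StorageClass dumped above is what drives the topology-conflict test: WaitForFirstConsumer binding plus AllowedTopologies restricted to us-west-2a, so a pod whose node selection demands a different zone should never be schedulable. A hedged sketch of building an equivalent object with the Kubernetes API types (the function name is illustrative):

package topologysketch

import (
	v1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newDelayedBindingSC mirrors the object printed in the log: in-tree aws-ebs
// provisioner, WaitForFirstConsumer binding, and AllowedTopologies pinned to
// a single zone so conflicting pod topology cannot be satisfied.
func newDelayedBindingSC(name, zone string) *storagev1.StorageClass {
	mode := storagev1.VolumeBindingWaitForFirstConsumer
	return &storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: name},
		Provisioner:       "kubernetes.io/aws-ebs",
		VolumeBindingMode: &mode,
		AllowedTopologies: []v1.TopologySelectorTerm{{
			MatchLabelExpressions: []v1.TopologySelectorLabelRequirement{{
				Key:    "topology.kubernetes.io/zone",
				Values: []string{zone},
			}},
		}},
	}
}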
Sep 22 11:18:52.077: INFO: Creating resource for dynamic PV
Sep 22 11:18:52.077: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-15214ll7j
STEP: creating a claim
Sep 22 11:18:52.140: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod to format volume volume-prep-provisioning-1521
Sep 22 11:18:52.335: INFO: Waiting up to 5m0s for pod "volume-prep-provisioning-1521" in namespace "provisioning-1521" to be "Succeeded or Failed"
Sep 22 11:18:52.397: INFO: Pod "volume-prep-provisioning-1521": Phase="Pending", Reason="", readiness=false. Elapsed: 62.173318ms
Sep 22 11:18:54.466: INFO: Pod "volume-prep-provisioning-1521": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130960884s
Sep 22 11:18:56.529: INFO: Pod "volume-prep-provisioning-1521": Phase="Pending", Reason="", readiness=false. Elapsed: 4.193967145s
Sep 22 11:18:58.592: INFO: Pod "volume-prep-provisioning-1521": Phase="Pending", Reason="", readiness=false. Elapsed: 6.256983591s
Sep 22 11:19:00.655: INFO: Pod "volume-prep-provisioning-1521": Phase="Pending", Reason="", readiness=false. Elapsed: 8.319750303s
Sep 22 11:19:02.719: INFO: Pod "volume-prep-provisioning-1521": Phase="Pending", Reason="", readiness=false. Elapsed: 10.383605931s
Sep 22 11:19:04.784: INFO: Pod "volume-prep-provisioning-1521": Phase="Pending", Reason="", readiness=false. Elapsed: 12.448459317s
Sep 22 11:19:06.847: INFO: Pod "volume-prep-provisioning-1521": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.512079492s
STEP: Saw pod success
Sep 22 11:19:06.847: INFO: Pod "volume-prep-provisioning-1521" satisfied condition "Succeeded or Failed"
Sep 22 11:19:06.847: INFO: Deleting pod "volume-prep-provisioning-1521" in namespace "provisioning-1521"
Sep 22 11:19:06.917: INFO: Wait up to 5m0s for pod "volume-prep-provisioning-1521" to be fully deleted
STEP: Creating pod pod-subpath-test-dynamicpv-rxsf
STEP: Checking for subpath error in container status
Sep 22 11:19:13.178: INFO: Deleting pod "pod-subpath-test-dynamicpv-rxsf" in namespace "provisioning-1521"
Sep 22 11:19:13.252: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-rxsf" to be fully deleted
STEP: Deleting pod
Sep 22 11:19:13.314: INFO: Deleting pod "pod-subpath-test-dynamicpv-rxsf" in namespace "provisioning-1521"
STEP: Deleting pvc
Sep 22 11:19:13.503: INFO: Deleting PersistentVolumeClaim "aws6fnq2"
... skipping 84 lines ...
Sep 22 11:18:50.436: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-1562qbshh
STEP: creating a claim
Sep 22 11:18:50.505: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-vskh
STEP: Creating a pod to test atomic-volume-subpath
Sep 22 11:18:50.709: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-vskh" in namespace "provisioning-1562" to be "Succeeded or Failed"
Sep 22 11:18:50.774: INFO: Pod "pod-subpath-test-dynamicpv-vskh": Phase="Pending", Reason="", readiness=false. Elapsed: 65.513533ms
Sep 22 11:18:52.843: INFO: Pod "pod-subpath-test-dynamicpv-vskh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134167819s
Sep 22 11:18:54.908: INFO: Pod "pod-subpath-test-dynamicpv-vskh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.199769754s
Sep 22 11:18:56.975: INFO: Pod "pod-subpath-test-dynamicpv-vskh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.265835066s
Sep 22 11:18:59.045: INFO: Pod "pod-subpath-test-dynamicpv-vskh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.336656955s
Sep 22 11:19:01.110: INFO: Pod "pod-subpath-test-dynamicpv-vskh": Phase="Pending", Reason="", readiness=false. Elapsed: 10.401798394s
... skipping 7 lines ...
Sep 22 11:19:17.646: INFO: Pod "pod-subpath-test-dynamicpv-vskh": Phase="Running", Reason="", readiness=true. Elapsed: 26.93737537s
Sep 22 11:19:19.714: INFO: Pod "pod-subpath-test-dynamicpv-vskh": Phase="Running", Reason="", readiness=true. Elapsed: 29.004848272s
Sep 22 11:19:21.799: INFO: Pod "pod-subpath-test-dynamicpv-vskh": Phase="Running", Reason="", readiness=true. Elapsed: 31.09052398s
Sep 22 11:19:23.865: INFO: Pod "pod-subpath-test-dynamicpv-vskh": Phase="Running", Reason="", readiness=true. Elapsed: 33.15673134s
Sep 22 11:19:25.931: INFO: Pod "pod-subpath-test-dynamicpv-vskh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.222039534s
STEP: Saw pod success
Sep 22 11:19:25.931: INFO: Pod "pod-subpath-test-dynamicpv-vskh" satisfied condition "Succeeded or Failed"
Sep 22 11:19:25.996: INFO: Trying to get logs from node ip-172-20-43-83.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-vskh container test-container-subpath-dynamicpv-vskh: <nil>
STEP: delete the pod
Sep 22 11:19:26.155: INFO: Waiting for pod pod-subpath-test-dynamicpv-vskh to disappear
Sep 22 11:19:26.219: INFO: Pod pod-subpath-test-dynamicpv-vskh no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-vskh
Sep 22 11:19:26.219: INFO: Deleting pod "pod-subpath-test-dynamicpv-vskh" in namespace "provisioning-1562"
... skipping 127 lines ...
Sep 22 11:19:21.412: INFO: In creating storage class object and pvc objects for driver - sc: &StorageClass{ObjectMeta:{provisioning-624vp54j      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-624    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-624vp54j,VolumeMode:nil,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}, src-pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-624    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-624vp54j,VolumeMode:nil,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Creating a StorageClass
STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-624    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-624vp54j,VolumeMode:nil,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: creating a pod referring to the class=&StorageClass{ObjectMeta:{provisioning-624vp54j    421a826b-bb1f-4b90-83b0-8ced066a2387 4683 0 2022-09-22 11:19:21 +0000 UTC <nil> <nil> map[] map[] [] []  [{e2e-kubernetes.test Update storage.k8s.io/v1 2022-09-22 11:19:21 +0000 UTC FieldsV1 {"f:mountOptions":{},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}} }]},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[debug nouid32],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},} claim=&PersistentVolumeClaim{ObjectMeta:{pvc-nbrnn pvc- provisioning-624  fe171c71-435e-46bd-862a-f122ead3edb8 4685 0 2022-09-22 11:19:21 +0000 UTC <nil> <nil> map[] map[] [] [kubernetes.io/pvc-protection]  [{e2e-kubernetes.test Update v1 2022-09-22 11:19:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}},"f:status":{"f:phase":{}}} }]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-624vp54j,VolumeMode:*Filesystem,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Deleting pod pod-05d9c0e4-8513-4c2a-9a6c-4ba247d3b1e9 in namespace provisioning-624
STEP: checking the created volume is writable on node {Name: Selector:map[] Affinity:nil}
Sep 22 11:19:36.138: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-writer-f5glj" in namespace "provisioning-624" to be "Succeeded or Failed"
Sep 22 11:19:36.201: INFO: Pod "pvc-volume-tester-writer-f5glj": Phase="Pending", Reason="", readiness=false. Elapsed: 62.787916ms
Sep 22 11:19:38.265: INFO: Pod "pvc-volume-tester-writer-f5glj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.127386054s
STEP: Saw pod success
Sep 22 11:19:38.265: INFO: Pod "pvc-volume-tester-writer-f5glj" satisfied condition "Succeeded or Failed"
Sep 22 11:19:38.393: INFO: Pod pvc-volume-tester-writer-f5glj has the following logs: 
Sep 22 11:19:38.393: INFO: Deleting pod "pvc-volume-tester-writer-f5glj" in namespace "provisioning-624"
Sep 22 11:19:38.469: INFO: Wait up to 5m0s for pod "pvc-volume-tester-writer-f5glj" to be fully deleted
STEP: checking the created volume has the correct mount options, is readable and retains data on the same node "ip-172-20-43-83.us-west-2.compute.internal"
Sep 22 11:19:38.723: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-reader-f2qsb" in namespace "provisioning-624" to be "Succeeded or Failed"
Sep 22 11:19:38.793: INFO: Pod "pvc-volume-tester-reader-f2qsb": Phase="Pending", Reason="", readiness=false. Elapsed: 69.742931ms
Sep 22 11:19:40.862: INFO: Pod "pvc-volume-tester-reader-f2qsb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.13849064s
STEP: Saw pod success
Sep 22 11:19:40.862: INFO: Pod "pvc-volume-tester-reader-f2qsb" satisfied condition "Succeeded or Failed"
Sep 22 11:19:40.997: INFO: Pod pvc-volume-tester-reader-f2qsb has the following logs: hello world

Sep 22 11:19:40.998: INFO: Deleting pod "pvc-volume-tester-reader-f2qsb" in namespace "provisioning-624"
Sep 22 11:19:41.068: INFO: Wait up to 5m0s for pod "pvc-volume-tester-reader-f2qsb" to be fully deleted
Sep 22 11:19:41.131: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-nbrnn] to have phase Bound
Sep 22 11:19:41.196: INFO: PersistentVolumeClaim pvc-nbrnn found and phase=Bound (64.363976ms)
... skipping 97 lines ...
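The claim printed above for the mount-options test is a plain 1Gi ReadWriteOnce PVC that simply references the StorageClass carrying MountOptions [debug nouid32]; the writer/reader pods then confirm the mount options took effect and the data survived. A small sketch of such a claim using the 1.22-era API types referenced in this run (names are illustrative):

package pvcsketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newClaim mirrors the PVC dumped in the log: generateName "pvc-", a 1Gi
// storage request, ReadWriteOnce access, and a reference to the test's
// StorageClass (which carries the mount options).
func newClaim(ns, scName string) *v1.PersistentVolumeClaim {
	return &v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pvc-", Namespace: ns},
		Spec: v1.PersistentVolumeClaimSpec{
			AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			StorageClassName: &scName,
			Resources: v1.ResourceRequirements{
				Requests: v1.ResourceList{
					v1.ResourceStorage: resource.MustParse("1Gi"),
				},
			},
		},
	}
}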
Sep 22 11:19:29.365: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-7780wp7mk
STEP: creating a claim
Sep 22 11:19:29.429: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-rdl9
STEP: Creating a pod to test multi_subpath
Sep 22 11:19:29.639: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-rdl9" in namespace "provisioning-7780" to be "Succeeded or Failed"
Sep 22 11:19:29.702: INFO: Pod "pod-subpath-test-dynamicpv-rdl9": Phase="Pending", Reason="", readiness=false. Elapsed: 62.838067ms
Sep 22 11:19:31.765: INFO: Pod "pod-subpath-test-dynamicpv-rdl9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125778741s
Sep 22 11:19:33.828: INFO: Pod "pod-subpath-test-dynamicpv-rdl9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.188852134s
Sep 22 11:19:35.893: INFO: Pod "pod-subpath-test-dynamicpv-rdl9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.253480241s
Sep 22 11:19:37.957: INFO: Pod "pod-subpath-test-dynamicpv-rdl9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.317767319s
Sep 22 11:19:40.022: INFO: Pod "pod-subpath-test-dynamicpv-rdl9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.382620154s
Sep 22 11:19:42.086: INFO: Pod "pod-subpath-test-dynamicpv-rdl9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.44670742s
Sep 22 11:19:44.149: INFO: Pod "pod-subpath-test-dynamicpv-rdl9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.509654217s
Sep 22 11:19:46.215: INFO: Pod "pod-subpath-test-dynamicpv-rdl9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.575912136s
STEP: Saw pod success
Sep 22 11:19:46.215: INFO: Pod "pod-subpath-test-dynamicpv-rdl9" satisfied condition "Succeeded or Failed"
Sep 22 11:19:46.278: INFO: Trying to get logs from node ip-172-20-43-83.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-rdl9 container test-container-subpath-dynamicpv-rdl9: <nil>
STEP: delete the pod
Sep 22 11:19:46.416: INFO: Waiting for pod pod-subpath-test-dynamicpv-rdl9 to disappear
Sep 22 11:19:46.478: INFO: Pod pod-subpath-test-dynamicpv-rdl9 no longer exists
STEP: Deleting pod
Sep 22 11:19:46.478: INFO: Deleting pod "pod-subpath-test-dynamicpv-rdl9" in namespace "provisioning-7780"
... skipping 44 lines ...
Sep 22 11:20:05.286: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Sep 22 11:20:05.865: INFO: Successfully created a new PD: "aws://us-west-2a/vol-03d544aaa65e87232".
Sep 22 11:20:05.865: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-5snx
STEP: Creating a pod to test exec-volume-test
Sep 22 11:20:05.933: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-5snx" in namespace "volume-7340" to be "Succeeded or Failed"
Sep 22 11:20:05.998: INFO: Pod "exec-volume-test-inlinevolume-5snx": Phase="Pending", Reason="", readiness=false. Elapsed: 64.587461ms
Sep 22 11:20:08.063: INFO: Pod "exec-volume-test-inlinevolume-5snx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129382302s
Sep 22 11:20:10.133: INFO: Pod "exec-volume-test-inlinevolume-5snx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.199114475s
Sep 22 11:20:12.203: INFO: Pod "exec-volume-test-inlinevolume-5snx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.269382594s
Sep 22 11:20:14.268: INFO: Pod "exec-volume-test-inlinevolume-5snx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.334942891s
Sep 22 11:20:16.334: INFO: Pod "exec-volume-test-inlinevolume-5snx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.400932981s
Sep 22 11:20:18.399: INFO: Pod "exec-volume-test-inlinevolume-5snx": Phase="Pending", Reason="", readiness=false. Elapsed: 12.465811921s
Sep 22 11:20:20.465: INFO: Pod "exec-volume-test-inlinevolume-5snx": Phase="Pending", Reason="", readiness=false. Elapsed: 14.531588139s
Sep 22 11:20:22.531: INFO: Pod "exec-volume-test-inlinevolume-5snx": Phase="Pending", Reason="", readiness=false. Elapsed: 16.597251224s
Sep 22 11:20:24.597: INFO: Pod "exec-volume-test-inlinevolume-5snx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.663461449s
STEP: Saw pod success
Sep 22 11:20:24.597: INFO: Pod "exec-volume-test-inlinevolume-5snx" satisfied condition "Succeeded or Failed"
Sep 22 11:20:24.661: INFO: Trying to get logs from node ip-172-20-43-83.us-west-2.compute.internal pod exec-volume-test-inlinevolume-5snx container exec-container-inlinevolume-5snx: <nil>
STEP: delete the pod
Sep 22 11:20:24.798: INFO: Waiting for pod exec-volume-test-inlinevolume-5snx to disappear
Sep 22 11:20:24.862: INFO: Pod exec-volume-test-inlinevolume-5snx no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-5snx
Sep 22 11:20:24.862: INFO: Deleting pod "exec-volume-test-inlinevolume-5snx" in namespace "volume-7340"
Sep 22 11:20:25.063: INFO: Couldn't delete PD "aws://us-west-2a/vol-03d544aaa65e87232", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-03d544aaa65e87232 is currently attached to i-01cbacf93c4a2d3e2
	status code: 400, request id: 3252c158-6e00-4703-9e48-78a889ff0b16
Sep 22 11:20:30.441: INFO: Couldn't delete PD "aws://us-west-2a/vol-03d544aaa65e87232", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-03d544aaa65e87232 is currently attached to i-01cbacf93c4a2d3e2
	status code: 400, request id: 3f51391e-0821-4cdb-b494-52f93ffe2f46
Sep 22 11:20:35.861: INFO: Successfully deleted PD "aws://us-west-2a/vol-03d544aaa65e87232".
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/framework/framework.go:186
Sep 22 11:20:35.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-7340" for this suite.
... skipping 137 lines ...
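The "Couldn't delete PD ..., sleeping 5s: ... VolumeInUse" lines above show the cleanup backing off until EC2 finishes detaching the volume, after which DeleteVolume succeeds. A rough aws-sdk-go sketch of that retry (assumed function name; not the e2e framework's own deletion helper):

package pddelete

import (
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/aws/aws-sdk-go/service/ec2/ec2iface"
)

// deleteVolumeWithRetry retries DeleteVolume while EC2 still reports the
// volume as attached (error code "VolumeInUse"), sleeping 5s between tries,
// much like the "Couldn't delete PD ... sleeping 5s" lines in the log.
func deleteVolumeWithRetry(svc ec2iface.EC2API, volumeID string, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		_, err = svc.DeleteVolume(&ec2.DeleteVolumeInput{VolumeId: aws.String(volumeID)})
		if err == nil {
			return nil
		}
		if aerr, ok := err.(awserr.Error); !ok || aerr.Code() != "VolumeInUse" {
			return err // not the detach race; give up immediately
		}
		time.Sleep(5 * time.Second)
	}
	return err
}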
      should access to two volumes with different volume mode and retain data across pod recreation on the same node
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/multivolume.go:206
------------------------------
SSSSSS
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode 
  should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/volumemode.go:297

[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 11:20:36.000: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-16421.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename volumemode
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/volumemode.go:297
Sep 22 11:20:36.359: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Sep 22 11:20:36.359: INFO: Creating resource for dynamic PV
Sep 22 11:20:36.359: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volumemode-5102hmnxp
STEP: creating a claim
STEP: Creating pod
STEP: Waiting for the pod to fail
Sep 22 11:20:42.843: INFO: Deleting pod "pod-dc96a85e-0e5a-4e5c-ade0-9c7b4ab63aa5" in namespace "volumemode-5102"
Sep 22 11:20:42.908: INFO: Wait up to 5m0s for pod "pod-dc96a85e-0e5a-4e5c-ade0-9c7b4ab63aa5" to be fully deleted
STEP: Deleting pvc
Sep 22 11:20:51.168: INFO: Deleting PersistentVolumeClaim "aws2kfdx"
Sep 22 11:20:51.234: INFO: Waiting up to 5m0s for PersistentVolume pvc-34a34cec-ee24-4695-b689-83b78ec9a8bf to get deleted
Sep 22 11:20:51.299: INFO: PersistentVolume pvc-34a34cec-ee24-4695-b689-83b78ec9a8bf found and phase=Released (64.500838ms)
... skipping 10 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Dynamic PV (block volmode)] volumeMode
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/framework/testsuite.go:50
      should fail to use a volume in a pod with mismatched mode [Slow]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/volumemode.go:297
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/framework/testsuite.go:51
Sep 22 11:21:01.646: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 35 lines ...
Sep 22 11:20:22.816: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-484188l8s
STEP: creating a claim
Sep 22 11:20:22.880: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-bn6p
STEP: Creating a pod to test exec-volume-test
Sep 22 11:20:23.077: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-bn6p" in namespace "volume-4841" to be "Succeeded or Failed"
Sep 22 11:20:23.139: INFO: Pod "exec-volume-test-dynamicpv-bn6p": Phase="Pending", Reason="", readiness=false. Elapsed: 62.170906ms
Sep 22 11:20:25.202: INFO: Pod "exec-volume-test-dynamicpv-bn6p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125311694s
Sep 22 11:20:27.265: INFO: Pod "exec-volume-test-dynamicpv-bn6p": Phase="Pending", Reason="", readiness=false. Elapsed: 4.188733737s
Sep 22 11:20:29.330: INFO: Pod "exec-volume-test-dynamicpv-bn6p": Phase="Pending", Reason="", readiness=false. Elapsed: 6.252988468s
Sep 22 11:20:31.395: INFO: Pod "exec-volume-test-dynamicpv-bn6p": Phase="Pending", Reason="", readiness=false. Elapsed: 8.317852023s
Sep 22 11:20:33.462: INFO: Pod "exec-volume-test-dynamicpv-bn6p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.384883435s
STEP: Saw pod success
Sep 22 11:20:33.462: INFO: Pod "exec-volume-test-dynamicpv-bn6p" satisfied condition "Succeeded or Failed"
Sep 22 11:20:33.528: INFO: Trying to get logs from node ip-172-20-43-83.us-west-2.compute.internal pod exec-volume-test-dynamicpv-bn6p container exec-container-dynamicpv-bn6p: <nil>
STEP: delete the pod
Sep 22 11:20:33.673: INFO: Waiting for pod exec-volume-test-dynamicpv-bn6p to disappear
Sep 22 11:20:33.736: INFO: Pod exec-volume-test-dynamicpv-bn6p no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-bn6p
Sep 22 11:20:33.736: INFO: Deleting pod "exec-volume-test-dynamicpv-bn6p" in namespace "volume-4841"
... skipping 235 lines ...
Sep 22 11:20:51.241: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Sep 22 11:20:51.767: INFO: Successfully created a new PD: "aws://us-west-2a/vol-0a6fb07f115863426".
Sep 22 11:20:51.767: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-97hw
STEP: Creating a pod to test exec-volume-test
Sep 22 11:20:51.838: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-97hw" in namespace "volume-910" to be "Succeeded or Failed"
Sep 22 11:20:51.906: INFO: Pod "exec-volume-test-inlinevolume-97hw": Phase="Pending", Reason="", readiness=false. Elapsed: 67.566252ms
Sep 22 11:20:53.975: INFO: Pod "exec-volume-test-inlinevolume-97hw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136451004s
Sep 22 11:20:56.044: INFO: Pod "exec-volume-test-inlinevolume-97hw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.205740848s
Sep 22 11:20:58.116: INFO: Pod "exec-volume-test-inlinevolume-97hw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.278040361s
Sep 22 11:21:00.186: INFO: Pod "exec-volume-test-inlinevolume-97hw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.347968226s
Sep 22 11:21:02.257: INFO: Pod "exec-volume-test-inlinevolume-97hw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.418549075s
Sep 22 11:21:04.328: INFO: Pod "exec-volume-test-inlinevolume-97hw": Phase="Pending", Reason="", readiness=false. Elapsed: 12.489866268s
Sep 22 11:21:06.398: INFO: Pod "exec-volume-test-inlinevolume-97hw": Phase="Pending", Reason="", readiness=false. Elapsed: 14.559072134s
Sep 22 11:21:08.466: INFO: Pod "exec-volume-test-inlinevolume-97hw": Phase="Pending", Reason="", readiness=false. Elapsed: 16.627914837s
Sep 22 11:21:10.535: INFO: Pod "exec-volume-test-inlinevolume-97hw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.696045005s
STEP: Saw pod success
Sep 22 11:21:10.535: INFO: Pod "exec-volume-test-inlinevolume-97hw" satisfied condition "Succeeded or Failed"
Sep 22 11:21:10.603: INFO: Trying to get logs from node ip-172-20-43-83.us-west-2.compute.internal pod exec-volume-test-inlinevolume-97hw container exec-container-inlinevolume-97hw: <nil>
STEP: delete the pod
Sep 22 11:21:10.748: INFO: Waiting for pod exec-volume-test-inlinevolume-97hw to disappear
Sep 22 11:21:10.816: INFO: Pod exec-volume-test-inlinevolume-97hw no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-97hw
Sep 22 11:21:10.816: INFO: Deleting pod "exec-volume-test-inlinevolume-97hw" in namespace "volume-910"
Sep 22 11:21:11.028: INFO: Couldn't delete PD "aws://us-west-2a/vol-0a6fb07f115863426", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0a6fb07f115863426 is currently attached to i-01cbacf93c4a2d3e2
	status code: 400, request id: 2310dbaa-31a8-421a-8a39-199528504572
Sep 22 11:21:16.405: INFO: Couldn't delete PD "aws://us-west-2a/vol-0a6fb07f115863426", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0a6fb07f115863426 is currently attached to i-01cbacf93c4a2d3e2
	status code: 400, request id: 11bb9359-3042-4f67-be3b-485c04854a79
Sep 22 11:21:21.827: INFO: Successfully deleted PD "aws://us-west-2a/vol-0a6fb07f115863426".
[AfterEach] [Testpattern: Inline-volume (xfs)][Slow] volumes
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/framework/framework.go:186
Sep 22 11:21:21.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-910" for this suite.
... skipping 26 lines ...
Sep 22 11:21:02.001: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Sep 22 11:21:02.360: INFO: Successfully created a new PD: "aws://us-west-2a/vol-0b1422912e4921ea5".
Sep 22 11:21:02.360: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-dkph
STEP: Creating a pod to test exec-volume-test
Sep 22 11:21:02.433: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-dkph" in namespace "volume-3148" to be "Succeeded or Failed"
Sep 22 11:21:02.498: INFO: Pod "exec-volume-test-inlinevolume-dkph": Phase="Pending", Reason="", readiness=false. Elapsed: 64.61384ms
Sep 22 11:21:04.564: INFO: Pod "exec-volume-test-inlinevolume-dkph": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130347307s
Sep 22 11:21:06.630: INFO: Pod "exec-volume-test-inlinevolume-dkph": Phase="Pending", Reason="", readiness=false. Elapsed: 4.19625373s
Sep 22 11:21:08.696: INFO: Pod "exec-volume-test-inlinevolume-dkph": Phase="Pending", Reason="", readiness=false. Elapsed: 6.262899356s
Sep 22 11:21:10.762: INFO: Pod "exec-volume-test-inlinevolume-dkph": Phase="Pending", Reason="", readiness=false. Elapsed: 8.328252038s
Sep 22 11:21:12.826: INFO: Pod "exec-volume-test-inlinevolume-dkph": Phase="Pending", Reason="", readiness=false. Elapsed: 10.393046786s
Sep 22 11:21:14.891: INFO: Pod "exec-volume-test-inlinevolume-dkph": Phase="Pending", Reason="", readiness=false. Elapsed: 12.457994454s
Sep 22 11:21:16.956: INFO: Pod "exec-volume-test-inlinevolume-dkph": Phase="Pending", Reason="", readiness=false. Elapsed: 14.523056s
Sep 22 11:21:19.022: INFO: Pod "exec-volume-test-inlinevolume-dkph": Phase="Pending", Reason="", readiness=false. Elapsed: 16.588961405s
Sep 22 11:21:21.088: INFO: Pod "exec-volume-test-inlinevolume-dkph": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.654391792s
STEP: Saw pod success
Sep 22 11:21:21.088: INFO: Pod "exec-volume-test-inlinevolume-dkph" satisfied condition "Succeeded or Failed"
Sep 22 11:21:21.152: INFO: Trying to get logs from node ip-172-20-43-83.us-west-2.compute.internal pod exec-volume-test-inlinevolume-dkph container exec-container-inlinevolume-dkph: <nil>
STEP: delete the pod
Sep 22 11:21:21.289: INFO: Waiting for pod exec-volume-test-inlinevolume-dkph to disappear
Sep 22 11:21:21.353: INFO: Pod exec-volume-test-inlinevolume-dkph no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-dkph
Sep 22 11:21:21.353: INFO: Deleting pod "exec-volume-test-inlinevolume-dkph" in namespace "volume-3148"
Sep 22 11:21:21.568: INFO: Couldn't delete PD "aws://us-west-2a/vol-0b1422912e4921ea5", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0b1422912e4921ea5 is currently attached to i-01cbacf93c4a2d3e2
	status code: 400, request id: 7fd9baee-46f3-438b-a32c-572935aa8d11
Sep 22 11:21:26.997: INFO: Successfully deleted PD "aws://us-west-2a/vol-0b1422912e4921ea5".
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/framework/framework.go:186
Sep 22 11:21:26.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-3148" for this suite.
... skipping 20 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Inline-volume (default fs)] subPath
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/framework/testsuite.go:50
      should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/subpath.go:267

      Driver supports dynamic provisioning, skipping InlineVolume pattern

      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/base.go:244
------------------------------
... skipping 115 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/framework/testsuite.go:50
      should fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/subpath.go:240

      Distro debian doesn't support ntfs -- skipping

      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/framework/testsuite.go:127
------------------------------
... skipping 455 lines ...
Sep 22 11:23:06.877: INFO: Pod aws-client still exists
Sep 22 11:23:08.812: INFO: Waiting for pod aws-client to disappear
Sep 22 11:23:08.877: INFO: Pod aws-client still exists
Sep 22 11:23:10.812: INFO: Waiting for pod aws-client to disappear
Sep 22 11:23:10.877: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
Sep 22 11:23:11.019: INFO: Couldn't delete PD "aws://us-west-2a/vol-0ae18ce1443f60fd8", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0ae18ce1443f60fd8 is currently attached to i-01cbacf93c4a2d3e2
	status code: 400, request id: c5687446-c8d7-41a0-a0f5-46b13d429e98
Sep 22 11:23:16.487: INFO: Successfully deleted PD "aws://us-west-2a/vol-0ae18ce1443f60fd8".
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/framework/framework.go:186
Sep 22 11:23:16.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-6441" for this suite.
... skipping 438 lines ...
Sep 22 11:23:32.551: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-6576gd48v
STEP: creating a claim
Sep 22 11:23:32.615: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-2jwr
STEP: Creating a pod to test exec-volume-test
Sep 22 11:23:32.827: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-2jwr" in namespace "volume-6576" to be "Succeeded or Failed"
Sep 22 11:23:32.891: INFO: Pod "exec-volume-test-dynamicpv-2jwr": Phase="Pending", Reason="", readiness=false. Elapsed: 64.091717ms
Sep 22 11:23:34.956: INFO: Pod "exec-volume-test-dynamicpv-2jwr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128703828s
Sep 22 11:23:37.021: INFO: Pod "exec-volume-test-dynamicpv-2jwr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.193350474s
Sep 22 11:23:39.085: INFO: Pod "exec-volume-test-dynamicpv-2jwr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.25803034s
Sep 22 11:23:41.149: INFO: Pod "exec-volume-test-dynamicpv-2jwr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.321646308s
Sep 22 11:23:43.213: INFO: Pod "exec-volume-test-dynamicpv-2jwr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.38616669s
STEP: Saw pod success
Sep 22 11:23:43.214: INFO: Pod "exec-volume-test-dynamicpv-2jwr" satisfied condition "Succeeded or Failed"
Sep 22 11:23:43.277: INFO: Trying to get logs from node ip-172-20-43-83.us-west-2.compute.internal pod exec-volume-test-dynamicpv-2jwr container exec-container-dynamicpv-2jwr: <nil>
STEP: delete the pod
Sep 22 11:23:43.430: INFO: Waiting for pod exec-volume-test-dynamicpv-2jwr to disappear
Sep 22 11:23:43.494: INFO: Pod exec-volume-test-dynamicpv-2jwr no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-2jwr
Sep 22 11:23:43.494: INFO: Deleting pod "exec-volume-test-dynamicpv-2jwr" in namespace "volume-6576"
... skipping 497 lines ...
Sep 22 11:24:27.660: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-98386cf2z
STEP: creating a claim
Sep 22 11:24:27.728: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-zz87
STEP: Creating a pod to test subpath
Sep 22 11:24:27.942: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-zz87" in namespace "provisioning-9838" to be "Succeeded or Failed"
Sep 22 11:24:28.011: INFO: Pod "pod-subpath-test-dynamicpv-zz87": Phase="Pending", Reason="", readiness=false. Elapsed: 68.737668ms
Sep 22 11:24:30.080: INFO: Pod "pod-subpath-test-dynamicpv-zz87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13791447s
Sep 22 11:24:32.150: INFO: Pod "pod-subpath-test-dynamicpv-zz87": Phase="Pending", Reason="", readiness=false. Elapsed: 4.207999771s
Sep 22 11:24:34.220: INFO: Pod "pod-subpath-test-dynamicpv-zz87": Phase="Pending", Reason="", readiness=false. Elapsed: 6.278156067s
Sep 22 11:24:36.290: INFO: Pod "pod-subpath-test-dynamicpv-zz87": Phase="Pending", Reason="", readiness=false. Elapsed: 8.347945073s
Sep 22 11:24:38.361: INFO: Pod "pod-subpath-test-dynamicpv-zz87": Phase="Pending", Reason="", readiness=false. Elapsed: 10.418541723s
Sep 22 11:24:40.431: INFO: Pod "pod-subpath-test-dynamicpv-zz87": Phase="Pending", Reason="", readiness=false. Elapsed: 12.488997979s
Sep 22 11:24:42.502: INFO: Pod "pod-subpath-test-dynamicpv-zz87": Phase="Pending", Reason="", readiness=false. Elapsed: 14.559767477s
Sep 22 11:24:44.573: INFO: Pod "pod-subpath-test-dynamicpv-zz87": Phase="Pending", Reason="", readiness=false. Elapsed: 16.630894405s
Sep 22 11:24:46.643: INFO: Pod "pod-subpath-test-dynamicpv-zz87": Phase="Pending", Reason="", readiness=false. Elapsed: 18.700489049s
Sep 22 11:24:48.713: INFO: Pod "pod-subpath-test-dynamicpv-zz87": Phase="Pending", Reason="", readiness=false. Elapsed: 20.770737385s
Sep 22 11:24:50.783: INFO: Pod "pod-subpath-test-dynamicpv-zz87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.840601856s
STEP: Saw pod success
Sep 22 11:24:50.783: INFO: Pod "pod-subpath-test-dynamicpv-zz87" satisfied condition "Succeeded or Failed"
Sep 22 11:24:50.852: INFO: Trying to get logs from node ip-172-20-43-83.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-zz87 container test-container-volume-dynamicpv-zz87: <nil>
STEP: delete the pod
Sep 22 11:24:51.012: INFO: Waiting for pod pod-subpath-test-dynamicpv-zz87 to disappear
Sep 22 11:24:51.080: INFO: Pod pod-subpath-test-dynamicpv-zz87 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-zz87
Sep 22 11:24:51.080: INFO: Deleting pod "pod-subpath-test-dynamicpv-zz87" in namespace "provisioning-9838"
... skipping 450 lines ...

      Distro debian doesn't support ntfs -- skipping

      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/framework/testsuite.go:127
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath 
  should fail if subpath directory is outside the volume [Slow][LinuxOnly]
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/subpath.go:240

[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 22 11:25:27.781: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-16421.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if subpath directory is outside the volume [Slow][LinuxOnly]
  /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/subpath.go:240
Sep 22 11:25:28.126: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Sep 22 11:25:28.126: INFO: Creating resource for dynamic PV
Sep 22 11:25:28.126: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-7689sqfj7
STEP: creating a claim
Sep 22 11:25:28.196: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-hs67
STEP: Checking for subpath error in container status
Sep 22 11:25:52.549: INFO: Deleting pod "pod-subpath-test-dynamicpv-hs67" in namespace "provisioning-7689"
Sep 22 11:25:52.626: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-hs67" to be fully deleted
STEP: Deleting pod
Sep 22 11:26:00.767: INFO: Deleting pod "pod-subpath-test-dynamicpv-hs67" in namespace "provisioning-7689"
STEP: Deleting pvc
Sep 22 11:26:00.988: INFO: Deleting PersistentVolumeClaim "aws56fh7"
... skipping 12 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Dynamic PV (default fs)] subPath
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/framework/testsuite.go:50
      should fail if subpath directory is outside the volume [Slow][LinuxOnly]
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/subpath.go:240
------------------------------
S
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand 
  should resize volume when PVC is edited while pod is using it
... skipping 770 lines ...
    /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      /home/prow/go/pkg/mod/k8s.io/kubernetes@v1.22.11/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------

Ran 66 of 539 Specs in 950.592 seconds
SUCCESS! -- 66 Passed | 0 Failed | 0 Pending | 473 Skipped


Ginkgo ran 1 suite in 18m46.990356168s
Test Suite Passed
+ TEST_PASSED=0
+ set -e
... skipping 373 lines ...
I0922 11:16:00.132311       1 cloud.go:655] Waiting for volume "vol-0cbf24e50bb085168" state: actual=attached, desired=detached
I0922 11:16:01.202185       1 cloud.go:655] Waiting for volume "vol-0cbf24e50bb085168" state: actual=detaching, desired=detached
I0922 11:16:02.302843       1 controller.go:389] [Debug] ControllerUnpublishVolume: volume vol-0ea93f93f332fe132 detached from node i-01cbacf93c4a2d3e2
I0922 11:16:03.067544       1 controller.go:389] [Debug] ControllerUnpublishVolume: volume vol-0cbf24e50bb085168 detached from node i-01cbacf93c4a2d3e2
I0922 11:16:03.858887       1 controller.go:316] ControllerPublishVolume: called with args {VolumeId:vol-0a9abb840a311d6c7 NodeId:i-01cbacf93c4a2d3e2 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:16:04.333448       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbb -> volume vol-0a9abb840a311d6c7
E0922 11:16:04.333484       1 driver.go:120] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-0a9abb840a311d6c7" to node "i-01cbacf93c4a2d3e2": could not attach volume "vol-0a9abb840a311d6c7" to node "i-01cbacf93c4a2d3e2": IncorrectState: vol-0a9abb840a311d6c7 is not 'available'.
	status code: 400, request id: 24898c7a-3435-46c0-a560-a9f7879342bc
I0922 11:16:04.343958       1 controller.go:316] ControllerPublishVolume: called with args {VolumeId:vol-0a9abb840a311d6c7 NodeId:i-01cbacf93c4a2d3e2 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:16:04.832401       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbb -> volume vol-0a9abb840a311d6c7
E0922 11:16:04.832424       1 driver.go:120] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-0a9abb840a311d6c7" to node "i-01cbacf93c4a2d3e2": could not attach volume "vol-0a9abb840a311d6c7" to node "i-01cbacf93c4a2d3e2": IncorrectState: vol-0a9abb840a311d6c7 is not 'available'.
	status code: 400, request id: b52ec78f-5474-4029-967f-cae6d98f08e3
I0922 11:16:05.344102       1 controller.go:316] ControllerPublishVolume: called with args {VolumeId:vol-0a9abb840a311d6c7 NodeId:i-01cbacf93c4a2d3e2 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:16:05.913420       1 cloud.go:482] [Debug] AttachVolume volume="vol-0a9abb840a311d6c7" instance="i-01cbacf93c4a2d3e2" request returned {
  AttachTime: 2022-09-22 11:16:05.893 +0000 UTC,
  Device: "/dev/xvdbb",
  InstanceId: "i-01cbacf93c4a2d3e2",
... skipping 334 lines ...
  VolumeId: "vol-03b131afd56a0be48"
}
I0922 11:18:55.453914       1 cloud.go:655] Waiting for volume "vol-03b131afd56a0be48" state: actual=attaching, desired=attached
I0922 11:18:56.437470       1 controller.go:316] ControllerPublishVolume: called with args {VolumeId:vol-0fe1a590ee63256a4 NodeId:i-01cbacf93c4a2d3e2 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:18:56.515641       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbb -> volume vol-03b131afd56a0be48
I0922 11:18:56.515662       1 controller.go:344] [Debug] ControllerPublishVolume: volume vol-03b131afd56a0be48 attached to node i-01cbacf93c4a2d3e2 through device /dev/xvdbb
E0922 11:18:56.613819       1 manager.go:44] Error releasing device: release on device "/dev/xvdba" assigned to different volume: "vol-0842063c21971e78a" vs "vol-0fe1a590ee63256a4"
I0922 11:18:56.613838       1 controller.go:389] [Debug] ControllerUnpublishVolume: volume vol-0842063c21971e78a detached from node i-01cbacf93c4a2d3e2
I0922 11:18:57.002864       1 cloud.go:482] [Debug] AttachVolume volume="vol-0fe1a590ee63256a4" instance="i-01cbacf93c4a2d3e2" request returned {
  AttachTime: 2022-09-22 11:18:56.983 +0000 UTC,
  Device: "/dev/xvdba",
  InstanceId: "i-01cbacf93c4a2d3e2",
  State: "attaching",
... skipping 43 lines ...
  Device: "/dev/xvdba",
  InstanceId: "i-01cbacf93c4a2d3e2",
  State: "attaching",
  VolumeId: "vol-0139167ed4ab94cc7"
}
I0922 11:19:26.493514       1 cloud.go:655] Waiting for volume "vol-0139167ed4ab94cc7" state: actual=attaching, desired=attached
E0922 11:19:26.653188       1 manager.go:44] Error releasing device: release on device "/dev/xvdba" assigned to different volume: "vol-0fe1a590ee63256a4" vs "vol-0139167ed4ab94cc7"
I0922 11:19:26.653208       1 controller.go:389] [Debug] ControllerUnpublishVolume: volume vol-0fe1a590ee63256a4 detached from node i-01cbacf93c4a2d3e2
I0922 11:19:26.668214       1 controller.go:375] ControllerUnpublishVolume: called with args {VolumeId:vol-0fe1a590ee63256a4 NodeId:i-01cbacf93c4a2d3e2 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
W0922 11:19:26.723788       1 cloud.go:529] DetachDisk called on non-attached volume: vol-0fe1a590ee63256a4
I0922 11:19:27.569043       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-0139167ed4ab94cc7
I0922 11:19:27.569063       1 controller.go:344] [Debug] ControllerPublishVolume: volume vol-0139167ed4ab94cc7 attached to node i-01cbacf93c4a2d3e2 through device /dev/xvdba
I0922 11:19:30.047679       1 controller.go:375] ControllerUnpublishVolume: called with args {VolumeId:vol-03b131afd56a0be48 NodeId:i-01cbacf93c4a2d3e2 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
... skipping 79 lines ...
I0922 11:19:59.734256       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbc -> volume vol-0aeb804220b075ea1
I0922 11:19:59.734275       1 controller.go:344] [Debug] ControllerPublishVolume: volume vol-0aeb804220b075ea1 attached to node i-01cbacf93c4a2d3e2 through device /dev/xvdbc
I0922 11:20:01.839638       1 controller.go:389] [Debug] ControllerUnpublishVolume: volume vol-08987b7ec2030d1e1 detached from node i-01cbacf93c4a2d3e2
I0922 11:20:02.590817       1 controller.go:389] [Debug] ControllerUnpublishVolume: volume vol-0383d71af2bfcc4af detached from node i-01cbacf93c4a2d3e2
I0922 11:20:05.933154       1 controller.go:316] ControllerPublishVolume: called with args {VolumeId:vol-03d544aaa65e87232 NodeId:i-01cbacf93c4a2d3e2 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:20:06.414965       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-03d544aaa65e87232
E0922 11:20:06.415025       1 driver.go:120] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-03d544aaa65e87232" to node "i-01cbacf93c4a2d3e2": could not attach volume "vol-03d544aaa65e87232" to node "i-01cbacf93c4a2d3e2": IncorrectState: vol-03d544aaa65e87232 is not 'available'.
	status code: 400, request id: ac6869d7-991b-432b-b592-3199b18fe1f8
I0922 11:20:06.428055       1 controller.go:316] ControllerPublishVolume: called with args {VolumeId:vol-03d544aaa65e87232 NodeId:i-01cbacf93c4a2d3e2 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:20:06.948517       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-03d544aaa65e87232
E0922 11:20:06.948559       1 driver.go:120] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-03d544aaa65e87232" to node "i-01cbacf93c4a2d3e2": could not attach volume "vol-03d544aaa65e87232" to node "i-01cbacf93c4a2d3e2": IncorrectState: vol-03d544aaa65e87232 is not 'available'.
	status code: 400, request id: 7a6c310f-460f-4038-a751-674875cd93bf
I0922 11:20:07.429065       1 controller.go:316] ControllerPublishVolume: called with args {VolumeId:vol-03d544aaa65e87232 NodeId:i-01cbacf93c4a2d3e2 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:20:07.928496       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-03d544aaa65e87232
E0922 11:20:07.928535       1 driver.go:120] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-03d544aaa65e87232" to node "i-01cbacf93c4a2d3e2": could not attach volume "vol-03d544aaa65e87232" to node "i-01cbacf93c4a2d3e2": IncorrectState: vol-03d544aaa65e87232 is not 'available'.
	status code: 400, request id: a443320e-e2b5-4b24-8695-d8fc19e705d1
I0922 11:20:11.935609       1 controller.go:316] ControllerPublishVolume: called with args {VolumeId:vol-03d544aaa65e87232 NodeId:i-01cbacf93c4a2d3e2 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:20:12.517662       1 cloud.go:482] [Debug] AttachVolume volume="vol-03d544aaa65e87232" instance="i-01cbacf93c4a2d3e2" request returned {
  AttachTime: 2022-09-22 11:20:12.5 +0000 UTC,
  Device: "/dev/xvdba",
  InstanceId: "i-01cbacf93c4a2d3e2",
... skipping 96 lines ...
I0922 11:20:49.018129       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbb -> volume vol-039e3495cdb18775d
I0922 11:20:49.018148       1 controller.go:344] [Debug] ControllerPublishVolume: volume vol-039e3495cdb18775d attached to node i-01cbacf93c4a2d3e2 through device /dev/xvdbb
I0922 11:20:49.389838       1 cloud.go:655] Waiting for volume "vol-0593c9f1dea798a01" state: actual=detaching, desired=detached
I0922 11:20:51.268868       1 cloud.go:655] Waiting for volume "vol-0593c9f1dea798a01" state: actual=detaching, desired=detached
I0922 11:20:51.857577       1 controller.go:316] ControllerPublishVolume: called with args {VolumeId:vol-0a6fb07f115863426 NodeId:i-01cbacf93c4a2d3e2 VolumeCapability:mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:20:52.363451       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbc -> volume vol-0a6fb07f115863426
E0922 11:20:52.363548       1 driver.go:120] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-0a6fb07f115863426" to node "i-01cbacf93c4a2d3e2": could not attach volume "vol-0a6fb07f115863426" to node "i-01cbacf93c4a2d3e2": IncorrectState: vol-0a6fb07f115863426 is not 'available'.
	status code: 400, request id: db81a28c-894b-4bce-a854-65a693303293
I0922 11:20:52.370290       1 controller.go:316] ControllerPublishVolume: called with args {VolumeId:vol-0a6fb07f115863426 NodeId:i-01cbacf93c4a2d3e2 VolumeCapability:mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:20:52.783779       1 controller.go:389] [Debug] ControllerUnpublishVolume: volume vol-0c253fc910a9e01fd detached from node i-01cbacf93c4a2d3e2
I0922 11:20:52.887655       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbc -> volume vol-0a6fb07f115863426
E0922 11:20:52.887679       1 driver.go:120] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-0a6fb07f115863426" to node "i-01cbacf93c4a2d3e2": could not attach volume "vol-0a6fb07f115863426" to node "i-01cbacf93c4a2d3e2": IncorrectState: vol-0a6fb07f115863426 is not 'available'.
	status code: 400, request id: 0fb6049c-a75f-4621-b222-6e3109c566ca
I0922 11:20:53.370676       1 controller.go:316] ControllerPublishVolume: called with args {VolumeId:vol-0a6fb07f115863426 NodeId:i-01cbacf93c4a2d3e2 VolumeCapability:mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:20:53.882198       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-0a6fb07f115863426
E0922 11:20:53.882231       1 driver.go:120] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-0a6fb07f115863426" to node "i-01cbacf93c4a2d3e2": could not attach volume "vol-0a6fb07f115863426" to node "i-01cbacf93c4a2d3e2": IncorrectState: vol-0a6fb07f115863426 is not 'available'.
	status code: 400, request id: 6c42b054-ec4c-45dc-85b3-3ed2522b6eb0
I0922 11:20:54.598572       1 controller.go:389] [Debug] ControllerUnpublishVolume: volume vol-0593c9f1dea798a01 detached from node i-01cbacf93c4a2d3e2
I0922 11:20:54.609864       1 controller.go:375] ControllerUnpublishVolume: called with args {VolumeId:vol-0593c9f1dea798a01 NodeId:i-01cbacf93c4a2d3e2 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
W0922 11:20:54.653400       1 cloud.go:529] DetachDisk called on non-attached volume: vol-0593c9f1dea798a01
I0922 11:20:57.891063       1 controller.go:316] ControllerPublishVolume: called with args {VolumeId:vol-0a6fb07f115863426 NodeId:i-01cbacf93c4a2d3e2 VolumeCapability:mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:20:58.409791       1 cloud.go:482] [Debug] AttachVolume volume="vol-0a6fb07f115863426" instance="i-01cbacf93c4a2d3e2" request returned {
... skipping 5 lines ...
}
I0922 11:20:58.472603       1 cloud.go:655] Waiting for volume "vol-0a6fb07f115863426" state: actual=attaching, desired=attached
I0922 11:20:59.529885       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-0a6fb07f115863426
I0922 11:20:59.529905       1 controller.go:344] [Debug] ControllerPublishVolume: volume vol-0a6fb07f115863426 attached to node i-01cbacf93c4a2d3e2 through device /dev/xvdba
I0922 11:21:02.516492       1 controller.go:316] ControllerPublishVolume: called with args {VolumeId:vol-0b1422912e4921ea5 NodeId:i-01cbacf93c4a2d3e2 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:21:03.101237       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbc -> volume vol-0b1422912e4921ea5
E0922 11:21:03.101269       1 driver.go:120] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-0b1422912e4921ea5" to node "i-01cbacf93c4a2d3e2": could not attach volume "vol-0b1422912e4921ea5" to node "i-01cbacf93c4a2d3e2": IncorrectState: vol-0b1422912e4921ea5 is not 'available'.
	status code: 400, request id: 0a8f5bf9-7622-482d-b406-367e2cc302f1
I0922 11:21:03.109319       1 controller.go:316] ControllerPublishVolume: called with args {VolumeId:vol-0b1422912e4921ea5 NodeId:i-01cbacf93c4a2d3e2 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:21:03.659030       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbc -> volume vol-0b1422912e4921ea5
E0922 11:21:03.659204       1 driver.go:120] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-0b1422912e4921ea5" to node "i-01cbacf93c4a2d3e2": could not attach volume "vol-0b1422912e4921ea5" to node "i-01cbacf93c4a2d3e2": IncorrectState: vol-0b1422912e4921ea5 is not 'available'.
	status code: 400, request id: b207dfff-722c-4add-87d4-b3a3bf76b3c7
I0922 11:21:04.110270       1 controller.go:316] ControllerPublishVolume: called with args {VolumeId:vol-0b1422912e4921ea5 NodeId:i-01cbacf93c4a2d3e2 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:21:04.719180       1 cloud.go:482] [Debug] AttachVolume volume="vol-0b1422912e4921ea5" instance="i-01cbacf93c4a2d3e2" request returned {
  AttachTime: 2022-09-22 11:21:04.7 +0000 UTC,
  Device: "/dev/xvdbc",
  InstanceId: "i-01cbacf93c4a2d3e2",
... skipping 167 lines ...
I0922 11:22:26.514007       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbb -> volume vol-0fba060638e0d17c4
I0922 11:22:26.514026       1 controller.go:344] [Debug] ControllerPublishVolume: volume vol-0fba060638e0d17c4 attached to node i-01cbacf93c4a2d3e2 through device /dev/xvdbb
I0922 11:22:26.654409       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbc -> volume vol-0bbc158bbacf44874
I0922 11:22:26.654429       1 controller.go:344] [Debug] ControllerPublishVolume: volume vol-0bbc158bbacf44874 attached to node i-01cbacf93c4a2d3e2 through device /dev/xvdbc
I0922 11:22:26.659378       1 controller.go:316] ControllerPublishVolume: called with args {VolumeId:vol-0bbc158bbacf44874 NodeId:i-01cbacf93c4a2d3e2 VolumeCapability:mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:22:26.725208       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-0ae18ce1443f60fd8
E0922 11:22:26.725232       1 driver.go:120] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-0ae18ce1443f60fd8" to node "i-01cbacf93c4a2d3e2": could not attach volume "vol-0ae18ce1443f60fd8" to node "i-01cbacf93c4a2d3e2": IncorrectState: vol-0ae18ce1443f60fd8 is not 'available'.
	status code: 400, request id: d2dc3bbe-3c15-4b24-aa7f-61a76c9f65ba
I0922 11:22:26.730928       1 controller.go:316] ControllerPublishVolume: called with args {VolumeId:vol-0ae18ce1443f60fd8 NodeId:i-01cbacf93c4a2d3e2 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:22:26.858245       1 controller.go:344] [Debug] ControllerPublishVolume: volume vol-0bbc158bbacf44874 attached to node i-01cbacf93c4a2d3e2 through device /dev/xvdbc
E0922 11:22:27.120119       1 manager.go:44] Error releasing device: release on device "/dev/xvdba" assigned to different volume: "vol-0b879938ed733dae7" vs "vol-0ae18ce1443f60fd8"
I0922 11:22:27.120138       1 controller.go:389] [Debug] ControllerUnpublishVolume: volume vol-0b879938ed733dae7 detached from node i-01cbacf93c4a2d3e2
I0922 11:22:27.209472       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-0ae18ce1443f60fd8
E0922 11:22:27.209546       1 driver.go:120] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-0ae18ce1443f60fd8" to node "i-01cbacf93c4a2d3e2": could not attach volume "vol-0ae18ce1443f60fd8" to node "i-01cbacf93c4a2d3e2": IncorrectState: vol-0ae18ce1443f60fd8 is not 'available'.
	status code: 400, request id: 72f3e422-84ab-4e66-9cb8-c50cfafa44d0
I0922 11:22:27.331729       1 controller.go:316] ControllerPublishVolume: called with args {VolumeId:vol-074700a67752e8801 NodeId:i-01cbacf93c4a2d3e2 VolumeCapability:block:<> access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:22:27.345321       1 controller.go:316] ControllerPublishVolume: called with args {VolumeId:vol-065905be79c1cf5ed NodeId:i-01cbacf93c4a2d3e2 VolumeCapability:block:<> access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:22:27.731887       1 controller.go:316] ControllerPublishVolume: called with args {VolumeId:vol-0ae18ce1443f60fd8 NodeId:i-01cbacf93c4a2d3e2 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:22:27.961545       1 cloud.go:482] [Debug] AttachVolume volume="vol-074700a67752e8801" instance="i-01cbacf93c4a2d3e2" request returned {
  AttachTime: 2022-09-22 11:22:27.94 +0000 UTC,
... skipping 98 lines ...
  State: "attaching",
  VolumeId: "vol-0bc89b9704e2b1a76"
}
I0922 11:23:21.882546       1 cloud.go:655] Waiting for volume "vol-0bc89b9704e2b1a76" state: actual=attaching, desired=attached
I0922 11:23:22.945918       1 cloud.go:655] Waiting for volume "vol-0bc89b9704e2b1a76" state: actual=attaching, desired=attached
I0922 11:23:23.016133       1 controller.go:389] [Debug] ControllerUnpublishVolume: volume vol-0bbc158bbacf44874 detached from node i-01cbacf93c4a2d3e2
E0922 11:23:23.060084       1 manager.go:44] Error releasing device: release on device "/dev/xvdba" assigned to different volume: "vol-074700a67752e8801" vs "vol-0bc89b9704e2b1a76"
I0922 11:23:23.060101       1 controller.go:389] [Debug] ControllerUnpublishVolume: volume vol-074700a67752e8801 detached from node i-01cbacf93c4a2d3e2
I0922 11:23:23.069620       1 controller.go:375] ControllerUnpublishVolume: called with args {VolumeId:vol-074700a67752e8801 NodeId:i-01cbacf93c4a2d3e2 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:23:23.097541       1 controller.go:389] [Debug] ControllerUnpublishVolume: volume vol-0fba060638e0d17c4 detached from node i-01cbacf93c4a2d3e2
W0922 11:23:23.109638       1 cloud.go:529] DetachDisk called on non-attached volume: vol-074700a67752e8801
I0922 11:23:23.173052       1 cloud.go:655] Waiting for volume "vol-065905be79c1cf5ed" state: actual=detaching, desired=detached
I0922 11:23:24.814821       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-0bc89b9704e2b1a76
... skipping 2 lines ...
W0922 11:23:26.553966       1 cloud.go:529] DetachDisk called on non-attached volume: vol-065905be79c1cf5ed
I0922 11:23:26.975653       1 controller.go:375] ControllerUnpublishVolume: called with args {VolumeId:vol-065905be79c1cf5ed NodeId:i-01cbacf93c4a2d3e2 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
W0922 11:23:27.019269       1 cloud.go:529] DetachDisk called on non-attached volume: vol-065905be79c1cf5ed
I0922 11:23:30.388182       1 controller.go:375] ControllerUnpublishVolume: called with args {VolumeId:vol-0f365ab2ca567b824 NodeId:i-01cbacf93c4a2d3e2 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:23:30.771718       1 cloud.go:655] Waiting for volume "vol-0f365ab2ca567b824" state: actual=detaching, desired=detached
I0922 11:23:31.837334       1 cloud.go:655] Waiting for volume "vol-0f365ab2ca567b824" state: actual=detaching, desired=detached
W0922 11:23:33.671751       1 cloud.go:596] Ignoring error from describe volume for volume "vol-065905be79c1cf5ed"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I0922 11:23:33.728029       1 cloud.go:655] Waiting for volume "vol-0f365ab2ca567b824" state: actual=detaching, desired=detached
I0922 11:23:35.607800       1 controller.go:316] ControllerPublishVolume: called with args {VolumeId:vol-061d3b63bfc55bdac NodeId:i-01cbacf93c4a2d3e2 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:23:36.265992       1 cloud.go:482] [Debug] AttachVolume volume="vol-061d3b63bfc55bdac" instance="i-01cbacf93c4a2d3e2" request returned {
  AttachTime: 2022-09-22 11:23:36.246 +0000 UTC,
  Device: "/dev/xvdbb",
  InstanceId: "i-01cbacf93c4a2d3e2",
... skipping 33 lines ...
I0922 11:23:47.180974       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbd -> volume vol-0f4a8f2dce55e0a06
I0922 11:23:47.180995       1 controller.go:344] [Debug] ControllerPublishVolume: volume vol-0f4a8f2dce55e0a06 attached to node i-01cbacf93c4a2d3e2 through device /dev/xvdbd
I0922 11:23:50.485484       1 controller.go:375] ControllerUnpublishVolume: called with args {VolumeId:vol-0c9cf9ba9292bf9cc NodeId:i-01cbacf93c4a2d3e2 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:23:50.867859       1 cloud.go:655] Waiting for volume "vol-0c9cf9ba9292bf9cc" state: actual=detaching, desired=detached
I0922 11:23:51.991175       1 cloud.go:655] Waiting for volume "vol-0c9cf9ba9292bf9cc" state: actual=detaching, desired=detached
I0922 11:23:52.503596       1 controller.go:375] ControllerUnpublishVolume: called with args {VolumeId:vol-0f4a8f2dce55e0a06 NodeId:i-01cbacf93c4a2d3e2 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
W0922 11:23:52.568034       1 cloud.go:596] Ignoring error from describe volume for volume "vol-065905be79c1cf5ed"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I0922 11:23:52.872178       1 cloud.go:655] Waiting for volume "vol-0f4a8f2dce55e0a06" state: actual=detaching, desired=detached
I0922 11:23:53.853122       1 cloud.go:655] Waiting for volume "vol-0c9cf9ba9292bf9cc" state: actual=detaching, desired=detached
I0922 11:23:53.941430       1 cloud.go:655] Waiting for volume "vol-0f4a8f2dce55e0a06" state: actual=detaching, desired=detached
I0922 11:23:55.807745       1 cloud.go:655] Waiting for volume "vol-0f4a8f2dce55e0a06" state: actual=detaching, desired=detached
I0922 11:23:57.152265       1 controller.go:389] [Debug] ControllerUnpublishVolume: volume vol-0c9cf9ba9292bf9cc detached from node i-01cbacf93c4a2d3e2
I0922 11:23:59.114861       1 controller.go:389] [Debug] ControllerUnpublishVolume: volume vol-0f4a8f2dce55e0a06 detached from node i-01cbacf93c4a2d3e2
... skipping 27 lines ...
I0922 11:24:11.544129       1 controller.go:344] [Debug] ControllerPublishVolume: volume vol-0f3669c9ce138a362 attached to node i-01cbacf93c4a2d3e2 through device /dev/xvdbd
I0922 11:24:12.005966       1 cloud.go:655] Waiting for volume "vol-061d3b63bfc55bdac" state: actual=detaching, desired=detached
I0922 11:24:13.854328       1 cloud.go:655] Waiting for volume "vol-061d3b63bfc55bdac" state: actual=detaching, desired=detached
I0922 11:24:17.163306       1 controller.go:389] [Debug] ControllerUnpublishVolume: volume vol-061d3b63bfc55bdac detached from node i-01cbacf93c4a2d3e2
I0922 11:24:17.179574       1 controller.go:375] ControllerUnpublishVolume: called with args {VolumeId:vol-061d3b63bfc55bdac NodeId:i-01cbacf93c4a2d3e2 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
W0922 11:24:17.223298       1 cloud.go:529] DetachDisk called on non-attached volume: vol-061d3b63bfc55bdac
W0922 11:24:26.580477       1 cloud.go:596] Ignoring error from describe volume for volume "vol-065905be79c1cf5ed"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I0922 11:24:31.953824       1 controller.go:316] ControllerPublishVolume: called with args {VolumeId:vol-064d29d0b3b01cfe2 NodeId:i-01cbacf93c4a2d3e2 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:24:32.506214       1 cloud.go:482] [Debug] AttachVolume volume="vol-064d29d0b3b01cfe2" instance="i-01cbacf93c4a2d3e2" request returned {
  AttachTime: 2022-09-22 11:24:32.488 +0000 UTC,
  Device: "/dev/xvdbb",
  InstanceId: "i-01cbacf93c4a2d3e2",
  State: "attaching",
... skipping 26 lines ...
I0922 11:25:07.319788       1 controller.go:389] [Debug] ControllerUnpublishVolume: volume vol-064d29d0b3b01cfe2 detached from node i-01cbacf93c4a2d3e2
I0922 11:25:20.587220       1 controller.go:375] ControllerUnpublishVolume: called with args {VolumeId:vol-0bc89b9704e2b1a76 NodeId:i-01cbacf93c4a2d3e2 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:25:20.976282       1 cloud.go:655] Waiting for volume "vol-0bc89b9704e2b1a76" state: actual=detaching, desired=detached
I0922 11:25:22.043867       1 cloud.go:655] Waiting for volume "vol-0bc89b9704e2b1a76" state: actual=detaching, desired=detached
I0922 11:25:23.909447       1 cloud.go:655] Waiting for volume "vol-0bc89b9704e2b1a76" state: actual=detaching, desired=detached
I0922 11:25:27.218611       1 controller.go:389] [Debug] ControllerUnpublishVolume: volume vol-0bc89b9704e2b1a76 detached from node i-01cbacf93c4a2d3e2
W0922 11:25:27.802652       1 cloud.go:596] Ignoring error from describe volume for volume "vol-065905be79c1cf5ed"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I0922 11:25:30.628152       1 controller.go:375] ControllerUnpublishVolume: called with args {VolumeId:vol-00a182e4fb3c6bbb6 NodeId:i-01cbacf93c4a2d3e2 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:25:31.013930       1 cloud.go:655] Waiting for volume "vol-00a182e4fb3c6bbb6" state: actual=detaching, desired=detached
I0922 11:25:32.077799       1 cloud.go:655] Waiting for volume "vol-00a182e4fb3c6bbb6" state: actual=detaching, desired=detached
I0922 11:25:32.438680       1 controller.go:316] ControllerPublishVolume: called with args {VolumeId:vol-05269fe0a1077fd1f NodeId:i-01cbacf93c4a2d3e2 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:25:33.023722       1 cloud.go:482] [Debug] AttachVolume volume="vol-05269fe0a1077fd1f" instance="i-01cbacf93c4a2d3e2" request returned {
  AttachTime: 2022-09-22 11:25:33.006 +0000 UTC,
... skipping 11 lines ...
I0922 11:25:36.253411       1 controller.go:344] [Debug] ControllerPublishVolume: volume vol-05269fe0a1077fd1f attached to node i-01cbacf93c4a2d3e2 through device /dev/xvdba
I0922 11:25:37.081639       1 controller.go:316] ControllerPublishVolume: called with args {VolumeId:vol-06e6f1e0d57236258 NodeId:i-01cbacf93c4a2d3e2 VolumeCapability:mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:25:37.256004       1 controller.go:389] [Debug] ControllerUnpublishVolume: volume vol-00a182e4fb3c6bbb6 detached from node i-01cbacf93c4a2d3e2
I0922 11:25:37.267380       1 controller.go:375] ControllerUnpublishVolume: called with args {VolumeId:vol-00a182e4fb3c6bbb6 NodeId:i-01cbacf93c4a2d3e2 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
W0922 11:25:37.312248       1 cloud.go:529] DetachDisk called on non-attached volume: vol-00a182e4fb3c6bbb6
I0922 11:25:37.706662       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbb -> volume vol-06e6f1e0d57236258
E0922 11:25:37.706701       1 driver.go:120] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-06e6f1e0d57236258" to node "i-01cbacf93c4a2d3e2": could not attach volume "vol-06e6f1e0d57236258" to node "i-01cbacf93c4a2d3e2": IncorrectState: vol-06e6f1e0d57236258 is not 'available'.
	status code: 400, request id: 8e65d167-5617-4066-9bb2-202248c5150a
I0922 11:25:37.712587       1 controller.go:316] ControllerPublishVolume: called with args {VolumeId:vol-06e6f1e0d57236258 NodeId:i-01cbacf93c4a2d3e2 VolumeCapability:mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:25:38.248619       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbb -> volume vol-06e6f1e0d57236258
E0922 11:25:38.248658       1 driver.go:120] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-06e6f1e0d57236258" to node "i-01cbacf93c4a2d3e2": could not attach volume "vol-06e6f1e0d57236258" to node "i-01cbacf93c4a2d3e2": IncorrectState: vol-06e6f1e0d57236258 is not 'available'.
	status code: 400, request id: fddcedae-e968-4683-8d3c-3d952a1cd9ba
I0922 11:25:38.713397       1 controller.go:316] ControllerPublishVolume: called with args {VolumeId:vol-06e6f1e0d57236258 NodeId:i-01cbacf93c4a2d3e2 VolumeCapability:mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:25:39.274469       1 cloud.go:482] [Debug] AttachVolume volume="vol-06e6f1e0d57236258" instance="i-01cbacf93c4a2d3e2" request returned {
  AttachTime: 2022-09-22 11:25:39.258 +0000 UTC,
  Device: "/dev/xvdbb",
  InstanceId: "i-01cbacf93c4a2d3e2",
... skipping 156 lines ...
  Device: "/dev/xvdbc",
  InstanceId: "i-01cbacf93c4a2d3e2",
  State: "attaching",
  VolumeId: "vol-07ca6783056489876"
}
I0922 11:27:07.246288       1 cloud.go:655] Waiting for volume "vol-07ca6783056489876" state: actual=attaching, desired=attached
E0922 11:27:07.348996       1 manager.go:44] Error releasing device: release on device "/dev/xvdbc" assigned to different volume: "vol-03714af30d25d37ce" vs "vol-07ca6783056489876"
I0922 11:27:07.349015       1 controller.go:389] [Debug] ControllerUnpublishVolume: volume vol-03714af30d25d37ce detached from node i-01cbacf93c4a2d3e2
I0922 11:27:08.314993       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbc -> volume vol-07ca6783056489876
I0922 11:27:08.315018       1 controller.go:344] [Debug] ControllerPublishVolume: volume vol-07ca6783056489876 attached to node i-01cbacf93c4a2d3e2 through device /dev/xvdbc
I0922 11:27:10.739691       1 controller.go:375] ControllerUnpublishVolume: called with args {VolumeId:vol-0249c75b1ae1d8483 NodeId:i-01cbacf93c4a2d3e2 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:27:11.142164       1 cloud.go:655] Waiting for volume "vol-0249c75b1ae1d8483" state: actual=detaching, desired=detached
I0922 11:27:12.205827       1 cloud.go:655] Waiting for volume "vol-0249c75b1ae1d8483" state: actual=detaching, desired=detached
I0922 11:27:14.071703       1 cloud.go:655] Waiting for volume "vol-0249c75b1ae1d8483" state: actual=detaching, desired=detached
I0922 11:27:17.375305       1 controller.go:389] [Debug] ControllerUnpublishVolume: volume vol-0249c75b1ae1d8483 detached from node i-01cbacf93c4a2d3e2
I0922 11:27:17.923040       1 controller.go:407] ControllerGetCapabilities: called with args {XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:27:17.923532       1 controller.go:461] ControllerExpandVolume: called with args {VolumeId:vol-07ca6783056489876 CapacityRange:required_bytes:2147483648  Secrets:map[] VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
W0922 11:27:18.002955       1 cloud.go:596] Ignoring error from describe volume for volume "vol-065905be79c1cf5ed"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I0922 11:27:20.770160       1 controller.go:375] ControllerUnpublishVolume: called with args {VolumeId:vol-0f0481931714bfcab NodeId:i-01cbacf93c4a2d3e2 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0922 11:27:21.139420       1 cloud.go:655] Waiting for volume "vol-0f0481931714bfcab" state: actual=detaching, desired=detached
I0922 11:27:22.204609       1 cloud.go:655] Waiting for volume "vol-0f0481931714bfcab" state: actual=detaching, desired=detached
I0922 11:27:24.055406       1 cloud.go:655] Waiting for volume "vol-0f0481931714bfcab" state: actual=detaching, desired=detached
I0922 11:27:27.360296       1 controller.go:389] [Debug] ControllerUnpublishVolume: volume vol-0f0481931714bfcab detached from node i-01cbacf93c4a2d3e2
I0922 11:28:40.817444       1 controller.go:375] ControllerUnpublishVolume: called with args {VolumeId:vol-07ca6783056489876 NodeId:i-01cbacf93c4a2d3e2 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
... skipping 4141 lines ...
## Removing driver
#
release "aws-ebs-csi-driver" uninstalled
###
## Deleting cluster test-cluster-16421.k8s.local
#
Error: error reading cluster configuration: error reading cluster configuration "test-cluster-16421.k8s.local": error reading s3://k8s-kops-csi-e2e/test-cluster-16421.k8s.local/config: error reading s3://k8s-kops-csi-e2e/test-cluster-16421.k8s.local/config: read tcp 10.60.61.94:49006->52.218.133.194:443: read: connection reset by peer
make: *** [Makefile:178: test-e2e-migration] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: dockerProgram process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...