Recent runs | View in Spyglass

PR       | torredil: Update documentation
Result   | ABORTED
Tests    | 0 failed / 69 succeeded
Started  |
Elapsed  | 36m0s
Revision | cc7a5d6e48bc6953bcda6cf1fdad223d38c0e8be
Refs     | 1263
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should store data
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should store data
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should store data
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which share the same volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext3)] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volume-lifecycle-performance should provision volumes at scale within performance constraints [Slow] [Serial]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which share the same volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which share the same volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ext3)] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ext4)] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which share the same volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ext3)] volumes should store data
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volume-lifecycle-performance should provision volumes at scale within performance constraints [Slow] [Serial]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which share the same volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which share the same volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (ext3)] volumes should store data
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion
EBS CSI Migration Suite [sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive] verify volume status after node power off
EBS CSI Migration Suite [sig-storage] Node Unregister [Feature:vsphere] [Slow] [Disruptive] node unregister
EBS CSI Migration Suite [sig-storage] PersistentVolumes [Feature:vsphere][Feature:LabelSelector] Selector-Label Volume Binding:vsphere [Feature:vsphere] should bind volume with claim for given label
EBS CSI Migration Suite [sig-storage] PersistentVolumes [Feature:vsphere][Feature:ReclaimPolicy] persistentvolumereclaim:vsphere [Feature:vsphere] should delete persistent volume when reclaimPolicy set to delete and associated claim is deleted
EBS CSI Migration Suite [sig-storage] PersistentVolumes [Feature:vsphere][Feature:ReclaimPolicy] persistentvolumereclaim:vsphere [Feature:vsphere] should not detach and unmount PV when associated pvc with delete as reclaimPolicy is deleted when it is in use by the pod
EBS CSI Migration Suite [sig-storage] PersistentVolumes [Feature:vsphere][Feature:ReclaimPolicy] persistentvolumereclaim:vsphere [Feature:vsphere] should retain persistent volume when reclaimPolicy set to retain when associated claim is deleted
EBS CSI Migration Suite [sig-storage] PersistentVolumes:vsphere [Feature:vsphere] should test that a file written to the vsphere volume mount before kubelet restart can be read after restart [Disruptive]
EBS CSI Migration Suite [sig-storage] PersistentVolumes:vsphere [Feature:vsphere] should test that a vsphere volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns [Disruptive]
EBS CSI Migration Suite [sig-storage] PersistentVolumes:vsphere [Feature:vsphere] should test that deleting a PVC before the pod does not cause pod deletion to fail on vsphere volume detach
EBS CSI Migration Suite [sig-storage] PersistentVolumes:vsphere [Feature:vsphere] should test that deleting the Namespace of a PVC and Pod causes the successful detach of vsphere volume
EBS CSI Migration Suite [sig-storage] PersistentVolumes:vsphere [Feature:vsphere] should test that deleting the PV before the pod does not cause pod deletion to fail on vsphere volume detach
EBS CSI Migration Suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify VSAN storage capability with invalid capability name objectSpaceReserve is not honored for dynamically provisioned pvc using storageclass
EBS CSI Migration Suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify VSAN storage capability with invalid diskStripes value is not honored for dynamically provisioned pvc using storageclass
EBS CSI Migration Suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify VSAN storage capability with invalid hostFailuresToTolerate value is not honored for dynamically provisioned pvc using storageclass
EBS CSI Migration Suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify VSAN storage capability with non-vsan datastore is not honored for dynamically provisioned pvc using storageclass
EBS CSI Migration Suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify VSAN storage capability with valid diskStripes and objectSpaceReservation values and a VSAN datastore is honored for dynamically provisioned pvc using storageclass
EBS CSI Migration Suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify VSAN storage capability with valid diskStripes and objectSpaceReservation values is honored for dynamically provisioned pvc using storageclass
EBS CSI Migration Suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify VSAN storage capability with valid hostFailuresToTolerate and cacheReservation values is honored for dynamically provisioned pvc using storageclass
EBS CSI Migration Suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify VSAN storage capability with valid objectSpaceReservation and iopsLimit values is honored for dynamically provisioned pvc using storageclass
EBS CSI Migration Suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify an existing and compatible SPBM policy is honored for dynamically provisioned pvc using storageclass
EBS CSI Migration Suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify an if a SPBM policy and VSAN capabilities cannot be honored for dynamically provisioned pvc using storageclass
EBS CSI Migration Suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify clean up of stale dummy VM for dynamically provisioned pvc using SPBM policy
EBS CSI Migration Suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify if a SPBM policy is not honored on a non-compatible datastore for dynamically provisioned pvc using storageclass
EBS CSI Migration Suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify if a non-existing SPBM policy is not honored for dynamically provisioned pvc using storageclass
EBS CSI Migration Suite [sig-storage] Verify Volume Attach Through vpxd Restart [Feature:vsphere][Serial][Disruptive] verify volume remains attached through vpxd restart
EBS CSI Migration Suite [sig-storage] Volume Attach Verify [Feature:vsphere][Serial][Disruptive] verify volume remains attached after master kubelet restart
EBS CSI Migration Suite [sig-storage] Volume Disk Format [Feature:vsphere] verify disk format type - eagerzeroedthick is honored for dynamically provisioned pv using storageclass
EBS CSI Migration Suite [sig-storage] Volume Disk Format [Feature:vsphere] verify disk format type - thin is honored for dynamically provisioned pv using storageclass
EBS CSI Migration Suite [sig-storage] Volume Disk Format [Feature:vsphere] verify disk format type - zeroedthick is honored for dynamically provisioned pv using storageclass
EBS CSI Migration Suite [sig-storage] Volume Disk Size [Feature:vsphere] verify dynamically provisioned pv has size rounded up correctly
EBS CSI Migration Suite [sig-storage] Volume FStype [Feature:vsphere] verify fstype - default value should be ext4
EBS CSI Migration Suite [sig-storage] Volume FStype [Feature:vsphere] verify fstype - ext3 formatted volume
EBS CSI Migration Suite [sig-storage] Volume FStype [Feature:vsphere] verify invalid fstype
EBS CSI Migration Suite [sig-storage] Volume Operations Storm [Feature:vsphere] should create pod with many volumes and verify no attach call fails
EBS CSI Migration Suite [sig-storage] Volume Placement [Feature:vsphere] should create and delete pod with multiple volumes from different datastore
EBS CSI Migration Suite [sig-storage] Volume Placement [Feature:vsphere] should create and delete pod with multiple volumes from same datastore
EBS CSI Migration Suite [sig-storage] Volume Placement [Feature:vsphere] should create and delete pod with the same volume source attach/detach to different worker nodes
EBS CSI Migration Suite [sig-storage] Volume Placement [Feature:vsphere] should create and delete pod with the same volume source on the same worker node
EBS CSI Migration Suite [sig-storage] Volume Placement [Feature:vsphere] test back to back pod creation and deletion with different volume sources on the same worker node
EBS CSI Migration Suite [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere] verify dynamic provision with default parameter on clustered datastore
EBS CSI Migration Suite [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere] verify dynamic provision with spbm policy on clustered datastore
EBS CSI Migration Suite [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere] verify static provisioning on clustered datastore
EBS CSI Migration Suite [sig-storage] Volume Provisioning on Datastore [Feature:vsphere] verify dynamically provisioned pv using storageclass fails on an invalid datastore
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify PVC creation fails if no zones are specified in the storage class (No shared datastores exist among all the nodes)
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify PVC creation fails if only datastore is specified in the storage class (No shared datastores exist among all the nodes)
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify PVC creation fails if only storage policy is specified in the storage class (No shared datastores exist among all the nodes)
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify PVC creation fails if the availability zone specified in the storage class have no shared datastores under it.
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify PVC creation with an invalid VSAN capability along with a compatible zone combination specified in storage class fails
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify PVC creation with compatible policy and datastore without any zones specified in the storage class fails (No shared datastores exist among all the nodes)
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify PVC creation with incompatible datastore and zone combination specified in storage class fails
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify PVC creation with incompatible storage policy along with compatible zone and datastore combination specified in storage class fails
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify PVC creation with incompatible storagePolicy and zone combination specified in storage class fails
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify PVC creation with incompatible zone along with compatible storagePolicy and datastore combination specified in storage class fails
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify PVC creation with invalid zone specified in storage class fails
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify a PVC creation fails when multiple zones are specified in the storage class without shared datastores among the zones in waitForFirstConsumer binding mode
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify a pod fails to get scheduled when conflicting volume topology (allowedTopologies) and pod scheduling constraints(nodeSelector) are specified
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify a pod is created and attached to a dynamically created PV with storage policy specified in storage class in waitForFirstConsumer binding mode
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify a pod is created and attached to a dynamically created PV with storage policy specified in storage class in waitForFirstConsumer binding mode with allowedTopologies
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify a pod is created and attached to a dynamically created PV with storage policy specified in storage class in waitForFirstConsumer binding mode with multiple allowedTopologies
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify a pod is created and attached to a dynamically created PV, based on a VSAN capability, datastore and compatible zone specified in storage class
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify a pod is created and attached to a dynamically created PV, based on allowed zones specified in storage class
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify a pod is created and attached to a dynamically created PV, based on multiple zones specified in storage class
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify a pod is created and attached to a dynamically created PV, based on multiple zones specified in the storage class. (No shared datastores exist among both zones)
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify a pod is created and attached to a dynamically created PV, based on the allowed zones and datastore specified in storage class
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify a pod is created and attached to a dynamically created PV, based on the allowed zones and datastore specified in storage class when there are multiple datastores with the same name under different zones across datacenters
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify a pod is created and attached to a dynamically created PV, based on the allowed zones and storage policy specified in storage class
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify a pod is created and attached to a dynamically created PV, based on the allowed zones specified in storage class when the datastore under the zone is present in another datacenter
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify a pod is created and attached to a dynamically created PV, based on the allowed zones, datastore and storage policy specified in storage class
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify a pod is created on a non-Workspace zone and attached to a dynamically created PV, based on the allowed zones and storage policy specified in storage class
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify dynamically created pv with allowed zones specified in storage class, shows the right zone information on its labels
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify dynamically created pv with multiple zones specified in the storage class, shows both the zones on its labels
EBS CSI Migration Suite [sig-storage] vcp at scale [Feature:vsphere] vsphere scale tests
EBS CSI Migration Suite [sig-storage] vcp-performance [Feature:vsphere] vcp performance tests
EBS CSI Migration Suite [sig-storage] vsphere cloud provider stress [Feature:vsphere] vsphere stress tests
EBS CSI Migration Suite [sig-storage] vsphere statefulset [Feature:vsphere] vsphere statefulset testing
... skipping 506 lines ...
#8 69.88   Installing : 7:device-mapper-libs-1.02.170-6.amzn2.5.x86_64   23/32
#8 70.44   Installing : cryptsetup-libs-1.7.4-4.amzn2.x86_64             24/32
#8 71.07   Installing : elfutils-libs-0.176-2.amzn2.x86_64               25/32
#8 71.88   Installing : systemd-libs-219-78.amzn2.0.16.x86_64            26/32
#8 72.44   Installing : 1:dbus-libs-1.10.24-7.amzn2.x86_64               27/32
#8 80.07   Installing : systemd-219-78.amzn2.0.16.x86_64                 28/32
#8 81.40 Failed to get D-Bus connection: Operation not permitted
#8 81.46   Installing : elfutils-default-yama-scope-0.176-2.amzn2.noarch 29/32
#8 82.31   Installing : 1:dbus-1.10.24-7.amzn2.x86_64                    30/32
#8 83.37   Installing : e2fsprogs-1.42.9-19.amzn2.x86_64                 31/32
#8 84.93   Installing : xfsprogs-4.5.0-18.amzn2.0.1.x86_64               32/32
#8 85.25   Verifying  : libfdisk-2.30.2-2.amzn2.0.7.x86_64                1/32
#8 85.61   Verifying  : lz4-1.7.5-2.amzn2.0.1.x86_64                      2/32
... skipping 232 lines ...
## Validating cluster test-cluster-24861.k8s.local
# Using cluster from kubectl context: test-cluster-24861.k8s.local
Validating cluster test-cluster-24861.k8s.local
W0606 14:41:13.778044    6267 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-24861-k8-1if9o7-658261299.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-24861-k8-1if9o7-658261299.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0606 14:41:23.839003    6267 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-24861-k8-1if9o7-658261299.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-24861-k8-1if9o7-658261299.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0606 14:41:33.910665    6267 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-24861-k8-1if9o7-658261299.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-24861-k8-1if9o7-658261299.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0606 14:41:43.974942    6267 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-24861-k8-1if9o7-658261299.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-24861-k8-1if9o7-658261299.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0606 14:41:54.028845    6267 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-24861-k8-1if9o7-658261299.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-24861-k8-1if9o7-658261299.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0606 14:42:04.078885    6267 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-24861-k8-1if9o7-658261299.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-24861-k8-1if9o7-658261299.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0606 14:42:20.974909    6267 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-24861-k8-1if9o7-658261299.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-24861-k8-1if9o7-658261299.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0606 14:42:42.804769    6267 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the server ("") has prevented the request from succeeding (get nodes)
W0606 14:43:05.465491    6267 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the server ("") has prevented the request from succeeding (get nodes)
W0606 14:43:27.270543    6267 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the server ("") has prevented the request from succeeding (get nodes)
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c5.large	1	1	us-west-2a
nodes-us-west-2b	Node	c5.large	1	1	us-west-2b
nodes-us-west-2c	Node	c5.large	1	1	us-west-2c
... skipping 6 lines ...
KIND	NAME			MESSAGE
Machine	i-072205068f0a554b1	machine "i-072205068f0a554b1" has not yet joined cluster
Machine	i-08e946b39887a82a2	machine "i-08e946b39887a82a2" has not yet joined cluster
Machine	i-0cd903533a771549c	machine "i-0cd903533a771549c" has not yet joined cluster
Node	ip-172-20-41-140.us-west-2.compute.internal	node "ip-172-20-41-140.us-west-2.compute.internal" of role "master" is not ready
Validation Failed
W0606 14:43:50.490859    6267 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c5.large	1	1	us-west-2a
nodes-us-west-2b	Node	c5.large	1	1	us-west-2b
... skipping 13 lines ...
Node	ip-172-20-41-140.us-west-2.compute.internal	master "ip-172-20-41-140.us-west-2.compute.internal" is missing kube-scheduler pod
Pod	kube-system/coredns-8f5559c9b-d649g		system-cluster-critical pod "coredns-8f5559c9b-d649g" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-ldqdt	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-ldqdt" is pending
Pod	kube-system/dns-controller-5d59c585d8-2tt9w	system-cluster-critical pod "dns-controller-5d59c585d8-2tt9w" is pending
Pod	kube-system/kops-controller-r4mjg		system-node-critical pod "kops-controller-r4mjg" is pending
Validation Failed
W0606 14:44:02.871162    6267 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c5.large	1	1	us-west-2a
nodes-us-west-2b	Node	c5.large	1	1	us-west-2b
... skipping 14 lines ...
Pod	kube-system/coredns-8f5559c9b-d649g		system-cluster-critical pod "coredns-8f5559c9b-d649g" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-ldqdt	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-ldqdt" is pending
Pod	kube-system/dns-controller-5d59c585d8-2tt9w	system-cluster-critical pod "dns-controller-5d59c585d8-2tt9w" is pending
Pod	kube-system/kops-controller-r4mjg		system-node-critical pod "kops-controller-r4mjg" is pending
Pod	kube-system/kube-proxy-ip-172-20-41-140.us-west-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-41-140.us-west-2.compute.internal" is pending
Validation Failed
W0606 14:44:14.952655    6267 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c5.large	1	1	us-west-2a
nodes-us-west-2b	Node	c5.large	1	1	us-west-2b
... skipping 11 lines ...
Node	ip-172-20-41-140.us-west-2.compute.internal	master "ip-172-20-41-140.us-west-2.compute.internal" is missing kube-apiserver pod
Node	ip-172-20-41-140.us-west-2.compute.internal	master "ip-172-20-41-140.us-west-2.compute.internal" is missing kube-controller-manager pod
Node	ip-172-20-41-140.us-west-2.compute.internal	master "ip-172-20-41-140.us-west-2.compute.internal" is missing kube-scheduler pod
Pod	kube-system/coredns-8f5559c9b-d649g		system-cluster-critical pod "coredns-8f5559c9b-d649g" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-ldqdt	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-ldqdt" is pending
Validation Failed
W0606 14:44:27.267332    6267 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c5.large	1	1	us-west-2a
nodes-us-west-2b	Node	c5.large	1	1	us-west-2b
... skipping 13 lines ...
Node	ip-172-20-41-140.us-west-2.compute.internal	master "ip-172-20-41-140.us-west-2.compute.internal" is missing kube-scheduler pod
Pod	kube-system/coredns-8f5559c9b-d649g		system-cluster-critical pod "coredns-8f5559c9b-d649g" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-ldqdt	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-ldqdt" is pending
Pod	kube-system/etcd-manager-main-ip-172-20-41-140.us-west-2.compute.internal	system-cluster-critical pod "etcd-manager-main-ip-172-20-41-140.us-west-2.compute.internal" is pending
Pod	kube-system/kube-controller-manager-ip-172-20-41-140.us-west-2.compute.internal	system-cluster-critical pod "kube-controller-manager-ip-172-20-41-140.us-west-2.compute.internal" is pending
Validation Failed
W0606 14:44:39.647524    6267 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c5.large	1	1	us-west-2a
nodes-us-west-2b	Node	c5.large	1	1	us-west-2b
... skipping 8 lines ...
Machine	i-072205068f0a554b1	machine "i-072205068f0a554b1" has not yet joined cluster
Machine	i-08e946b39887a82a2	machine "i-08e946b39887a82a2" has not yet joined cluster
Machine	i-0cd903533a771549c	machine "i-0cd903533a771549c" has not yet joined cluster
Pod	kube-system/coredns-8f5559c9b-d649g		system-cluster-critical pod "coredns-8f5559c9b-d649g" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-ldqdt	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-ldqdt" is pending
Validation Failed
W0606 14:44:51.797770    6267 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c5.large	1	1	us-west-2a
nodes-us-west-2b	Node	c5.large	1	1	us-west-2b
... skipping 9 lines ...
VALIDATION ERRORS
KIND	NAME	MESSAGE
Node	ip-172-20-71-6.us-west-2.compute.internal	node "ip-172-20-71-6.us-west-2.compute.internal" of role "node" is not ready
Pod	kube-system/coredns-8f5559c9b-d649g		system-cluster-critical pod "coredns-8f5559c9b-d649g" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-ldqdt	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-ldqdt" is pending
Validation Failed
W0606 14:45:04.081287    6267 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c5.large	1	1	us-west-2a
nodes-us-west-2b	Node	c5.large	1	1	us-west-2b
... skipping 251 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240

      Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:236
------------------------------
S
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  6 14:53:01.563: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-24861.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename volumemode
Jun  6 14:53:02.122: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297
Jun  6 14:53:02.254: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Jun  6 14:53:02.855: INFO: Successfully created a new PD: "aws://us-west-2a/vol-03d6063a48f901730".
Jun  6 14:53:02.855: INFO: Creating resource for pre-provisioned PV
Jun  6 14:53:02.855: INFO: Creating PVC and PV
... skipping 5 lines ...
Jun  6 14:53:07.396: INFO: PersistentVolumeClaim pvc-q7vtv found but phase is Pending instead of Bound.
Jun  6 14:53:09.461: INFO: PersistentVolumeClaim pvc-q7vtv found but phase is Pending instead of Bound.
Jun  6 14:53:11.525: INFO: PersistentVolumeClaim pvc-q7vtv found and phase=Bound (8.330452104s)
Jun  6 14:53:11.525: INFO: Waiting up to 3m0s for PersistentVolume aws-spf9p to have phase Bound
Jun  6 14:53:11.587: INFO: PersistentVolume aws-spf9p found and phase=Bound (62.002592ms)
STEP: Creating pod
STEP: Waiting for the pod to fail
Jun  6 14:53:13.975: INFO: Deleting pod "pod-ba56e594-7ad3-4b48-b124-3ec8f6dfece9" in namespace "volumemode-6517"
Jun  6 14:53:14.053: INFO: Wait up to 5m0s for pod "pod-ba56e594-7ad3-4b48-b124-3ec8f6dfece9" to be fully deleted
STEP: Deleting pv and pvc
Jun  6 14:53:22.180: INFO: Deleting PersistentVolumeClaim "pvc-q7vtv"
Jun  6 14:53:22.246: INFO: Deleting PersistentVolume "aws-spf9p"
Jun  6 14:53:22.510: INFO: Successfully deleted PD "aws://us-west-2a/vol-03d6063a48f901730".
... skipping 7 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to use a volume in a pod with mismatched mode [Slow]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode
  should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  6 14:53:01.527: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-24861.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename volumemode
Jun  6 14:53:02.156: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297
Jun  6 14:53:02.288: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Jun  6 14:53:02.288: INFO: Creating resource for dynamic PV
Jun  6 14:53:02.288: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass volumemode-55542hvrw
STEP: creating a claim
STEP: Creating pod
STEP: Waiting for the pod to fail
Jun  6 14:53:12.874: INFO: Deleting pod "pod-456838d6-25c7-4c7a-8cae-e5af5e04920d" in namespace "volumemode-5554"
Jun  6 14:53:12.942: INFO: Wait up to 5m0s for pod "pod-456838d6-25c7-4c7a-8cae-e5af5e04920d" to be fully deleted
STEP: Deleting pvc
Jun  6 14:53:21.211: INFO: Deleting PersistentVolumeClaim "aws9q52w"
Jun  6 14:53:21.279: INFO: Waiting up to 5m0s for PersistentVolume pvc-9a233408-a10c-4d90-b00d-08053ef6bf9e to get deleted
Jun  6 14:53:21.346: INFO: PersistentVolume pvc-9a233408-a10c-4d90-b00d-08053ef6bf9e found and phase=Released (66.524472ms)
... skipping 9 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Dynamic PV (block volmode)] volumeMode
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to use a volume in a pod with mismatched mode [Slow]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297
------------------------------
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
... skipping 36 lines ...
Jun  6 14:53:02.275: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass provisioning-2967pn8pt
STEP: creating a claim
Jun  6 14:53:02.345: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-mcd9
STEP: Creating a pod to test subpath
Jun  6 14:53:02.635: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-mcd9" in namespace "provisioning-2967" to be "Succeeded or Failed"
Jun  6 14:53:02.703: INFO: Pod "pod-subpath-test-dynamicpv-mcd9": Phase="Pending", Reason="", readiness=false. Elapsed: 68.02189ms
Jun  6 14:53:04.768: INFO: Pod "pod-subpath-test-dynamicpv-mcd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133648164s
Jun  6 14:53:06.835: INFO: Pod "pod-subpath-test-dynamicpv-mcd9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.200608985s
Jun  6 14:53:08.903: INFO: Pod "pod-subpath-test-dynamicpv-mcd9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.268588147s
Jun  6 14:53:10.973: INFO: Pod "pod-subpath-test-dynamicpv-mcd9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.338416051s
Jun  6 14:53:13.039: INFO: Pod "pod-subpath-test-dynamicpv-mcd9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.404739333s
Jun  6 14:53:15.105: INFO: Pod "pod-subpath-test-dynamicpv-mcd9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.470295386s
Jun  6 14:53:17.171: INFO: Pod "pod-subpath-test-dynamicpv-mcd9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.535867705s
Jun  6 14:53:19.237: INFO: Pod "pod-subpath-test-dynamicpv-mcd9": Phase="Pending", Reason="", readiness=false. Elapsed: 16.602265558s
Jun  6 14:53:21.303: INFO: Pod "pod-subpath-test-dynamicpv-mcd9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.668036317s
Jun  6 14:53:23.430: INFO: Pod "pod-subpath-test-dynamicpv-mcd9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.795688563s
STEP: Saw pod success
Jun  6 14:53:23.506: INFO: Pod "pod-subpath-test-dynamicpv-mcd9" satisfied condition "Succeeded or Failed"
Jun  6 14:53:23.573: INFO: Trying to get logs from node ip-172-20-54-236.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-mcd9 container test-container-volume-dynamicpv-mcd9: <nil>
STEP: delete the pod
Jun  6 14:53:23.719: INFO: Waiting for pod pod-subpath-test-dynamicpv-mcd9 to disappear
Jun  6 14:53:23.784: INFO: Pod pod-subpath-test-dynamicpv-mcd9 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-mcd9
Jun  6 14:53:23.784: INFO: Deleting pod "pod-subpath-test-dynamicpv-mcd9" in namespace "provisioning-2967"
... skipping 72 lines ...
Jun  6 14:53:27.053: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Jun  6 14:53:27.655: INFO: Successfully created a new PD: "aws://us-west-2a/vol-03ee244160d31b028".
Jun  6 14:53:27.655: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-d5p4
STEP: Creating a pod to test exec-volume-test
Jun  6 14:53:27.725: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-d5p4" in namespace "volume-6866" to be "Succeeded or Failed"
Jun  6 14:53:27.794: INFO: Pod "exec-volume-test-inlinevolume-d5p4": Phase="Pending", Reason="", readiness=false. Elapsed: 69.527017ms
Jun  6 14:53:29.868: INFO: Pod "exec-volume-test-inlinevolume-d5p4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143435631s
Jun  6 14:53:31.936: INFO: Pod "exec-volume-test-inlinevolume-d5p4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.210736214s
Jun  6 14:53:34.010: INFO: Pod "exec-volume-test-inlinevolume-d5p4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.284696136s
Jun  6 14:53:36.077: INFO: Pod "exec-volume-test-inlinevolume-d5p4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.351660101s
Jun  6 14:53:38.149: INFO: Pod "exec-volume-test-inlinevolume-d5p4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.423817518s
Jun  6 14:53:40.216: INFO: Pod "exec-volume-test-inlinevolume-d5p4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.490875329s
STEP: Saw pod success
Jun  6 14:53:40.216: INFO: Pod "exec-volume-test-inlinevolume-d5p4" satisfied condition "Succeeded or Failed"
Jun  6 14:53:40.287: INFO: Trying to get logs from node ip-172-20-54-236.us-west-2.compute.internal pod exec-volume-test-inlinevolume-d5p4 container exec-container-inlinevolume-d5p4: <nil>
STEP: delete the pod
Jun  6 14:53:40.428: INFO: Waiting for pod exec-volume-test-inlinevolume-d5p4 to disappear
Jun  6 14:53:40.495: INFO: Pod exec-volume-test-inlinevolume-d5p4 no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-d5p4
Jun  6 14:53:40.495: INFO: Deleting pod "exec-volume-test-inlinevolume-d5p4" in namespace "volume-6866"
Jun  6 14:53:40.710: INFO: Couldn't delete PD "aws://us-west-2a/vol-03ee244160d31b028", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-03ee244160d31b028 is currently attached to i-0cd903533a771549c
	status code: 400, request id: 7725e1d1-44bd-4795-9f68-12e20cef77c3
Jun  6 14:53:46.095: INFO: Couldn't delete PD "aws://us-west-2a/vol-03ee244160d31b028", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-03ee244160d31b028 is currently attached to i-0cd903533a771549c
	status code: 400, request id: b559c3ec-54a3-46a6-bfa3-8374daff22b7
Jun  6 14:53:51.568: INFO: Successfully deleted PD "aws://us-west-2a/vol-03ee244160d31b028".
[AfterEach] [Testpattern: Inline-volume (xfs)][Slow] volumes
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  6 14:53:51.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-6866" for this suite.
... skipping 53 lines ...
Jun  6 14:53:23.027: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass provisioning-2370hp2cg
STEP: creating a claim
Jun  6 14:53:23.150: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-brww
STEP: Creating a pod to test subpath
Jun  6 14:53:23.346: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-brww" in namespace "provisioning-2370" to be "Succeeded or Failed"
Jun  6 14:53:23.410: INFO: Pod "pod-subpath-test-dynamicpv-brww": Phase="Pending", Reason="", readiness=false. Elapsed: 63.201698ms
Jun  6 14:53:25.472: INFO: Pod "pod-subpath-test-dynamicpv-brww": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125898642s
Jun  6 14:53:27.536: INFO: Pod "pod-subpath-test-dynamicpv-brww": Phase="Pending", Reason="", readiness=false. Elapsed: 4.189373962s
Jun  6 14:53:29.600: INFO: Pod "pod-subpath-test-dynamicpv-brww": Phase="Pending", Reason="", readiness=false. Elapsed: 6.2532883s
Jun  6 14:53:31.663: INFO: Pod "pod-subpath-test-dynamicpv-brww": Phase="Pending", Reason="", readiness=false. Elapsed: 8.31671582s
Jun  6 14:53:33.727: INFO: Pod "pod-subpath-test-dynamicpv-brww": Phase="Pending", Reason="", readiness=false. Elapsed: 10.380505696s
Jun  6 14:53:35.793: INFO: Pod "pod-subpath-test-dynamicpv-brww": Phase="Pending", Reason="", readiness=false. Elapsed: 12.446822714s
Jun  6 14:53:37.871: INFO: Pod "pod-subpath-test-dynamicpv-brww": Phase="Pending", Reason="", readiness=false. Elapsed: 14.524866451s
Jun  6 14:53:39.936: INFO: Pod "pod-subpath-test-dynamicpv-brww": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.589080307s
STEP: Saw pod success
Jun  6 14:53:39.936: INFO: Pod "pod-subpath-test-dynamicpv-brww" satisfied condition "Succeeded or Failed"
Jun  6 14:53:40.006: INFO: Trying to get logs from node ip-172-20-54-236.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-brww container test-container-subpath-dynamicpv-brww: <nil>
STEP: delete the pod
Jun  6 14:53:40.161: INFO: Waiting for pod pod-subpath-test-dynamicpv-brww to disappear
Jun  6 14:53:40.223: INFO: Pod pod-subpath-test-dynamicpv-brww no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-brww
Jun  6 14:53:40.223: INFO: Deleting pod "pod-subpath-test-dynamicpv-brww" in namespace "provisioning-2370"
... skipping 141 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256

      Distro debian doesn't support ntfs -- skipping

      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:127
------------------------------
... skipping 112 lines ...
STEP: Deleting pod hostexec-ip-172-20-54-236.us-west-2.compute.internal-wshlk in namespace volumemode-6748
Jun  6 14:54:17.741: INFO: Deleting pod "pod-d7967fff-82ca-44e7-9036-a72277c9c212" in namespace "volumemode-6748"
Jun  6 14:54:17.814: INFO: Wait up to 5m0s for pod "pod-d7967fff-82ca-44e7-9036-a72277c9c212" to be fully deleted
STEP: Deleting pv and pvc
Jun  6 14:54:19.949: INFO: Deleting PersistentVolumeClaim "pvc-rnm4m"
Jun  6 14:54:20.032: INFO: Deleting PersistentVolume "aws-mpqpq"
Jun  6 14:54:20.275: INFO: Couldn't delete PD "aws://us-west-2a/vol-010875c36c7e46660", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-010875c36c7e46660 is currently attached to i-0cd903533a771549c
	status code: 400, request id: b392a0c6-70a9-4f90-80cf-6d997b8affc9
Jun  6 14:54:25.774: INFO: Successfully deleted PD "aws://us-west-2a/vol-010875c36c7e46660".
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  6 14:54:25.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volumemode-6748" for this suite.
... skipping 319 lines ...
Jun  6 14:54:01.304: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass provisioning-7936svkwf
STEP: creating a claim
Jun  6 14:54:01.395: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-hb6r
STEP: Creating a pod to test multi_subpath
Jun  6 14:54:01.714: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-hb6r" in namespace "provisioning-7936" to be "Succeeded or Failed"
Jun  6 14:54:01.895: INFO: Pod "pod-subpath-test-dynamicpv-hb6r": Phase="Pending", Reason="", readiness=false. Elapsed: 181.401024ms
Jun  6 14:54:03.966: INFO: Pod "pod-subpath-test-dynamicpv-hb6r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.252229037s
Jun  6 14:54:06.035: INFO: Pod "pod-subpath-test-dynamicpv-hb6r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320989806s
Jun  6 14:54:08.107: INFO: Pod "pod-subpath-test-dynamicpv-hb6r": Phase="Pending", Reason="", readiness=false. Elapsed: 6.392670335s
Jun  6 14:54:10.173: INFO: Pod "pod-subpath-test-dynamicpv-hb6r": Phase="Pending", Reason="", readiness=false. Elapsed: 8.45922668s
Jun  6 14:54:12.241: INFO: Pod "pod-subpath-test-dynamicpv-hb6r": Phase="Pending", Reason="", readiness=false. Elapsed: 10.527277961s
Jun  6 14:54:14.308: INFO: Pod "pod-subpath-test-dynamicpv-hb6r": Phase="Pending", Reason="", readiness=false. Elapsed: 12.594095482s
Jun  6 14:54:16.376: INFO: Pod "pod-subpath-test-dynamicpv-hb6r": Phase="Pending", Reason="", readiness=false. Elapsed: 14.66219087s
Jun  6 14:54:18.441: INFO: Pod "pod-subpath-test-dynamicpv-hb6r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.727320805s
STEP: Saw pod success
Jun  6 14:54:18.441: INFO: Pod "pod-subpath-test-dynamicpv-hb6r" satisfied condition "Succeeded or Failed"
Jun  6 14:54:18.506: INFO: Trying to get logs from node ip-172-20-54-236.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-hb6r container test-container-subpath-dynamicpv-hb6r: <nil>
STEP: delete the pod
Jun  6 14:54:18.654: INFO: Waiting for pod pod-subpath-test-dynamicpv-hb6r to disappear
Jun  6 14:54:18.722: INFO: Pod pod-subpath-test-dynamicpv-hb6r no longer exists
STEP: Deleting pod
Jun  6 14:54:18.722: INFO: Deleting pod "pod-subpath-test-dynamicpv-hb6r" in namespace "provisioning-7936"
... skipping 25 lines ...
should support creating multiple subpath from same volumes [Slow]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:294
------------------------------
S
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 6 14:54:42.183: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-24861.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename volumemode
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297
Jun 6 14:54:42.522: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Jun 6 14:54:42.522: INFO: Creating resource for dynamic PV
Jun 6 14:54:42.522: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass volumemode-46497ps2w
STEP: creating a claim
STEP: Creating pod
STEP: Waiting for the pod to fail
Jun 6 14:54:48.995: INFO: Deleting pod "pod-c7965b60-ae47-43de-a231-5201169b1cc9" in namespace "volumemode-4649"
Jun 6 14:54:49.061: INFO: Wait up to 5m0s for pod "pod-c7965b60-ae47-43de-a231-5201169b1cc9" to be fully deleted
STEP: Deleting pvc
Jun 6 14:55:01.343: INFO: Deleting PersistentVolumeClaim "aws6k5sw"
Jun 6 14:55:01.417: INFO: Waiting up to 5m0s for PersistentVolume pvc-07fbc3e4-50a3-4a63-822e-391b14b7b967 to get deleted
Jun 6 14:55:01.495: INFO: PersistentVolume pvc-07fbc3e4-50a3-4a63-822e-391b14b7b967 found and phase=Released (77.17147ms)
... skipping 9 lines ...
[ebs-csi-migration] EBS CSI Migration
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
  [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail to use a volume in a pod with mismatched mode [Slow]
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297
------------------------------
S
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
... skipping 172 lines ...
[ebs-csi-migration] EBS CSI Migration
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
  [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240

    Distro debian doesn't support ntfs -- skipping
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:127
------------------------------
... skipping 108 lines ...
Jun 6 14:54:26.239: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass provisioning-48706jd68
STEP: creating a claim
Jun 6 14:54:26.303: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-rd7b
STEP: Creating a pod to test atomic-volume-subpath
Jun 6 14:54:26.507: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-rd7b" in namespace "provisioning-4870" to be "Succeeded or Failed"
Jun 6 14:54:26.572: INFO: Pod "pod-subpath-test-dynamicpv-rd7b": Phase="Pending", Reason="", readiness=false. Elapsed: 65.346319ms
Jun 6 14:54:28.636: INFO: Pod "pod-subpath-test-dynamicpv-rd7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129395679s
Jun 6 14:54:30.701: INFO: Pod "pod-subpath-test-dynamicpv-rd7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194037754s
Jun 6 14:54:32.764: INFO: Pod "pod-subpath-test-dynamicpv-rd7b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.257215137s
Jun 6 14:54:34.833: INFO: Pod "pod-subpath-test-dynamicpv-rd7b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.326795609s
Jun 6 14:54:36.897: INFO: Pod "pod-subpath-test-dynamicpv-rd7b": Phase="Running", Reason="", readiness=true. Elapsed: 10.390229283s
... skipping 5 lines ...
Jun 6 14:54:49.340: INFO: Pod "pod-subpath-test-dynamicpv-rd7b": Phase="Running", Reason="", readiness=true. Elapsed: 22.833119505s
Jun 6 14:54:51.402: INFO: Pod "pod-subpath-test-dynamicpv-rd7b": Phase="Running", Reason="", readiness=true. Elapsed: 24.895550327s
Jun 6 14:54:53.471: INFO: Pod "pod-subpath-test-dynamicpv-rd7b": Phase="Running", Reason="", readiness=true. Elapsed: 26.964077733s
Jun 6 14:54:55.534: INFO: Pod "pod-subpath-test-dynamicpv-rd7b": Phase="Running", Reason="", readiness=true. Elapsed: 29.027320917s
Jun 6 14:54:57.600: INFO: Pod "pod-subpath-test-dynamicpv-rd7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.093639494s
STEP: Saw pod success
Jun 6 14:54:57.600: INFO: Pod "pod-subpath-test-dynamicpv-rd7b" satisfied condition "Succeeded or Failed"
Jun 6 14:54:57.663: INFO: Trying to get logs from node ip-172-20-54-236.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-rd7b container test-container-subpath-dynamicpv-rd7b: <nil>
STEP: delete the pod
Jun 6 14:54:57.800: INFO: Waiting for pod pod-subpath-test-dynamicpv-rd7b to disappear
Jun 6 14:54:57.862: INFO: Pod pod-subpath-test-dynamicpv-rd7b no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-rd7b
Jun 6 14:54:57.862: INFO: Deleting pod "pod-subpath-test-dynamicpv-rd7b" in namespace "provisioning-4870"
... skipping 79 lines ...
Jun 6 14:55:08.766: INFO: PersistentVolumeClaim pvc-mkx2h found but phase is Pending instead of Bound.
Jun 6 14:55:10.833: INFO: PersistentVolumeClaim pvc-mkx2h found and phase=Bound (2.133755294s)
Jun 6 14:55:10.833: INFO: Waiting up to 3m0s for PersistentVolume aws-2b2nh to have phase Bound
Jun 6 14:55:10.901: INFO: PersistentVolume aws-2b2nh found and phase=Bound (68.018073ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-nrg9
STEP: Creating a pod to test exec-volume-test
Jun 6 14:55:11.141: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-nrg9" in namespace "volume-8563" to be "Succeeded or Failed"
Jun 6 14:55:11.208: INFO: Pod "exec-volume-test-preprovisionedpv-nrg9": Phase="Pending", Reason="", readiness=false. Elapsed: 66.347629ms
Jun 6 14:55:13.282: INFO: Pod "exec-volume-test-preprovisionedpv-nrg9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141055492s
Jun 6 14:55:15.351: INFO: Pod "exec-volume-test-preprovisionedpv-nrg9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.209531502s
Jun 6 14:55:17.419: INFO: Pod "exec-volume-test-preprovisionedpv-nrg9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.277695062s
STEP: Saw pod success
Jun 6 14:55:17.419: INFO: Pod "exec-volume-test-preprovisionedpv-nrg9" satisfied condition "Succeeded or Failed"
Jun 6 14:55:17.492: INFO: Trying to get logs from node ip-172-20-54-236.us-west-2.compute.internal pod exec-volume-test-preprovisionedpv-nrg9 container exec-container-preprovisionedpv-nrg9: <nil>
STEP: delete the pod
Jun 6 14:55:17.636: INFO: Waiting for pod exec-volume-test-preprovisionedpv-nrg9 to disappear
Jun 6 14:55:17.702: INFO: Pod exec-volume-test-preprovisionedpv-nrg9 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-nrg9
Jun 6 14:55:17.702: INFO: Deleting pod "exec-volume-test-preprovisionedpv-nrg9" in namespace "volume-8563"
STEP: Deleting pv and pvc
Jun 6 14:55:17.770: INFO: Deleting PersistentVolumeClaim "pvc-mkx2h"
Jun 6 14:55:17.837: INFO: Deleting PersistentVolume "aws-2b2nh"
Jun 6 14:55:18.061: INFO: Couldn't delete PD "aws://us-west-2a/vol-003482eaf37084bac", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-003482eaf37084bac is currently attached to i-0cd903533a771549c
	status code: 400, request id: 4bc44fbe-dd68-4fb7-b321-a68103a87c08
Jun 6 14:55:23.489: INFO: Couldn't delete PD "aws://us-west-2a/vol-003482eaf37084bac", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-003482eaf37084bac is currently attached to i-0cd903533a771549c
	status code: 400, request id: 6b1c2007-4746-4290-bafb-c65c0a2a386f
Jun 6 14:55:29.080: INFO: Successfully deleted PD "aws://us-west-2a/vol-003482eaf37084bac".
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 6 14:55:29.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-8563" for this suite.
... skipping 20 lines ...
[ebs-csi-migration] EBS CSI Migration
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
  [Testpattern: Pre-provisioned PV (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278

    Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:236
------------------------------
... skipping 49 lines ...
Jun 6 14:55:07.161: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi}
STEP: creating a StorageClass volume-expand-3394sghps
STEP: creating a claim
Jun 6 14:55:07.228: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Expanding non-expandable pvc
Jun 6 14:55:07.371: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>} BinarySI}
Jun 6 14:55:07.505: INFO: Error updating pvc aws4lp8r: PersistentVolumeClaim "aws4lp8r" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-3394sghps",
  	... // 2 identical fields
  }
... the same "spec is immutable" validation error and field diff repeated on each retry at 14:55:09.918, 14:55:11.913, 14:55:13.919, 14:55:15.914, 14:55:17.923, 14:55:19.912, 14:55:21.913, 14:55:23.914, 14:55:25.911, 14:55:27.912, 14:55:29.915, 14:55:31.913, 14:55:33.909, 14:55:35.914, 14:55:37.912, and 14:55:38.055 ...
... skipping 69 lines ...
Driver aws doesn't support ext3 -- skipping
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 6 14:55:14.171: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-24861.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278
Jun 6 14:55:14.486: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Jun 6 14:55:14.486: INFO: Creating resource for dynamic PV
Jun 6 14:55:14.487: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass provisioning-6373llj8c
STEP: creating a claim
Jun 6 14:55:14.558: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-msjw
STEP: Checking for subpath error in container status
Jun 6 14:55:28.895: INFO: Deleting pod "pod-subpath-test-dynamicpv-msjw" in namespace "provisioning-6373"
Jun 6 14:55:28.962: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-msjw" to be fully deleted
STEP: Deleting pod
Jun 6 14:55:41.090: INFO: Deleting pod "pod-subpath-test-dynamicpv-msjw" in namespace "provisioning-6373"
STEP: Deleting pvc
Jun 6 14:55:41.321: INFO: Deleting PersistentVolumeClaim "awsf9bn6"
... skipping 11 lines ...
[ebs-csi-migration] EBS CSI Migration
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
  [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
... skipping 6 lines ...
[ebs-csi-migration] EBS CSI Migration
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
  [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267

    Distro debian doesn't support ntfs -- skipping
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:127
------------------------------
... skipping 131 lines ...
Jun 6 14:55:38.792: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass volume-5768dgnt5
STEP: creating a claim
Jun 6 14:55:38.865: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-mlxv
STEP: Creating a pod to test exec-volume-test
Jun 6 14:55:39.065: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-mlxv" in namespace "volume-5768" to be "Succeeded or Failed"
Jun 6 14:55:39.129: INFO: Pod "exec-volume-test-dynamicpv-mlxv": Phase="Pending", Reason="", readiness=false. Elapsed: 64.90645ms
Jun 6 14:55:41.195: INFO: Pod "exec-volume-test-dynamicpv-mlxv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130904314s
Jun 6 14:55:43.267: INFO: Pod "exec-volume-test-dynamicpv-mlxv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.202446733s
Jun 6 14:55:45.333: INFO: Pod "exec-volume-test-dynamicpv-mlxv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.268738009s
Jun 6 14:55:47.403: INFO: Pod "exec-volume-test-dynamicpv-mlxv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.338063292s
Jun 6 14:55:49.471: INFO: Pod "exec-volume-test-dynamicpv-mlxv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.406065232s
STEP: Saw pod success
Jun 6 14:55:49.471: INFO: Pod "exec-volume-test-dynamicpv-mlxv" satisfied condition "Succeeded or Failed"
Jun 6 14:55:49.542: INFO: Trying to get logs from node ip-172-20-54-236.us-west-2.compute.internal pod exec-volume-test-dynamicpv-mlxv container exec-container-dynamicpv-mlxv: <nil>
STEP: delete the pod
Jun 6 14:55:49.757: INFO: Waiting for pod exec-volume-test-dynamicpv-mlxv to disappear
Jun 6 14:55:49.826: INFO: Pod exec-volume-test-dynamicpv-mlxv no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-mlxv
Jun 6 14:55:49.827: INFO: Deleting pod "exec-volume-test-dynamicpv-mlxv" in namespace "volume-5768"
... skipping 121 lines ...
Jun 6 14:55:47.783: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass provisioning-1349v2cbt
STEP: creating a claim
Jun 6 14:55:47.851: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-5cg8
STEP: Creating a pod to test subpath
Jun 6 14:55:48.062: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-5cg8" in namespace "provisioning-1349" to be "Succeeded or Failed"
Jun 6 14:55:48.135: INFO: Pod "pod-subpath-test-dynamicpv-5cg8": Phase="Pending", Reason="", readiness=false. Elapsed: 72.252028ms
Jun 6 14:55:50.203: INFO: Pod "pod-subpath-test-dynamicpv-5cg8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14068696s
Jun 6 14:55:52.275: INFO: Pod "pod-subpath-test-dynamicpv-5cg8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.212326205s
Jun 6 14:55:54.341: INFO: Pod "pod-subpath-test-dynamicpv-5cg8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.278753326s
Jun 6 14:55:56.408: INFO: Pod "pod-subpath-test-dynamicpv-5cg8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.345388723s
Jun 6 14:55:58.474: INFO: Pod "pod-subpath-test-dynamicpv-5cg8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.411899214s
Jun 6 14:56:00.541: INFO: Pod "pod-subpath-test-dynamicpv-5cg8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.478484072s
STEP: Saw pod success
Jun 6 14:56:00.541: INFO: Pod "pod-subpath-test-dynamicpv-5cg8" satisfied condition "Succeeded or Failed"
Jun 6 14:56:00.606: INFO: Trying to get logs from node ip-172-20-54-236.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-5cg8 container test-container-subpath-dynamicpv-5cg8: <nil>
STEP: delete the pod
Jun 6 14:56:00.744: INFO: Waiting for pod pod-subpath-test-dynamicpv-5cg8 to disappear
Jun 6 14:56:00.809: INFO: Pod pod-subpath-test-dynamicpv-5cg8 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-5cg8
Jun 6 14:56:00.809: INFO: Deleting pod "pod-subpath-test-dynamicpv-5cg8" in namespace "provisioning-1349"
... skipping 951 lines ...
Jun 6 14:57:07.573: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass provisioning-52049dn9t
STEP: creating a claim
Jun 6 14:57:07.640: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-2k8j
STEP: Creating a pod to test subpath
Jun 6 14:57:07.841: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-2k8j" in namespace "provisioning-5204" to be "Succeeded or Failed"
Jun 6 14:57:07.906: INFO: Pod "pod-subpath-test-dynamicpv-2k8j": Phase="Pending", Reason="", readiness=false. Elapsed: 65.574279ms
Jun 6 14:57:09.974: INFO: Pod "pod-subpath-test-dynamicpv-2k8j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133794655s
Jun 6 14:57:12.042: INFO: Pod "pod-subpath-test-dynamicpv-2k8j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.201386297s
Jun 6 14:57:14.108: INFO: Pod "pod-subpath-test-dynamicpv-2k8j": Phase="Pending", Reason="", readiness=false. Elapsed: 6.267424595s
Jun 6 14:57:16.178: INFO: Pod "pod-subpath-test-dynamicpv-2k8j": Phase="Pending", Reason="", readiness=false. Elapsed: 8.337297432s
Jun 6 14:57:18.248: INFO: Pod "pod-subpath-test-dynamicpv-2k8j": Phase="Pending", Reason="", readiness=false. Elapsed: 10.407640376s
Jun 6 14:57:20.316: INFO: Pod "pod-subpath-test-dynamicpv-2k8j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.474977767s
STEP: Saw pod success
Jun 6 14:57:20.316: INFO: Pod "pod-subpath-test-dynamicpv-2k8j" satisfied condition "Succeeded or Failed"
Jun 6 14:57:20.382: INFO: Trying to get logs from node ip-172-20-54-236.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-2k8j container test-container-subpath-dynamicpv-2k8j: <nil>
STEP: delete the pod
Jun 6 14:57:20.533: INFO: Waiting for pod pod-subpath-test-dynamicpv-2k8j to disappear
Jun 6 14:57:20.601: INFO: Pod pod-subpath-test-dynamicpv-2k8j no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-2k8j
Jun 6 14:57:20.602: INFO: Deleting pod "pod-subpath-test-dynamicpv-2k8j" in namespace "provisioning-5204"
... skipping 267 lines ...
Jun 6 14:57:36.825: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass provisioning-870gb8xh
STEP: creating a claim
Jun 6 14:57:36.891: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-6bgt
STEP: Creating a pod to test subpath
Jun 6 14:57:37.094: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-6bgt" in namespace "provisioning-870" to be "Succeeded or Failed"
Jun 6 14:57:37.160: INFO: Pod "pod-subpath-test-dynamicpv-6bgt": Phase="Pending", Reason="", readiness=false. Elapsed: 65.484139ms
Jun 6 14:57:39.226: INFO: Pod "pod-subpath-test-dynamicpv-6bgt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132008537s
Jun 6 14:57:41.303: INFO: Pod "pod-subpath-test-dynamicpv-6bgt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208480986s
Jun 6 14:57:43.370: INFO: Pod "pod-subpath-test-dynamicpv-6bgt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.275608945s
Jun 6 14:57:45.437: INFO: Pod "pod-subpath-test-dynamicpv-6bgt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.343142199s
Jun 6 14:57:47.505: INFO: Pod "pod-subpath-test-dynamicpv-6bgt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.41080554s
Jun 6 14:57:49.574: INFO: Pod "pod-subpath-test-dynamicpv-6bgt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.479342447s
STEP: Saw pod success
Jun 6 14:57:49.574: INFO: Pod "pod-subpath-test-dynamicpv-6bgt" satisfied condition "Succeeded or Failed"
Jun 6 14:57:49.640: INFO: Trying to get logs from node ip-172-20-54-236.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-6bgt container test-container-volume-dynamicpv-6bgt: <nil>
STEP: delete the pod
Jun 6 14:57:49.785: INFO: Waiting for pod pod-subpath-test-dynamicpv-6bgt to disappear
Jun 6 14:57:49.850: INFO: Pod pod-subpath-test-dynamicpv-6bgt no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-6bgt
Jun 6 14:57:49.850: INFO: Deleting pod "pod-subpath-test-dynamicpv-6bgt" in namespace "provisioning-870"
... skipping 173 lines ...
Jun 6 14:58:21.802: INFO: Pod aws-client still exists
Jun 6 14:58:23.802: INFO: Waiting for pod aws-client to disappear
Jun 6 14:58:23.868: INFO: Pod aws-client still exists
Jun 6 14:58:25.803: INFO: Waiting for pod aws-client to disappear
Jun 6 14:58:25.872: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
Jun 6 14:58:26.023: INFO: Couldn't delete PD "aws://us-west-2a/vol-02bc806ba40e45835", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-02bc806ba40e45835 is currently attached to i-0cd903533a771549c
	status code: 400, request id: f3164c80-5327-4849-bb0c-ae4067cc248e
Jun 6 14:58:31.450: INFO: Successfully deleted PD "aws://us-west-2a/vol-02bc806ba40e45835".
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 6 14:58:31.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-5635" for this suite. ... skipping 159 lines ... should concurrently access the single read-only volume from pods on the same node /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:337 ------------------------------ SS ------------------------------ [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow] /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297 [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 6 14:58:38.252: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-24861.k8s.local.kops.kubeconfig STEP: Building a namespace api object, basename volumemode STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to use a volume in a pod with mismatched mode [Slow]
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297 Jun 6 14:58:38.579: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics STEP: creating a test aws volume Jun 6 14:58:39.135: INFO: Successfully created a new PD: "aws://us-west-2a/vol-072c4047cb0879c47". Jun 6 14:58:39.135: INFO: Creating resource for pre-provisioned PV Jun 6 14:58:39.135: INFO: Creating PVC and PV ... skipping 2 lines ... Jun 6 14:58:39.342: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-jx4jx] to have phase Bound Jun 6 14:58:39.404: INFO: PersistentVolumeClaim pvc-jx4jx found but phase is Pending instead of Bound. Jun 6 14:58:41.469: INFO: PersistentVolumeClaim pvc-jx4jx found and phase=Bound (2.127026609s) Jun 6 14:58:41.469: INFO: Waiting up to 3m0s for PersistentVolume aws-grbt2 to have phase Bound Jun 6 14:58:41.535: INFO: PersistentVolume aws-grbt2 found and phase=Bound (65.797206ms) STEP: Creating pod STEP: Waiting for the pod to fail Jun 6 14:58:41.868: INFO: Deleting pod "pod-a1cdd116-0c9a-482c-84af-a740d0fee082" in namespace "volumemode-5136" Jun 6 14:58:41.935: INFO: Wait up to 5m0s for pod "pod-a1cdd116-0c9a-482c-84af-a740d0fee082" to be fully deleted STEP: Deleting pv and pvc Jun 6 14:58:52.061: INFO: Deleting PersistentVolumeClaim "pvc-jx4jx" Jun 6 14:58:52.127: INFO: Deleting PersistentVolume "aws-grbt2" Jun 6 14:58:52.403: INFO: Successfully deleted PD "aws://us-west-2a/vol-072c4047cb0879c47". ... skipping 7 lines ...
[ebs-csi-migration] EBS CSI Migration /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85 [Driver: aws] /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91 [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50 should fail to use a volume in a pod with mismatched mode [Slow] /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297 ------------------------------ SS ------------------------------ [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node ... skipping 136 lines ... [ebs-csi-migration] EBS CSI Migration /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85 [Driver: aws] /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91 [Testpattern: Pre-provisioned PV (default fs)] subPath /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50 should fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach] /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256 Driver supports dynamic provisioning, skipping PreprovisionedPV pattern /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:236 ------------------------------ ... skipping 8 lines ...
[ebs-csi-migration] EBS CSI Migration /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85 [Driver: aws] /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91 [Testpattern: Pre-provisioned PV (default fs)] subPath /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50 should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach] /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267 Driver supports dynamic provisioning, skipping PreprovisionedPV pattern /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:236 ------------------------------ ... skipping 168 lines ... Jun 6 14:59:08.117: INFO: Creating resource for dynamic PV Jun 6 14:59:08.117: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi} STEP: creating a StorageClass volume-expand-4396t6kdk STEP: creating a claim STEP: Expanding non-expandable pvc Jun 6 14:59:08.323: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>} BinarySI} Jun 6 14:59:08.467: INFO: Error updating pvc awskzc55: PersistentVolumeClaim "awskzc55" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims core.PersistentVolumeClaimSpec{ AccessModes: {"ReadWriteOnce"}, Selector: nil, Resources: core.ResourceRequirements{ Limits: nil, - Requests: core.ResourceList{ ... skipping 5 lines ... }, VolumeName: "", StorageClassName: &"volume-expand-4396t6kdk", ...
// 2 identical fields } ... skipping repeated "Error updating pvc awskzc55" retries (identical immutability error, Jun 6 14:59:10.608 through 14:59:38.739) ... ... skipping 102 lines ...
Jun 6 14:59:43.230: INFO: Waiting for pod aws-client to disappear Jun 6 14:59:43.293: INFO: Pod aws-client no longer exists STEP: cleaning the environment after aws STEP: Deleting pv and pvc Jun 6 14:59:43.293: INFO: Deleting PersistentVolumeClaim "pvc-mfcdh" Jun 6 14:59:43.359: INFO: Deleting PersistentVolume "aws-7gtwj" Jun 6 14:59:43.625: INFO: Couldn't delete PD "aws://us-west-2a/vol-04140f2bcf8324d00", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-04140f2bcf8324d00 is currently attached to i-0cd903533a771549c status code: 400, request id: f2653d7c-484b-4f82-bb39-c5911363d0e1 Jun 6 14:59:49.041: INFO: Successfully deleted PD "aws://us-west-2a/vol-04140f2bcf8324d00". [AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 6 14:59:49.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-4339" for this suite. ... skipping 54 lines ...
Jun 6 14:59:13.427: INFO: In creating storage class object and pvc objects for driver - sc: &StorageClass{ObjectMeta:{provisioning-8944gsg4h 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-8944 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-8944gsg4h,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}, src-pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-8944 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-8944gsg4h,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} STEP: Creating a StorageClass STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-8944 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi
BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-8944gsg4h,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} STEP: creating a pod referring to the class=&StorageClass{ObjectMeta:{provisioning-8944gsg4h 950885df-3a77-4ce4-a549-af008558a0bf 5408 0 2022-06-06 14:59:13 +0000 UTC <nil> <nil> map[] map[] [] [] [{e2e-kubernetes.test Update storage.k8s.io/v1 2022-06-06 14:59:13 +0000 UTC FieldsV1 {"f:mountOptions":{},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[debug nouid32],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},} claim=&PersistentVolumeClaim{ObjectMeta:{pvc-7s68t pvc- provisioning-8944 3713829b-1db2-4832-9040-cacf7bf89f1a 5410 0 2022-06-06 14:59:13 +0000 UTC <nil> <nil> map[] map[] [] [kubernetes.io/pvc-protection] [{e2e-kubernetes.test Update v1 2022-06-06 14:59:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}},"f:status":{"f:phase":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-8944gsg4h,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} STEP: Deleting pod pod-313a216f-5b36-4a86-b675-6eeffc150e1d in namespace provisioning-8944 STEP: checking the created volume is writable on node {Name: Selector:map[] Affinity:nil} Jun 6 14:59:28.181: INFO: Waiting up to 15m0s for pod
"pvc-volume-tester-writer-8s2pt" in namespace "provisioning-8944" to be "Succeeded or Failed" Jun 6 14:59:28.248: INFO: Pod "pvc-volume-tester-writer-8s2pt": Phase="Pending", Reason="", readiness=false. Elapsed: 66.970325ms Jun 6 14:59:30.316: INFO: Pod "pvc-volume-tester-writer-8s2pt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.134675091s STEP: Saw pod success Jun 6 14:59:30.316: INFO: Pod "pvc-volume-tester-writer-8s2pt" satisfied condition "Succeeded or Failed" Jun 6 14:59:30.472: INFO: Pod pvc-volume-tester-writer-8s2pt has the following logs: Jun 6 14:59:30.473: INFO: Deleting pod "pvc-volume-tester-writer-8s2pt" in namespace "provisioning-8944" Jun 6 14:59:30.548: INFO: Wait up to 5m0s for pod "pvc-volume-tester-writer-8s2pt" to be fully deleted STEP: checking the created volume has the correct mount options, is readable and retains data on the same node "ip-172-20-54-236.us-west-2.compute.internal" Jun 6 14:59:30.828: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-reader-ldwbn" in namespace "provisioning-8944" to be "Succeeded or Failed" Jun 6 14:59:30.894: INFO: Pod "pvc-volume-tester-reader-ldwbn": Phase="Pending", Reason="", readiness=false. Elapsed: 66.036738ms Jun 6 14:59:32.963: INFO: Pod "pvc-volume-tester-reader-ldwbn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134626072s Jun 6 14:59:35.031: INFO: Pod "pvc-volume-tester-reader-ldwbn": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.203509732s STEP: Saw pod success Jun 6 14:59:35.031: INFO: Pod "pvc-volume-tester-reader-ldwbn" satisfied condition "Succeeded or Failed" Jun 6 14:59:35.166: INFO: Pod pvc-volume-tester-reader-ldwbn has the following logs: hello world Jun 6 14:59:35.166: INFO: Deleting pod "pvc-volume-tester-reader-ldwbn" in namespace "provisioning-8944" Jun 6 14:59:35.243: INFO: Wait up to 5m0s for pod "pvc-volume-tester-reader-ldwbn" to be fully deleted Jun 6 14:59:35.308: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-7s68t] to have phase Bound Jun 6 14:59:35.374: INFO: PersistentVolumeClaim pvc-7s68t found and phase=Bound (65.443374ms) ... skipping 157 lines ... [Testpattern: Dynamic PV (default fs)] subPath /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50 should be able to unmount after the subpath directory is deleted [LinuxOnly] /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449 ------------------------------ [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 6 15:00:17.791: INFO: >>> kubeConfig:
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-24861.k8s.local.kops.kubeconfig STEP: Building a namespace api object, basename topology STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192 Jun 6 15:00:18.175: INFO: found topology map[topology.kubernetes.io/zone:us-west-2c] Jun 6 15:00:18.175: INFO: found topology map[topology.kubernetes.io/zone:us-west-2a] Jun 6 15:00:18.175: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics Jun 6 15:00:18.175: INFO: Creating storage class object and pvc object for driver - sc: &StorageClass{ObjectMeta:{topology-38436cmvs 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{{[{topology.kubernetes.io/zone [us-west-2c]}]},},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- topology-3843 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*topology-38436cmvs,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} STEP: Creating sc ... skipping 108 lines ...
Jun 6 15:00:02.367: INFO: Creating resource for dynamic PV Jun 6 15:00:02.367: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} STEP: creating a StorageClass provisioning-8133drm5d STEP: creating a claim Jun 6 15:00:02.434: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Creating pod to format volume volume-prep-provisioning-8133 Jun 6 15:00:02.693: INFO: Waiting up to 5m0s for pod "volume-prep-provisioning-8133" in namespace "provisioning-8133" to be "Succeeded or Failed" Jun 6 15:00:02.770: INFO: Pod "volume-prep-provisioning-8133": Phase="Pending", Reason="", readiness=false. Elapsed: 76.683704ms Jun 6 15:00:04.836: INFO: Pod "volume-prep-provisioning-8133": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14270171s Jun 6 15:00:06.903: INFO: Pod "volume-prep-provisioning-8133": Phase="Pending", Reason="", readiness=false. Elapsed: 4.209572787s Jun 6 15:00:08.970: INFO: Pod "volume-prep-provisioning-8133": Phase="Pending", Reason="", readiness=false. Elapsed: 6.276298149s Jun 6 15:00:11.043: INFO: Pod "volume-prep-provisioning-8133": Phase="Pending", Reason="", readiness=false. Elapsed: 8.349299101s Jun 6 15:00:13.109: INFO: Pod "volume-prep-provisioning-8133": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 10.415751487s STEP: Saw pod success Jun 6 15:00:13.109: INFO: Pod "volume-prep-provisioning-8133" satisfied condition "Succeeded or Failed" Jun 6 15:00:13.109: INFO: Deleting pod "volume-prep-provisioning-8133" in namespace "provisioning-8133" Jun 6 15:00:13.189: INFO: Wait up to 5m0s for pod "volume-prep-provisioning-8133" to be fully deleted STEP: Creating pod pod-subpath-test-dynamicpv-lgg5 STEP: Checking for subpath error in container status Jun 6 15:00:23.470: INFO: Deleting pod "pod-subpath-test-dynamicpv-lgg5" in namespace "provisioning-8133" Jun 6 15:00:23.544: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-lgg5" to be fully deleted STEP: Deleting pod Jun 6 15:00:23.610: INFO: Deleting pod "pod-subpath-test-dynamicpv-lgg5" in namespace "provisioning-8133" STEP: Deleting pvc Jun 6 15:00:23.808: INFO: Deleting PersistentVolumeClaim "awsbtjrh" ... skipping 509 lines ... [ebs-csi-migration] EBS CSI Migration /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85 [Driver: aws] /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91 [Testpattern: Inline-volume (default fs)] subPath /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50 should fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach] /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256 Driver supports dynamic provisioning, skipping InlineVolume pattern /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:236 ------------------------------ ... skipping 172 lines ...
Jun 6 15:01:37.743: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics STEP: creating a test aws volume Jun 6 15:01:38.285: INFO: Successfully created a new PD: "aws://us-west-2a/vol-0abebdbf12f015dec". Jun 6 15:01:38.285: INFO: Creating resource for inline volume STEP: Creating pod exec-volume-test-inlinevolume-cnts STEP: Creating a pod to test exec-volume-test Jun 6 15:01:38.351: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-cnts" in namespace "volume-9942" to be "Succeeded or Failed" Jun 6 15:01:38.414: INFO: Pod "exec-volume-test-inlinevolume-cnts": Phase="Pending", Reason="", readiness=false. Elapsed: 62.109068ms Jun 6 15:01:40.480: INFO: Pod "exec-volume-test-inlinevolume-cnts": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128212334s Jun 6 15:01:42.542: INFO: Pod "exec-volume-test-inlinevolume-cnts": Phase="Pending", Reason="", readiness=false. Elapsed: 4.19078749s Jun 6 15:01:44.645: INFO: Pod "exec-volume-test-inlinevolume-cnts": Phase="Pending", Reason="", readiness=false. Elapsed: 6.293654653s Jun 6 15:01:46.709: INFO: Pod "exec-volume-test-inlinevolume-cnts": Phase="Pending", Reason="", readiness=false. Elapsed: 8.357380819s Jun 6 15:01:48.772: INFO: Pod "exec-volume-test-inlinevolume-cnts": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 10.420244883s STEP: Saw pod success Jun 6 15:01:48.772: INFO: Pod "exec-volume-test-inlinevolume-cnts" satisfied condition "Succeeded or Failed" Jun 6 15:01:48.834: INFO: Trying to get logs from node ip-172-20-54-236.us-west-2.compute.internal pod exec-volume-test-inlinevolume-cnts container exec-container-inlinevolume-cnts: <nil> STEP: delete the pod Jun 6 15:01:48.993: INFO: Waiting for pod exec-volume-test-inlinevolume-cnts to disappear Jun 6 15:01:49.056: INFO: Pod exec-volume-test-inlinevolume-cnts no longer exists STEP: Deleting pod exec-volume-test-inlinevolume-cnts Jun 6 15:01:49.056: INFO: Deleting pod "exec-volume-test-inlinevolume-cnts" in namespace "volume-9942" Jun 6 15:01:49.259: INFO: Couldn't delete PD "aws://us-west-2a/vol-0abebdbf12f015dec", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0abebdbf12f015dec is currently attached to i-0cd903533a771549c status code: 400, request id: 89bb873e-da02-46a1-b123-8571eb117692 Jun 6 15:01:54.645: INFO: Couldn't delete PD "aws://us-west-2a/vol-0abebdbf12f015dec", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0abebdbf12f015dec is currently attached to i-0cd903533a771549c status code: 400, request id: a873c7a6-322b-4d88-b388-76f86a89ccf6 Jun 6 15:02:00.001: INFO: Couldn't delete PD "aws://us-west-2a/vol-0abebdbf12f015dec", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0abebdbf12f015dec is currently attached to i-0cd903533a771549c status code: 400, request id: fb413eb0-04e1-4949-be80-64273b0a33c1 Jun 6 15:02:05.451: INFO: Successfully deleted PD "aws://us-west-2a/vol-0abebdbf12f015dec". [AfterEach] [Testpattern: Inline-volume (ext4)] volumes /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 6 15:02:05.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-9942" for this suite. ... skipping 208 lines ...
[ebs-csi-migration] EBS CSI Migration /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85 [Driver: aws] /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91 [Testpattern: Inline-volume (default fs)] subPath /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50 should fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach] /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240 Driver supports dynamic provisioning, skipping InlineVolume pattern /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:236 ------------------------------ ... skipping 75 lines ... [ebs-csi-migration] EBS CSI Migration /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85 [Driver: aws] /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91 [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50 should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach] /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278 Distro debian doesn't support ntfs -- skipping /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:127 ------------------------------ ... skipping 143 lines ...
Jun 6 15:02:24.678: INFO: PersistentVolumeClaim pvc-l2tdp found but phase is Pending instead of Bound.
Jun 6 15:02:26.744: INFO: PersistentVolumeClaim pvc-l2tdp found and phase=Bound (12.469718892s)
Jun 6 15:02:26.744: INFO: Waiting up to 3m0s for PersistentVolume aws-zbfqn to have phase Bound
Jun 6 15:02:26.809: INFO: PersistentVolume aws-zbfqn found and phase=Bound (65.191653ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-mcpx
STEP: Creating a pod to test exec-volume-test
Jun 6 15:02:27.018: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-mcpx" in namespace "volume-9566" to be "Succeeded or Failed"
Jun 6 15:02:27.085: INFO: Pod "exec-volume-test-preprovisionedpv-mcpx": Phase="Pending", Reason="", readiness=false. Elapsed: 66.96445ms
Jun 6 15:02:29.151: INFO: Pod "exec-volume-test-preprovisionedpv-mcpx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132754095s
Jun 6 15:02:31.217: INFO: Pod "exec-volume-test-preprovisionedpv-mcpx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.198756204s
Jun 6 15:02:33.283: INFO: Pod "exec-volume-test-preprovisionedpv-mcpx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.26465042s
Jun 6 15:02:35.350: INFO: Pod "exec-volume-test-preprovisionedpv-mcpx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.331738121s
Jun 6 15:02:37.416: INFO: Pod "exec-volume-test-preprovisionedpv-mcpx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.39831867s
Jun 6 15:02:39.490: INFO: Pod "exec-volume-test-preprovisionedpv-mcpx": Phase="Pending", Reason="", readiness=false. Elapsed: 12.472025187s
Jun 6 15:02:41.557: INFO: Pod "exec-volume-test-preprovisionedpv-mcpx": Phase="Pending", Reason="", readiness=false. Elapsed: 14.538766076s
Jun 6 15:02:43.624: INFO: Pod "exec-volume-test-preprovisionedpv-mcpx": Phase="Pending", Reason="", readiness=false. Elapsed: 16.605650265s
Jun 6 15:02:45.689: INFO: Pod "exec-volume-test-preprovisionedpv-mcpx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.670801192s
STEP: Saw pod success
Jun 6 15:02:45.689: INFO: Pod "exec-volume-test-preprovisionedpv-mcpx" satisfied condition "Succeeded or Failed"
Jun 6 15:02:45.754: INFO: Trying to get logs from node ip-172-20-54-236.us-west-2.compute.internal pod exec-volume-test-preprovisionedpv-mcpx container exec-container-preprovisionedpv-mcpx: <nil>
STEP: delete the pod
Jun 6 15:02:45.893: INFO: Waiting for pod exec-volume-test-preprovisionedpv-mcpx to disappear
Jun 6 15:02:45.958: INFO: Pod exec-volume-test-preprovisionedpv-mcpx no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-mcpx
Jun 6 15:02:45.958: INFO: Deleting pod "exec-volume-test-preprovisionedpv-mcpx" in namespace "volume-9566"
STEP: Deleting pv and pvc
Jun 6 15:02:46.028: INFO: Deleting PersistentVolumeClaim "pvc-l2tdp"
Jun 6 15:02:46.095: INFO: Deleting PersistentVolume "aws-zbfqn"
Jun 6 15:02:46.319: INFO: Couldn't delete PD "aws://us-west-2a/vol-0ff830d29b6950e80", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0ff830d29b6950e80 is currently attached to i-0cd903533a771549c
	status code: 400, request id: 2e3b2db3-5778-4ef1-802f-18fc2c5cbc63
Jun 6 15:02:51.697: INFO: Couldn't delete PD "aws://us-west-2a/vol-0ff830d29b6950e80", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0ff830d29b6950e80 is currently attached to i-0cd903533a771549c
	status code: 400, request id: ebe790cc-c9ff-48e4-97c9-35718e8384c8
Jun 6 15:02:57.063: INFO: Couldn't delete PD "aws://us-west-2a/vol-0ff830d29b6950e80", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0ff830d29b6950e80 is currently attached to i-0cd903533a771549c
	status code: 400, request id: 2c8a0035-e250-456b-8987-c52b6cfce364
Jun 6 15:03:02.431: INFO: Couldn't delete PD "aws://us-west-2a/vol-0ff830d29b6950e80", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0ff830d29b6950e80 is currently attached to i-0cd903533a771549c
	status code: 400, request id: a2b4ffee-b6b7-404a-a1e1-a2adb4cd3f73
Jun 6 15:03:07.897: INFO: Successfully deleted PD "aws://us-west-2a/vol-0ff830d29b6950e80".
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 6 15:03:07.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-9566" for this suite.
... skipping 28 lines ...
Jun 6 15:02:49.116: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass volume-1828dzghw
STEP: creating a claim
Jun 6 15:02:49.183: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-4l88
STEP: Creating a pod to test exec-volume-test
Jun 6 15:02:49.392: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-4l88" in namespace "volume-1828" to be "Succeeded or Failed"
Jun 6 15:02:49.457: INFO: Pod "exec-volume-test-dynamicpv-4l88": Phase="Pending", Reason="", readiness=false. Elapsed: 64.88331ms
Jun 6 15:02:51.524: INFO: Pod "exec-volume-test-dynamicpv-4l88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131837621s
Jun 6 15:02:53.591: INFO: Pod "exec-volume-test-dynamicpv-4l88": Phase="Pending", Reason="", readiness=false. Elapsed: 4.198723703s
Jun 6 15:02:55.658: INFO: Pod "exec-volume-test-dynamicpv-4l88": Phase="Pending", Reason="", readiness=false. Elapsed: 6.265623949s
Jun 6 15:02:57.725: INFO: Pod "exec-volume-test-dynamicpv-4l88": Phase="Pending", Reason="", readiness=false. Elapsed: 8.332586482s
Jun 6 15:02:59.795: INFO: Pod "exec-volume-test-dynamicpv-4l88": Phase="Pending", Reason="", readiness=false. Elapsed: 10.40266461s
Jun 6 15:03:01.866: INFO: Pod "exec-volume-test-dynamicpv-4l88": Phase="Pending", Reason="", readiness=false. Elapsed: 12.473365987s
Jun 6 15:03:03.933: INFO: Pod "exec-volume-test-dynamicpv-4l88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.540530932s
STEP: Saw pod success
Jun 6 15:03:03.933: INFO: Pod "exec-volume-test-dynamicpv-4l88" satisfied condition "Succeeded or Failed"
Jun 6 15:03:03.998: INFO: Trying to get logs from node ip-172-20-54-236.us-west-2.compute.internal pod exec-volume-test-dynamicpv-4l88 container exec-container-dynamicpv-4l88: <nil>
STEP: delete the pod
Jun 6 15:03:04.139: INFO: Waiting for pod exec-volume-test-dynamicpv-4l88 to disappear
Jun 6 15:03:04.205: INFO: Pod exec-volume-test-dynamicpv-4l88 no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-4l88
Jun 6 15:03:04.205: INFO: Deleting pod "exec-volume-test-dynamicpv-4l88" in namespace "volume-1828"
... skipping 311 lines ...
Jun 6 15:03:15.380: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Jun 6 15:03:16.030: INFO: Successfully created a new PD: "aws://us-west-2a/vol-05436ddb8b8a98ece".
Jun 6 15:03:16.030: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-9jhv
STEP: Creating a pod to test exec-volume-test
Jun 6 15:03:16.099: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-9jhv" in namespace "volume-1993" to be "Succeeded or Failed"
Jun 6 15:03:16.161: INFO: Pod "exec-volume-test-inlinevolume-9jhv": Phase="Pending", Reason="", readiness=false. Elapsed: 62.298263ms
Jun 6 15:03:18.225: INFO: Pod "exec-volume-test-inlinevolume-9jhv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125543383s
Jun 6 15:03:20.293: INFO: Pod "exec-volume-test-inlinevolume-9jhv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194129196s
Jun 6 15:03:22.356: INFO: Pod "exec-volume-test-inlinevolume-9jhv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.256905016s
Jun 6 15:03:24.420: INFO: Pod "exec-volume-test-inlinevolume-9jhv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.32078158s
Jun 6 15:03:26.483: INFO: Pod "exec-volume-test-inlinevolume-9jhv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.383684349s
Jun 6 15:03:28.593: INFO: Pod "exec-volume-test-inlinevolume-9jhv": Phase="Pending", Reason="", readiness=false. Elapsed: 12.493939438s
Jun 6 15:03:30.657: INFO: Pod "exec-volume-test-inlinevolume-9jhv": Phase="Pending", Reason="", readiness=false. Elapsed: 14.557556949s
Jun 6 15:03:32.724: INFO: Pod "exec-volume-test-inlinevolume-9jhv": Phase="Pending", Reason="", readiness=false. Elapsed: 16.625280717s
Jun 6 15:03:34.797: INFO: Pod "exec-volume-test-inlinevolume-9jhv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.697916312s
STEP: Saw pod success
Jun 6 15:03:34.797: INFO: Pod "exec-volume-test-inlinevolume-9jhv" satisfied condition "Succeeded or Failed"
Jun 6 15:03:34.863: INFO: Trying to get logs from node ip-172-20-54-236.us-west-2.compute.internal pod exec-volume-test-inlinevolume-9jhv container exec-container-inlinevolume-9jhv: <nil>
STEP: delete the pod
Jun 6 15:03:35.004: INFO: Waiting for pod exec-volume-test-inlinevolume-9jhv to disappear
Jun 6 15:03:35.066: INFO: Pod exec-volume-test-inlinevolume-9jhv no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-9jhv
Jun 6 15:03:35.066: INFO: Deleting pod "exec-volume-test-inlinevolume-9jhv" in namespace "volume-1993"
Jun 6 15:03:35.278: INFO: Couldn't delete PD "aws://us-west-2a/vol-05436ddb8b8a98ece", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-05436ddb8b8a98ece is currently attached to i-0cd903533a771549c
	status code: 400, request id: 2f3dcb2d-4e97-4079-ab51-a8cfdbb53f1d
Jun 6 15:03:40.663: INFO: Couldn't delete PD "aws://us-west-2a/vol-05436ddb8b8a98ece", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-05436ddb8b8a98ece is currently attached to i-0cd903533a771549c
	status code: 400, request id: 69860ea7-861e-4ed1-9e23-1a1c8f621329
Jun 6 15:03:46.099: INFO: Successfully deleted PD "aws://us-west-2a/vol-05436ddb8b8a98ece".
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 6 15:03:46.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-1993" for this suite.
... skipping 239 lines ...

      Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:236
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 6 15:03:56.680: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-24861.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Jun 6 15:03:57.079: INFO: found topology map[topology.kubernetes.io/zone:us-west-2c]
Jun 6 15:03:57.079: INFO: found topology map[topology.kubernetes.io/zone:us-west-2a]
Jun 6 15:03:57.079: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Jun 6 15:03:57.079: INFO: Creating storage class object and pvc object for driver - sc: &StorageClass{ObjectMeta:{topology-2079672rq      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{{[{topology.kubernetes.io/zone [us-west-2a]}]},},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- topology-2079    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*topology-2079672rq,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Creating sc
... skipping 34 lines ...
      Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:236
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 6 15:03:32.487: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-24861.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if subpath directory is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240
Jun 6 15:03:32.816: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Jun 6 15:03:32.816: INFO: Creating resource for dynamic PV
Jun 6 15:03:32.816: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass provisioning-8839k5jks
STEP: creating a claim
Jun 6 15:03:32.883: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-5hh4
STEP: Checking for subpath error in container status
Jun 6 15:03:49.217: INFO: Deleting pod "pod-subpath-test-dynamicpv-5hh4" in namespace "provisioning-8839"
Jun 6 15:03:49.287: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-5hh4" to be fully deleted
STEP: Deleting pod
Jun 6 15:04:01.420: INFO: Deleting pod "pod-subpath-test-dynamicpv-5hh4" in namespace "provisioning-8839"
STEP: Deleting pvc
Jun 6 15:04:01.651: INFO: Deleting PersistentVolumeClaim "awsl7cjq"
... skipping 11 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Dynamic PV (default fs)] subPath
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail if subpath directory is outside the volume [Slow][LinuxOnly]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240
------------------------------
S
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
... skipping 14 lines ...
Jun 6 15:03:47.067: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass volume-4982nhwlg
STEP: creating a claim
Jun 6 15:03:47.130: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-9k99
STEP: Creating a pod to test exec-volume-test
Jun 6 15:03:47.321: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-9k99" in namespace "volume-4982" to be "Succeeded or Failed"
Jun 6 15:03:47.383: INFO: Pod "exec-volume-test-dynamicpv-9k99": Phase="Pending", Reason="", readiness=false. Elapsed: 62.204973ms
Jun 6 15:03:49.446: INFO: Pod "exec-volume-test-dynamicpv-9k99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12573021s
Jun 6 15:03:51.508: INFO: Pod "exec-volume-test-dynamicpv-9k99": Phase="Pending", Reason="", readiness=false. Elapsed: 4.187849863s
Jun 6 15:03:53.571: INFO: Pod "exec-volume-test-dynamicpv-9k99": Phase="Pending", Reason="", readiness=false. Elapsed: 6.250393165s
Jun 6 15:03:55.634: INFO: Pod "exec-volume-test-dynamicpv-9k99": Phase="Pending", Reason="", readiness=false. Elapsed: 8.313652705s
Jun 6 15:03:57.697: INFO: Pod "exec-volume-test-dynamicpv-9k99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.376041725s
STEP: Saw pod success
Jun 6 15:03:57.697: INFO: Pod "exec-volume-test-dynamicpv-9k99" satisfied condition "Succeeded or Failed"
Jun 6 15:03:57.759: INFO: Trying to get logs from node ip-172-20-54-236.us-west-2.compute.internal pod exec-volume-test-dynamicpv-9k99 container exec-container-dynamicpv-9k99: <nil>
STEP: delete the pod
Jun 6 15:03:57.903: INFO: Waiting for pod exec-volume-test-dynamicpv-9k99 to disappear
Jun 6 15:03:57.965: INFO: Pod exec-volume-test-dynamicpv-9k99 no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-9k99
Jun 6 15:03:57.965: INFO: Deleting pod "exec-volume-test-dynamicpv-9k99" in namespace "volume-4982"
... skipping 175 lines ...
Jun 6 15:04:08.022: INFO: PersistentVolumeClaim pvc-lg4zk found but phase is Pending instead of Bound.
Jun 6 15:04:10.088: INFO: PersistentVolumeClaim pvc-lg4zk found and phase=Bound (2.131149928s)
Jun 6 15:04:10.088: INFO: Waiting up to 3m0s for PersistentVolume aws-qwk2k to have phase Bound
Jun 6 15:04:10.153: INFO: PersistentVolume aws-qwk2k found and phase=Bound (65.042471ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-dw2w
STEP: Creating a pod to test exec-volume-test
Jun 6 15:04:10.352: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-dw2w" in namespace "volume-6566" to be "Succeeded or Failed"
Jun 6 15:04:10.418: INFO: Pod "exec-volume-test-preprovisionedpv-dw2w": Phase="Pending", Reason="", readiness=false. Elapsed: 65.975277ms
Jun 6 15:04:12.484: INFO: Pod "exec-volume-test-preprovisionedpv-dw2w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132496309s
Jun 6 15:04:14.551: INFO: Pod "exec-volume-test-preprovisionedpv-dw2w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.198521865s
Jun 6 15:04:16.617: INFO: Pod "exec-volume-test-preprovisionedpv-dw2w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.264617925s
Jun 6 15:04:18.684: INFO: Pod "exec-volume-test-preprovisionedpv-dw2w": Phase="Pending", Reason="", readiness=false. Elapsed: 8.332223276s
Jun 6 15:04:20.751: INFO: Pod "exec-volume-test-preprovisionedpv-dw2w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.398517415s
STEP: Saw pod success
Jun 6 15:04:20.751: INFO: Pod "exec-volume-test-preprovisionedpv-dw2w" satisfied condition "Succeeded or Failed"
Jun 6 15:04:20.816: INFO: Trying to get logs from node ip-172-20-54-236.us-west-2.compute.internal pod exec-volume-test-preprovisionedpv-dw2w container exec-container-preprovisionedpv-dw2w: <nil>
STEP: delete the pod
Jun 6 15:04:20.956: INFO: Waiting for pod exec-volume-test-preprovisionedpv-dw2w to disappear
Jun 6 15:04:21.021: INFO: Pod exec-volume-test-preprovisionedpv-dw2w no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-dw2w
Jun 6 15:04:21.021: INFO: Deleting pod "exec-volume-test-preprovisionedpv-dw2w" in namespace "volume-6566"
STEP: Deleting pv and pvc
Jun 6 15:04:21.086: INFO: Deleting PersistentVolumeClaim "pvc-lg4zk"
Jun 6 15:04:21.154: INFO: Deleting PersistentVolume "aws-qwk2k"
Jun 6 15:04:21.379: INFO: Couldn't delete PD "aws://us-west-2a/vol-0871ac73e729b6200", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0871ac73e729b6200 is currently attached to i-0cd903533a771549c
	status code: 400, request id: af9c4f49-fc77-4ef1-a6da-5d4be8866cf2
Jun 6 15:04:26.809: INFO: Couldn't delete PD "aws://us-west-2a/vol-0871ac73e729b6200", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0871ac73e729b6200 is currently attached to i-0cd903533a771549c
	status code: 400, request id: 5a5fe173-7769-45e2-a211-2969a7c076c3
Jun 6 15:04:32.380: INFO: Successfully deleted PD "aws://us-west-2a/vol-0871ac73e729b6200".
[AfterEach] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 6 15:04:32.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-6566" for this suite.
... skipping 333 lines ...
Jun 6 15:04:32.850: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass provisioning-43725p7bq
STEP: creating a claim
Jun 6 15:04:32.917: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-ttbk
STEP: Creating a pod to test subpath
Jun 6 15:04:33.119: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-ttbk" in namespace "provisioning-4372" to be "Succeeded or Failed"
Jun 6 15:04:33.185: INFO: Pod "pod-subpath-test-dynamicpv-ttbk": Phase="Pending", Reason="", readiness=false. Elapsed: 65.756261ms
Jun 6 15:04:35.252: INFO: Pod "pod-subpath-test-dynamicpv-ttbk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132257437s
Jun 6 15:04:37.317: INFO: Pod "pod-subpath-test-dynamicpv-ttbk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.197995289s
Jun 6 15:04:39.384: INFO: Pod "pod-subpath-test-dynamicpv-ttbk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.265102936s
Jun 6 15:04:41.451: INFO: Pod "pod-subpath-test-dynamicpv-ttbk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.331603936s
Jun 6 15:04:43.524: INFO: Pod "pod-subpath-test-dynamicpv-ttbk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.40471192s
Jun 6 15:04:45.591: INFO: Pod "pod-subpath-test-dynamicpv-ttbk": Phase="Pending", Reason="", readiness=false. Elapsed: 12.471247491s
Jun 6 15:04:47.657: INFO: Pod "pod-subpath-test-dynamicpv-ttbk": Phase="Pending", Reason="", readiness=false. Elapsed: 14.538216131s
Jun 6 15:04:49.724: INFO: Pod "pod-subpath-test-dynamicpv-ttbk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.604712919s
STEP: Saw pod success
Jun 6 15:04:49.724: INFO: Pod "pod-subpath-test-dynamicpv-ttbk" satisfied condition "Succeeded or Failed"
Jun 6 15:04:49.789: INFO: Trying to get logs from node ip-172-20-54-236.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-ttbk container test-container-subpath-dynamicpv-ttbk: <nil>
STEP: delete the pod
Jun 6 15:04:50.084: INFO: Waiting for pod pod-subpath-test-dynamicpv-ttbk to disappear
Jun 6 15:04:50.149: INFO: Pod pod-subpath-test-dynamicpv-ttbk no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-ttbk
Jun 6 15:04:50.149: INFO: Deleting pod "pod-subpath-test-dynamicpv-ttbk" in namespace "provisioning-4372"
STEP: Creating pod pod-subpath-test-dynamicpv-ttbk
STEP: Creating a pod to test subpath
Jun 6 15:04:50.284: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-ttbk" in namespace "provisioning-4372" to be "Succeeded or Failed"
Jun 6 15:04:50.350: INFO: Pod "pod-subpath-test-dynamicpv-ttbk": Phase="Pending", Reason="", readiness=false. Elapsed: 65.565689ms
Jun 6 15:04:52.416: INFO: Pod "pod-subpath-test-dynamicpv-ttbk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131640893s
Jun 6 15:04:54.483: INFO: Pod "pod-subpath-test-dynamicpv-ttbk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1985894s
Jun 6 15:04:56.549: INFO: Pod "pod-subpath-test-dynamicpv-ttbk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.264668154s
Jun 6 15:04:58.615: INFO: Pod "pod-subpath-test-dynamicpv-ttbk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.331153794s
Jun 6 15:05:00.682: INFO: Pod "pod-subpath-test-dynamicpv-ttbk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.397918838s
STEP: Saw pod success
Jun 6 15:05:00.682: INFO: Pod "pod-subpath-test-dynamicpv-ttbk" satisfied condition "Succeeded or Failed"
Jun 6 15:05:00.750: INFO: Trying to get logs from node ip-172-20-54-236.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-ttbk container test-container-subpath-dynamicpv-ttbk: <nil>
STEP: delete the pod
Jun 6 15:05:00.905: INFO: Waiting for pod pod-subpath-test-dynamicpv-ttbk to disappear
Jun 6 15:05:00.971: INFO: Pod pod-subpath-test-dynamicpv-ttbk no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-ttbk
Jun 6 15:05:00.971: INFO: Deleting pod "pod-subpath-test-dynamicpv-ttbk" in namespace "provisioning-4372"
... skipping 34 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Inline-volume (default fs)] subPath
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278

      Driver supports dynamic provisioning, skipping InlineVolume pattern
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:236
------------------------------
... skipping 131 lines ...
    [Testpattern: Dynamic PV (immediate binding)] topology
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision a volume and schedule a pod with AllowedTopologies
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:164
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 6 15:05:10.097: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-24861.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267
Jun 6 15:05:10.421: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Jun 6 15:05:10.421: INFO: Creating resource for dynamic PV
Jun 6 15:05:10.421: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass provisioning-7533xz4gv
STEP: creating a claim
Jun 6 15:05:10.486: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-7xbh
STEP: Checking for subpath error in container status
Jun 6 15:05:20.810: INFO: Deleting pod "pod-subpath-test-dynamicpv-7xbh" in namespace "provisioning-7533"
Jun 6 15:05:20.881: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-7xbh" to be fully deleted
STEP: Deleting pod
Jun 6 15:05:31.008: INFO: Deleting pod "pod-subpath-test-dynamicpv-7xbh" in namespace "provisioning-7533"
STEP: Deleting pvc
Jun 6 15:05:31.196: INFO: Deleting PersistentVolumeClaim "aws4g9zt"
... skipping 11 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Dynamic PV (default fs)] subPath
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
... skipping 94 lines ...
[ebs-csi-migration] EBS CSI Migration [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85[0m [Driver: aws] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91[0m [Testpattern: Inline-volume (default fs)] subPath [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267[0m [36mDriver supports dynamic provisioning, skipping InlineVolume pattern[0m /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:236 [90m------------------------------[0m ... skipping 20 lines ... /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:127 [90m------------------------------[0m [36mS[0m [90m------------------------------[0m [0m[ebs-csi-migration] EBS CSI Migration[0m [90m[Driver: aws][0m [0m[Testpattern: Dynamic PV (default fs)] subPath[0m [1mshould fail if subpath file is outside the volume [Slow][LinuxOnly][0m [37m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 6 15:05:15.631: INFO: >>> kubeConfig: 
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-24861.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if subpath file is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256
Jun 6 15:05:15.958: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Jun 6 15:05:15.958: INFO: Creating resource for dynamic PV
Jun 6 15:05:15.958: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass provisioning-534999rnl
STEP: creating a claim
Jun 6 15:05:16.024: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-v8bf
STEP: Checking for subpath error in container status
Jun 6 15:05:32.362: INFO: Deleting pod "pod-subpath-test-dynamicpv-v8bf" in namespace "provisioning-5349"
Jun 6 15:05:32.431: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-v8bf" to be fully deleted
STEP: Deleting pod
Jun 6 15:05:42.563: INFO: Deleting pod "pod-subpath-test-dynamicpv-v8bf" in namespace "provisioning-5349"
STEP: Deleting pvc
Jun 6 15:05:42.760: INFO: Deleting PersistentVolumeClaim "awsb7ff7"
... skipping 12 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Dynamic PV (default fs)] subPath
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail if subpath file is outside the volume [Slow][LinuxOnly]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (xfs)][Slow] volumes
  should store data
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
... skipping 160 lines ...
Jun 6 15:06:18.189: INFO: Waiting for pod aws-client to disappear
Jun 6 15:06:18.255: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
Jun 6 15:06:18.255: INFO: Deleting PersistentVolumeClaim "pvc-n9mnw"
Jun 6 15:06:18.323: INFO: Deleting PersistentVolume "aws-s7f7d"
Jun 6 15:06:18.533: INFO: Couldn't delete PD "aws://us-west-2a/vol-086bc1f5e054c2eee", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-086bc1f5e054c2eee is currently attached to i-0cd903533a771549c
	status code: 400, request id: 38570e08-37c3-4f28-b68c-a57d19582246
Jun 6 15:06:23.900: INFO: Couldn't delete PD "aws://us-west-2a/vol-086bc1f5e054c2eee", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-086bc1f5e054c2eee is currently attached to i-0cd903533a771549c
	status code: 400, request id: ac5e1c80-cbe4-42cf-b62c-41f08d0980c1
Jun 6 15:06:29.377: INFO: Couldn't delete PD "aws://us-west-2a/vol-086bc1f5e054c2eee", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-086bc1f5e054c2eee is currently attached to i-0cd903533a771549c
	status code: 400, request id: 73e40b4a-ea35-4a0a-81cf-354cff15cf89
Jun 6 15:06:34.793: INFO: Successfully deleted PD "aws://us-west-2a/vol-086bc1f5e054c2eee".
[AfterEach] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 6 15:06:34.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-942" for this suite.
... skipping 8 lines ...
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------

Ran 69 of 485 Specs in 813.453 seconds
SUCCESS! -- 69 Passed | 0 Failed | 0 Pending | 416 Skipped

Ginkgo ran 1 suite in 20m27.922693877s
Test Suite Passed
+ TEST_PASSED=0
+ set -e
... skipping 134 lines ...
    InstanceId: "i-0cd903533a771549c",
    State: "attaching",
    VolumeId: "vol-0eb6d11bbba5d6d62"
}
I0606 14:53:28.228314 1 cloud.go:610] Waiting for volume "vol-0eb6d11bbba5d6d62" state: actual=attaching, desired=attached
I0606 14:53:28.448566 1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbd -> volume vol-03ee244160d31b028
E0606 14:53:28.448609 1 driver.go:120] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-03ee244160d31b028" to node "i-0cd903533a771549c": could not attach volume "vol-03ee244160d31b028" to node "i-0cd903533a771549c": IncorrectState: vol-03ee244160d31b028 is not 'available'.
	status code: 400, request id: 1c6f1ff3-70f5-41eb-b188-44f612ffc92c
I0606 14:53:28.455029 1 controller.go:315] ControllerPublishVolume: called with args {VolumeId:vol-03ee244160d31b028 NodeId:i-0cd903533a771549c VolumeCapability:mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER > Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0606 14:53:29.024823 1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-03ee244160d31b028
E0606 14:53:29.024889 1 driver.go:120] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-03ee244160d31b028" to node "i-0cd903533a771549c": could not attach volume "vol-03ee244160d31b028" to node "i-0cd903533a771549c": IncorrectState: vol-03ee244160d31b028 is not 'available'.
	status code: 400, request id: 10e17b8e-0778-457e-8a77-3dd020c419c8
I0606 14:53:29.310021 1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbb -> volume vol-0eb6d11bbba5d6d62
I0606 14:53:29.310041 1 controller.go:343] [Debug] ControllerPublishVolume: volume vol-0eb6d11bbba5d6d62 attached to node i-0cd903533a771549c through device /dev/xvdbb
I0606 14:53:29.455485 1 controller.go:315] ControllerPublishVolume: called with args {VolumeId:vol-03ee244160d31b028 NodeId:i-0cd903533a771549c VolumeCapability:mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER > Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0606 14:53:30.049201 1 cloud.go:437] [Debug] AttachVolume volume="vol-03ee244160d31b028" instance="i-0cd903533a771549c" request returned {
    AttachTime: 2022-06-06 14:53:30.036 +0000 UTC,
    Device: "/dev/xvdba",
    InstanceId: "i-0cd903533a771549c",
    State: "attaching",
    VolumeId: "vol-03ee244160d31b028"
}
I0606 14:53:30.111121 1 cloud.go:610] Waiting for volume "vol-03ee244160d31b028" state: actual=attaching, desired=attached
E0606 14:53:30.618269 1 manager.go:44] Error releasing device: release on device "/dev/xvdba" assigned to different volume: "vol-0a9951bd3c0e88c5e" vs "vol-03ee244160d31b028"
I0606 14:53:30.618288 1 controller.go:388] [Debug] ControllerUnpublishVolume: volume vol-0a9951bd3c0e88c5e detached from node i-0cd903533a771549c
I0606 14:53:30.633783 1 controller.go:374] ControllerUnpublishVolume: called with args {VolumeId:vol-0a9951bd3c0e88c5e NodeId:i-0cd903533a771549c Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
W0606 14:53:30.676428 1 cloud.go:484] DetachDisk called on non-attached volume: vol-0a9951bd3c0e88c5e
I0606 14:53:31.118389 1 controller.go:282] DeleteVolume: called with args: {VolumeId:vol-0a9951bd3c0e88c5e Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0606 14:53:31.186918 1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-03ee244160d31b028
I0606 14:53:31.186938 1 controller.go:343] [Debug] ControllerPublishVolume: volume vol-03ee244160d31b028 attached to node i-0cd903533a771549c through device /dev/xvdba
... skipping 309 lines ...
I0606 14:55:44.973127 1 controller.go:343] [Debug] ControllerPublishVolume: volume vol-0a6cd0f34f9b4f4dd attached to node i-0cd903533a771549c through device /dev/xvdba
I0606 14:55:44.983556 1 controller.go:315] ControllerPublishVolume: called with args {VolumeId:vol-0a6cd0f34f9b4f4dd NodeId:i-0cd903533a771549c VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0606 14:55:45.188019 1 controller.go:343] [Debug] ControllerPublishVolume: volume vol-0a6cd0f34f9b4f4dd attached to node i-0cd903533a771549c through device /dev/xvdba
I0606 14:55:47.745781 1 controller.go:315] ControllerPublishVolume: called with args {VolumeId:vol-0e3d4c87c7eae0bbc NodeId:i-0cd903533a771549c VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0606 14:55:48.046891 1 controller.go:102] CreateVolume: called with args {Name:pvc-0fa2d431-c695-47f8-917d-97a253d172b5 CapacityRange:required_bytes:1073741824 VolumeCapabilities:[mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > ] Parameters:map[csi.storage.k8s.io/pv/name:pvc-0fa2d431-c695-47f8-917d-97a253d172b5 csi.storage.k8s.io/pvc/name:aws4bj6l csi.storage.k8s.io/pvc/namespace:provisioning-1349] Secrets:map[] VolumeContentSource:<nil> AccessibilityRequirements:requisite:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"us-west-2c" > > requisite:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"us-west-2b" > > requisite:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"us-west-2a" > > preferred:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"us-west-2a" > > preferred:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"us-west-2b" > > preferred:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"us-west-2c" > > XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0606 14:55:48.285706 1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbc -> volume vol-0e3d4c87c7eae0bbc
E0606 14:55:48.285744 1 driver.go:120] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-0e3d4c87c7eae0bbc" to node "i-0cd903533a771549c": could not attach volume "vol-0e3d4c87c7eae0bbc" to node "i-0cd903533a771549c": IncorrectState: vol-0e3d4c87c7eae0bbc is not 'available'.
	status code: 400, request id: eed501e3-d527-4d26-abf2-8ad75a7bd782
I0606 14:55:48.293178 1 controller.go:315] ControllerPublishVolume: called with args {VolumeId:vol-0e3d4c87c7eae0bbc NodeId:i-0cd903533a771549c VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0606 14:55:48.841943 1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbc -> volume vol-0e3d4c87c7eae0bbc
E0606 14:55:48.841978 1 driver.go:120] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-0e3d4c87c7eae0bbc" to node "i-0cd903533a771549c": could not attach volume "vol-0e3d4c87c7eae0bbc" to node "i-0cd903533a771549c": IncorrectState: vol-0e3d4c87c7eae0bbc is not 'available'.
	status code: 400, request id: 1e76ce92-ba3e-4a1d-8612-0efd6931a951
I0606 14:55:49.294125 1 controller.go:315] ControllerPublishVolume: called with args {VolumeId:vol-0e3d4c87c7eae0bbc NodeId:i-0cd903533a771549c VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0606 14:55:49.893453 1 cloud.go:437] [Debug] AttachVolume volume="vol-0e3d4c87c7eae0bbc" instance="i-0cd903533a771549c" request returned {
    AttachTime: 2022-06-06 14:55:49.882 +0000 UTC,
    Device: "/dev/xvdbc",
    InstanceId: "i-0cd903533a771549c",
... skipping 245 lines ...
I0606 14:57:32.244520 1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbd -> volume vol-08fe259c62072ab5b
I0606 14:57:32.244542 1 controller.go:343] [Debug] ControllerPublishVolume: volume vol-08fe259c62072ab5b attached to node i-0cd903533a771549c through device /dev/xvdbd
I0606 14:57:35.954888 1 controller.go:282] DeleteVolume: called with args: {VolumeId:vol-00d036c58f0524f6f Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0606 14:57:36.085130 1 inflight.go:73] Node Service: volume="vol-00d036c58f0524f6f" operation finished
I0606 14:57:36.396924 1 controller.go:315] ControllerPublishVolume: called with args {VolumeId:vol-02bc806ba40e45835 NodeId:i-0cd903533a771549c VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0606 14:57:36.943357 1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-02bc806ba40e45835
E0606 14:57:36.943387 1 driver.go:120] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-02bc806ba40e45835" to node "i-0cd903533a771549c": could not attach volume "vol-02bc806ba40e45835" to node "i-0cd903533a771549c": IncorrectState: vol-02bc806ba40e45835 is not 'available'.
	status code: 400, request id: 2eead0f4-eeae-4d8e-a23f-8b5bbc5fd218
I0606 14:57:36.949463 1 controller.go:315] ControllerPublishVolume: called with args {VolumeId:vol-02bc806ba40e45835 NodeId:i-0cd903533a771549c VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0606 14:57:37.057703 1 controller.go:388] [Debug] ControllerUnpublishVolume: volume vol-0a89900b06b881924 detached from node i-0cd903533a771549c
I0606 14:57:37.072771 1 controller.go:374] ControllerUnpublishVolume: called with args {VolumeId:vol-0a89900b06b881924 NodeId:i-0cd903533a771549c Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0606 14:57:37.075387 1 controller.go:102] CreateVolume: called with args {Name:pvc-ef87c6a8-298a-4cd1-ade2-ad8ac2c95984 CapacityRange:required_bytes:1073741824 VolumeCapabilities:[mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > ] Parameters:map[csi.storage.k8s.io/pv/name:pvc-ef87c6a8-298a-4cd1-ade2-ad8ac2c95984 csi.storage.k8s.io/pvc/name:awsq7vhn csi.storage.k8s.io/pvc/namespace:provisioning-870] Secrets:map[] VolumeContentSource:<nil> AccessibilityRequirements:requisite:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"us-west-2b" > > requisite:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"us-west-2a" > > requisite:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"us-west-2c" > > preferred:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"us-west-2a" > > preferred:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"us-west-2b" > > preferred:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"us-west-2c" > > XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
W0606 14:57:37.127334 1 cloud.go:484] DetachDisk called on non-attached volume: vol-0a89900b06b881924
I0606 14:57:37.596424 1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-02bc806ba40e45835
E0606 14:57:37.596462 1 driver.go:120] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-02bc806ba40e45835" to node "i-0cd903533a771549c": could not attach volume "vol-02bc806ba40e45835" to node "i-0cd903533a771549c": IncorrectState: vol-02bc806ba40e45835 is not 'available'.
	status code: 400, request id: ede07be1-303b-4188-8aa0-e30a95596f0e
I0606 14:57:37.949328 1 controller.go:315] ControllerPublishVolume: called with args {VolumeId:vol-02bc806ba40e45835 NodeId:i-0cd903533a771549c VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0606 14:57:38.852626 1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-02bc806ba40e45835
E0606 14:57:38.852660 1 driver.go:120] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-02bc806ba40e45835" to node "i-0cd903533a771549c": could not attach volume "vol-02bc806ba40e45835" to node "i-0cd903533a771549c": IncorrectState: vol-02bc806ba40e45835 is not 'available'.
	status code: 400, request id: 234b1ea5-6ff4-4112-bf29-98e309870550
I0606 14:57:39.790829 1 controller.go:282] DeleteVolume: called with args: {VolumeId:vol-0a89900b06b881924 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0606 14:57:39.936744 1 inflight.go:73] Node Service: volume="vol-0a89900b06b881924" operation finished
I0606 14:57:40.432422 1 inflight.go:73] Node Service: volume="pvc-ef87c6a8-298a-4cd1-ade2-ad8ac2c95984" operation finished
I0606 14:57:41.101187 1 controller.go:315] ControllerPublishVolume: called with args {VolumeId:vol-0cd4bfa31f5925434 NodeId:i-0cd903533a771549c VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0606 14:57:41.754611 1 cloud.go:437] [Debug] AttachVolume volume="vol-0cd4bfa31f5925434" instance="i-0cd903533a771549c" request returned {
... skipping 429 lines ...
I0606 15:01:36.191098 1 cloud.go:610] Waiting for volume "vol-0bc8e02a06122e03a" state: actual=detaching, desired=detached
I0606 15:01:36.515945 1 controller.go:282] DeleteVolume: called with args: {VolumeId:vol-0bd7e5d02f307eaf8 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0606 15:01:36.651555 1 inflight.go:73] Node Service: volume="vol-0bd7e5d02f307eaf8" operation finished
I0606 15:01:38.054930 1 cloud.go:610] Waiting for volume "vol-0bc8e02a06122e03a" state: actual=detaching, desired=detached
I0606 15:01:38.368936 1 controller.go:315] ControllerPublishVolume: called with args {VolumeId:vol-0abebdbf12f015dec NodeId:i-0cd903533a771549c VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0606 15:01:38.989723 1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-0abebdbf12f015dec
E0606 15:01:38.989754 1 driver.go:120] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-0abebdbf12f015dec" to node "i-0cd903533a771549c": could not attach volume "vol-0abebdbf12f015dec" to node "i-0cd903533a771549c": IncorrectState: vol-0abebdbf12f015dec is not 'available'.
	status code: 400, request id: 29196639-4181-4e2d-9120-6f7870566a6f
I0606 15:01:38.996382 1 controller.go:315] ControllerPublishVolume: called with args {VolumeId:vol-0abebdbf12f015dec NodeId:i-0cd903533a771549c VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0606 15:01:39.561206 1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-0abebdbf12f015dec
E0606 15:01:39.561237 1 driver.go:120] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-0abebdbf12f015dec" to node "i-0cd903533a771549c": could not attach volume "vol-0abebdbf12f015dec" to node "i-0cd903533a771549c": IncorrectState: vol-0abebdbf12f015dec is not 'available'.
	status code: 400, request id: 4bec9949-453b-471f-a705-c8b8418af8f2
I0606 15:01:39.996462 1 controller.go:315] ControllerPublishVolume: called with args {VolumeId:vol-0abebdbf12f015dec NodeId:i-0cd903533a771549c VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0606 15:01:40.636038 1 cloud.go:437] [Debug] AttachVolume volume="vol-0abebdbf12f015dec" instance="i-0cd903533a771549c" request returned {
    AttachTime: 2022-06-06 15:01:40.616 +0000 UTC,
    Device: "/dev/xvdba",
    InstanceId: "i-0cd903533a771549c",
... skipping 219 lines ...
I0606 15:03:13.143879 1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbb -> volume vol-07d12a705cad508e2
I0606 15:03:13.143900 1 controller.go:343] [Debug] ControllerPublishVolume: volume vol-07d12a705cad508e2 attached to node i-0cd903533a771549c through device /dev/xvdbb
I0606 15:03:15.732345 1 controller.go:102] CreateVolume: called with args {Name:pvc-72c01292-ebc4-451d-a63e-84b6182fce01 CapacityRange:required_bytes:1073741824 VolumeCapabilities:[block:<> access_mode:<mode:SINGLE_NODE_WRITER > ] Parameters:map[csi.storage.k8s.io/pv/name:pvc-72c01292-ebc4-451d-a63e-84b6182fce01 csi.storage.k8s.io/pvc/name:awsjw6q4 csi.storage.k8s.io/pvc/namespace:multivolume-6827] Secrets:map[] VolumeContentSource:<nil> AccessibilityRequirements:requisite:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"us-west-2b" > > requisite:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"us-west-2a" > > requisite:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"us-west-2c" > > preferred:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"us-west-2a" > > preferred:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"us-west-2b" > > preferred:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"us-west-2c" > > XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0606 15:03:15.738866 1 controller.go:102] CreateVolume: called with args {Name:pvc-16c8a779-1c17-4c47-88a0-ea174c4f7c7a CapacityRange:required_bytes:1073741824 VolumeCapabilities:[block:<> access_mode:<mode:SINGLE_NODE_WRITER > ] Parameters:map[csi.storage.k8s.io/pv/name:pvc-16c8a779-1c17-4c47-88a0-ea174c4f7c7a csi.storage.k8s.io/pvc/name:awslkrb9 csi.storage.k8s.io/pvc/namespace:multivolume-6827] Secrets:map[] VolumeContentSource:<nil> AccessibilityRequirements:requisite:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"us-west-2b" > > requisite:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"us-west-2a" > > requisite:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"us-west-2c" > > preferred:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"us-west-2a" > > preferred:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"us-west-2b" > > preferred:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"us-west-2c" > > XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0606 15:03:16.092681 1 controller.go:315] ControllerPublishVolume: called with args {VolumeId:vol-05436ddb8b8a98ece NodeId:i-0cd903533a771549c VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0606 15:03:16.676134 1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbc -> volume vol-05436ddb8b8a98ece
E0606 15:03:16.676164 1 driver.go:120] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-05436ddb8b8a98ece" to node "i-0cd903533a771549c": could not attach volume "vol-05436ddb8b8a98ece" to node "i-0cd903533a771549c": IncorrectState: vol-05436ddb8b8a98ece is not 'available'.
	status code: 400, request id: 87e8bc8b-2799-44bb-9466-5b39348144ac
I0606 15:03:16.688019 1 controller.go:315] ControllerPublishVolume: called with args {VolumeId:vol-05436ddb8b8a98ece NodeId:i-0cd903533a771549c VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0606 15:03:17.254748 1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbc -> volume vol-05436ddb8b8a98ece
E0606 15:03:17.254794 1 driver.go:120] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-05436ddb8b8a98ece" to node "i-0cd903533a771549c": could not attach volume "vol-05436ddb8b8a98ece" to node "i-0cd903533a771549c": IncorrectState: vol-05436ddb8b8a98ece is not 'available'.
	status code: 400, request id: 6c4e1c49-79ab-46ba-ad29-2742275c1ff6
I0606 15:03:17.687971 1 controller.go:315] ControllerPublishVolume: called with args {VolumeId:vol-05436ddb8b8a98ece NodeId:i-0cd903533a771549c VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0606 15:03:18.356278 1 cloud.go:437] [Debug] AttachVolume volume="vol-05436ddb8b8a98ece" instance="i-0cd903533a771549c" request returned {
    AttachTime: 2022-06-06 15:03:18.339 +0000 UTC,
    Device: "/dev/xvdbc",
    InstanceId: "i-0cd903533a771549c",
... skipping 268 lines ...
I0606 15:05:16.498440 1 controller.go:343] [Debug] ControllerPublishVolume: volume vol-028ab37e919ac37dd attached to node i-0cd903533a771549c through device /dev/xvdba
I0606 15:05:17.033093 1 controller.go:374] ControllerUnpublishVolume: called with args {VolumeId:vol-0025e0576711507c3 NodeId:i-08e946b39887a82a2 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0606 15:05:17.454744 1 cloud.go:610] Waiting for volume "vol-0025e0576711507c3" state: actual=detaching, desired=detached
I0606 15:05:18.436993 1 controller.go:315] ControllerPublishVolume: called with args {VolumeId:vol-041200cba69eec62b NodeId:i-0cd903533a771549c VolumeCapability:mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER > Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0606 15:05:18.524655 1 cloud.go:610] Waiting for volume "vol-0025e0576711507c3" state: actual=detaching, desired=detached
I0606 15:05:19.101413 1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbb -> volume vol-041200cba69eec62b
E0606 15:05:19.101449 1 driver.go:120] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-041200cba69eec62b" to node "i-0cd903533a771549c": could not attach volume "vol-041200cba69eec62b" to node "i-0cd903533a771549c": IncorrectState: vol-041200cba69eec62b is not 'available'.
	status code: 400, request id: d2b3b835-2bb9-4089-8436-f957cd80a6f6
I0606 15:05:19.109673 1 controller.go:315] ControllerPublishVolume: called with args {VolumeId:vol-041200cba69eec62b NodeId:i-0cd903533a771549c VolumeCapability:mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER > Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0606 15:05:19.560449 1 inflight.go:73] Node Service: volume="pvc-911ddb04-ea04-403e-9a76-54c9f91ba99d" operation finished
I0606 15:05:19.650350 1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbb -> volume vol-041200cba69eec62b
E0606 15:05:19.650393 1 driver.go:120] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-041200cba69eec62b" to node "i-0cd903533a771549c": could not attach volume "vol-041200cba69eec62b" to node "i-0cd903533a771549c": IncorrectState: vol-041200cba69eec62b is not 'available'.
	status code: 400, request id: 33857745-f316-47e5-9da8-9d2ef7079207
I0606 15:05:20.110401 1 controller.go:315] ControllerPublishVolume: called with args {VolumeId:vol-041200cba69eec62b NodeId:i-0cd903533a771549c VolumeCapability:mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER > Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0606 15:05:20.270863 1 controller.go:315] ControllerPublishVolume: called with args {VolumeId:vol-0e2e4fbefd1121941 NodeId:i-0cd903533a771549c VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0606 15:05:20.395989 1 cloud.go:610] Waiting for volume "vol-0025e0576711507c3" state: actual=detaching, desired=detached
I0606 15:05:20.686156 1 cloud.go:437] [Debug] AttachVolume volume="vol-041200cba69eec62b" instance="i-0cd903533a771549c" request returned {
    AttachTime: 2022-06-06 15:05:20.674 +0000 UTC,
... skipping 3009 lines ...