Recent runs
PR | torredil: Use minimal base image for linux builds
Result | ABORTED
Tests | 0 failed / 69 succeeded
Started |
Elapsed | 27m51s
Revision | 77f242177913b1cd163428d72251c44197316384
Refs | 1233
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should store data
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should store data
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should store data
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which share the same volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext3)] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volume-lifecycle-performance should provision volumes at scale within performance constraints [Slow] [Serial]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which share the same volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which share the same volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ext3)] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ext4)] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow][LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion
EBS CSI Migration Suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which share the same volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ext3)] volumes should store data
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volume-lifecycle-performance should provision volumes at scale within performance constraints [Slow] [Serial]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which share the same volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which share the same volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (ext3)] volumes should store data
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow][LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node [LinuxOnly]
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion
EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion
EBS CSI Migration Suite [sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive] verify volume status after node power off
EBS CSI Migration Suite [sig-storage] Node Unregister [Feature:vsphere] [Slow] [Disruptive] node unregister
EBS CSI Migration Suite [sig-storage] PersistentVolumes [Feature:vsphere][Feature:LabelSelector] Selector-Label Volume Binding:vsphere [Feature:vsphere] should bind volume with claim for given label
EBS CSI Migration Suite [sig-storage] PersistentVolumes [Feature:vsphere][Feature:ReclaimPolicy] persistentvolumereclaim:vsphere [Feature:vsphere] should delete persistent volume when reclaimPolicy set to delete and associated claim is deleted
EBS CSI Migration Suite [sig-storage] PersistentVolumes [Feature:vsphere][Feature:ReclaimPolicy] persistentvolumereclaim:vsphere [Feature:vsphere] should not detach and unmount PV when associated pvc with delete as reclaimPolicy is deleted when it is in use by the pod
EBS CSI Migration Suite [sig-storage] PersistentVolumes [Feature:vsphere][Feature:ReclaimPolicy] persistentvolumereclaim:vsphere [Feature:vsphere] should retain persistent volume when reclaimPolicy set to retain when associated claim is deleted
EBS CSI Migration Suite [sig-storage] PersistentVolumes:vsphere [Feature:vsphere] should test that a file written to the vsphere volume mount before kubelet restart can be read after restart [Disruptive]
EBS CSI Migration Suite [sig-storage] PersistentVolumes:vsphere [Feature:vsphere] should test that a vsphere volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns [Disruptive]
EBS CSI Migration Suite [sig-storage] PersistentVolumes:vsphere [Feature:vsphere] should test that deleting a PVC before the pod does not cause pod deletion to fail on vsphere volume detach
EBS CSI Migration Suite [sig-storage] PersistentVolumes:vsphere [Feature:vsphere] should test that deleting the Namespace of a PVC and Pod causes the successful detach of vsphere volume
EBS CSI Migration Suite [sig-storage] PersistentVolumes:vsphere [Feature:vsphere] should test that deleting the PV before the pod does not cause pod deletion to fail on vsphere volume detach
EBS CSI Migration Suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify VSAN storage capability with invalid capability name objectSpaceReserve is not honored for dynamically provisioned pvc using storageclass
EBS CSI Migration Suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify VSAN storage capability with invalid diskStripes value is not honored for dynamically provisioned pvc using storageclass
EBS CSI Migration Suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify VSAN storage capability with invalid hostFailuresToTolerate value is not honored for dynamically provisioned pvc using storageclass
EBS CSI Migration Suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify VSAN storage capability with non-vsan datastore is not honored for dynamically provisioned pvc using storageclass
EBS CSI Migration Suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify VSAN storage capability with valid diskStripes and objectSpaceReservation values and a VSAN datastore is honored for dynamically provisioned pvc using storageclass
EBS CSI Migration Suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify VSAN storage capability with valid diskStripes and objectSpaceReservation values is honored for dynamically provisioned pvc using storageclass
EBS CSI Migration Suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify VSAN storage capability with valid hostFailuresToTolerate and cacheReservation values is honored for dynamically provisioned pvc using storageclass
EBS CSI Migration Suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify VSAN storage capability with valid objectSpaceReservation and iopsLimit values is honored for dynamically provisioned pvc using storageclass
EBS CSI Migration Suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify an existing and compatible SPBM policy is honored for dynamically provisioned pvc using storageclass
EBS CSI Migration Suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify an if a SPBM policy and VSAN capabilities cannot be honored for dynamically provisioned pvc using storageclass
EBS CSI Migration Suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify clean up of stale dummy VM for dynamically provisioned pvc using SPBM policy
EBS CSI Migration Suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify if a SPBM policy is not honored on a non-compatible datastore for dynamically provisioned pvc using storageclass
EBS CSI Migration Suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify if a non-existing SPBM policy is not honored for dynamically provisioned pvc using storageclass
EBS CSI Migration Suite [sig-storage] Verify Volume Attach Through vpxd Restart [Feature:vsphere][Serial][Disruptive] verify volume remains attached through vpxd restart
EBS CSI Migration Suite [sig-storage] Volume Attach Verify [Feature:vsphere][Serial][Disruptive] verify volume remains attached after master kubelet restart
EBS CSI Migration Suite [sig-storage] Volume Disk Format [Feature:vsphere] verify disk format type - eagerzeroedthick is honored for dynamically provisioned pv using storageclass
EBS CSI Migration Suite [sig-storage] Volume Disk Format [Feature:vsphere] verify disk format type - thin is honored for dynamically provisioned pv using storageclass
EBS CSI Migration Suite [sig-storage] Volume Disk Format [Feature:vsphere] verify disk format type - zeroedthick is honored for dynamically provisioned pv using storageclass
EBS CSI Migration Suite [sig-storage] Volume Disk Size [Feature:vsphere] verify dynamically provisioned pv has size rounded up correctly
EBS CSI Migration Suite [sig-storage] Volume FStype [Feature:vsphere] verify fstype - default value should be ext4
EBS CSI Migration Suite [sig-storage] Volume FStype [Feature:vsphere] verify fstype - ext3 formatted volume
EBS CSI Migration Suite [sig-storage] Volume FStype [Feature:vsphere] verify invalid fstype
EBS CSI Migration Suite [sig-storage] Volume Operations Storm [Feature:vsphere] should create pod with many volumes and verify no attach call fails
EBS CSI Migration Suite [sig-storage] Volume Placement [Feature:vsphere] should create and delete pod with multiple volumes from different datastore
EBS CSI Migration Suite [sig-storage] Volume Placement [Feature:vsphere] should create and delete pod with multiple volumes from same datastore
EBS CSI Migration Suite [sig-storage] Volume Placement [Feature:vsphere] should create and delete pod with the same volume source attach/detach to different worker nodes
EBS CSI Migration Suite [sig-storage] Volume Placement [Feature:vsphere] should create and delete pod with the same volume source on the same worker node
EBS CSI Migration Suite [sig-storage] Volume Placement [Feature:vsphere] test back to back pod creation and deletion with different volume sources on the same worker node
EBS CSI Migration Suite [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere] verify dynamic provision with default parameter on clustered datastore
EBS CSI Migration Suite [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere] verify dynamic provision with spbm policy on clustered datastore
EBS CSI Migration Suite [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere] verify static provisioning on clustered datastore
EBS CSI Migration Suite [sig-storage] Volume Provisioning on Datastore [Feature:vsphere] verify dynamically provisioned pv using storageclass fails on an invalid datastore
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify PVC creation fails if no zones are specified in the storage class (No shared datastores exist among all the nodes)
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify PVC creation fails if only datastore is specified in the storage class (No shared datastores exist among all the nodes)
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify PVC creation fails if only storage policy is specified in the storage class (No shared datastores exist among all the nodes)
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify PVC creation fails if the availability zone specified in the storage class have no shared datastores under it.
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify PVC creation with an invalid VSAN capability along with a compatible zone combination specified in storage class fails
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify PVC creation with compatible policy and datastore without any zones specified in the storage class fails (No shared datastores exist among all the nodes)
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify PVC creation with incompatible datastore and zone combination specified in storage class fails
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify PVC creation with incompatible storage policy along with compatible zone and datastore combination specified in storage class fails
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify PVC creation with incompatible storagePolicy and zone combination specified in storage class fails
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify PVC creation with incompatible zone along with compatible storagePolicy and datastore combination specified in storage class fails
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify PVC creation with invalid zone specified in storage class fails
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify a PVC creation fails when multiple zones are specified in the storage class without shared datastores among the zones in waitForFirstConsumer binding mode
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify a pod fails to get scheduled when conflicting volume topology (allowedTopologies) and pod scheduling constraints(nodeSelector) are specified
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify a pod is created and attached to a dynamically created PV with storage policy specified in storage class in waitForFirstConsumer binding mode
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify a pod is created and attached to a dynamically created PV with storage policy specified in storage class in waitForFirstConsumer binding mode with allowedTopologies
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify a pod is created and attached to a dynamically created PV with storage policy specified in storage class in waitForFirstConsumer binding mode with multiple allowedTopologies
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify a pod is created and attached to a dynamically created PV, based on a VSAN capability, datastore and compatible zone specified in storage class
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify a pod is created and attached to a dynamically created PV, based on allowed zones specified in storage class
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify a pod is created and attached to a dynamically created PV, based on multiple zones specified in storage class
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify a pod is created and attached to a dynamically created PV, based on multiple zones specified in the storage class. (No shared datastores exist among both zones)
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify a pod is created and attached to a dynamically created PV, based on the allowed zones and datastore specified in storage class
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify a pod is created and attached to a dynamically created PV, based on the allowed zones and datastore specified in storage class when there are multiple datastores with the same name under different zones across datacenters
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify a pod is created and attached to a dynamically created PV, based on the allowed zones and storage policy specified in storage class
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify a pod is created and attached to a dynamically created PV, based on the allowed zones specified in storage class when the datastore under the zone is present in another datacenter
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify a pod is created and attached to a dynamically created PV, based on the allowed zones, datastore and storage policy specified in storage class
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify a pod is created on a non-Workspace zone and attached to a dynamically created PV, based on the allowed zones and storage policy specified in storage class
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify dynamically created pv with allowed zones specified in storage class, shows the right zone information on its labels
EBS CSI Migration Suite [sig-storage] Zone Support [Feature:vsphere] Verify dynamically created pv with multiple zones specified in the storage class, shows both the zones on its labels
EBS CSI Migration Suite [sig-storage] vcp at scale [Feature:vsphere] vsphere scale tests
EBS CSI Migration Suite [sig-storage] vcp-performance [Feature:vsphere] vcp performance tests
EBS CSI Migration Suite [sig-storage] vsphere cloud provider stress [Feature:vsphere] vsphere stress tests
EBS CSI Migration Suite [sig-storage] vsphere statefulset [Feature:vsphere] vsphere statefulset testing
... skipping 296 lines ...
## Validating cluster test-cluster-25979.k8s.local
# Using cluster from kubectl context: test-cluster-25979.k8s.local

Validating cluster test-cluster-25979.k8s.local

W0602 21:06:21.582052 6198 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-25979-k8-v80p4g-1402702103.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-25979-k8-v80p4g-1402702103.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0602 21:06:31.622132 6198 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-25979-k8-v80p4g-1402702103.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-25979-k8-v80p4g-1402702103.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0602 21:06:41.657277 6198 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-25979-k8-v80p4g-1402702103.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-25979-k8-v80p4g-1402702103.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0602 21:06:51.697971 6198 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-25979-k8-v80p4g-1402702103.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-25979-k8-v80p4g-1402702103.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0602 21:07:01.733099 6198 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-25979-k8-v80p4g-1402702103.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-25979-k8-v80p4g-1402702103.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0602 21:07:12.940390 6198 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-25979-k8-v80p4g-1402702103.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-25979-k8-v80p4g-1402702103.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0602 21:07:24.128848 6198 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-25979-k8-v80p4g-1402702103.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-25979-k8-v80p4g-1402702103.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0602 21:07:34.163020 6198 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-25979-k8-v80p4g-1402702103.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-25979-k8-v80p4g-1402702103.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0602 21:07:45.350397 6198 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-25979-k8-v80p4g-1402702103.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-25979-k8-v80p4g-1402702103.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0602 21:08:07.020224 6198 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the server ("") has prevented the request from succeeding (get nodes)
W0602 21:08:28.702462 6198 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the server ("") has prevented the request from succeeding (get nodes)
W0602 21:08:50.321685 6198 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the server ("") has prevented the request from succeeding (get nodes)
W0602 21:09:11.926662 6198 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the server ("") has prevented the request from succeeding (get nodes)
W0602 21:09:33.604331 6198 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the server ("") has prevented the request from succeeding (get nodes)

INSTANCE GROUPS
NAME                ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-west-2a   Master  t3.medium    1    1    us-west-2a
nodes-us-west-2a    Node    c5.large     1    1    us-west-2a
nodes-us-west-2b    Node    c5.large     1    1    us-west-2b
nodes-us-west-2c    Node    c5.large     1    1    us-west-2c
... skipping 6 lines ...
KIND     NAME                                              MESSAGE
Machine  i-07d6e10f507edd160                               machine "i-07d6e10f507edd160" has not yet joined cluster
Machine  i-0b0929c1c7c02115b                               machine "i-0b0929c1c7c02115b" has not yet joined cluster
Machine  i-0f85fc245dc63538a                               machine "i-0f85fc245dc63538a" has not yet joined cluster
Node     ip-172-20-46-6.us-west-2.compute.internal         node "ip-172-20-46-6.us-west-2.compute.internal" of role "master" is not ready

Validation Failed
W0602 21:09:48.680776 6198 validate_cluster.go:221] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-west-2a   Master  t3.medium    1    1    us-west-2a
nodes-us-west-2a    Node    c5.large     1    1    us-west-2a
nodes-us-west-2b    Node    c5.large     1    1    us-west-2b
... skipping 10 lines ...
Machine  i-0f85fc245dc63538a                               machine "i-0f85fc245dc63538a" has not yet joined cluster
Node     ip-172-20-46-6.us-west-2.compute.internal         node "ip-172-20-46-6.us-west-2.compute.internal" of role "master" is not ready
Pod      kube-system/coredns-8f5559c9b-nw9g6               system-cluster-critical pod "coredns-8f5559c9b-nw9g6" is pending
Pod      kube-system/coredns-autoscaler-6f594f4c58-qg8lr   system-cluster-critical pod "coredns-autoscaler-6f594f4c58-qg8lr" is pending
Pod      kube-system/dns-controller-5d59c585d8-mq42t       system-cluster-critical pod "dns-controller-5d59c585d8-mq42t" is pending

Validation Failed
W0602 21:10:00.859221 6198 validate_cluster.go:221] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-west-2a   Master  t3.medium    1    1    us-west-2a
nodes-us-west-2a    Node    c5.large     1    1    us-west-2a
nodes-us-west-2b    Node    c5.large     1    1    us-west-2b
... skipping 11 lines ...
Node     ip-172-20-46-6.us-west-2.compute.internal         master "ip-172-20-46-6.us-west-2.compute.internal" is missing kube-apiserver pod
Node     ip-172-20-46-6.us-west-2.compute.internal         master "ip-172-20-46-6.us-west-2.compute.internal" is missing kube-scheduler pod
Pod      kube-system/coredns-8f5559c9b-nw9g6               system-cluster-critical pod "coredns-8f5559c9b-nw9g6" is pending
Pod      kube-system/coredns-autoscaler-6f594f4c58-qg8lr   system-cluster-critical pod "coredns-autoscaler-6f594f4c58-qg8lr" is pending
Pod      kube-system/etcd-manager-main-ip-172-20-46-6.us-west-2.compute.internal   system-cluster-critical pod "etcd-manager-main-ip-172-20-46-6.us-west-2.compute.internal" is pending

Validation Failed
W0602 21:10:13.033185 6198 validate_cluster.go:221] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-west-2a   Master  t3.medium    1    1    us-west-2a
nodes-us-west-2a    Node    c5.large     1    1    us-west-2a
nodes-us-west-2b    Node    c5.large     1    1    us-west-2b
... skipping 10 lines ...
Machine  i-0f85fc245dc63538a                               machine "i-0f85fc245dc63538a" has not yet joined cluster
Node     ip-172-20-46-6.us-west-2.compute.internal         master "ip-172-20-46-6.us-west-2.compute.internal" is missing kube-apiserver pod
Node     ip-172-20-46-6.us-west-2.compute.internal         master "ip-172-20-46-6.us-west-2.compute.internal" is missing kube-scheduler pod
Pod      kube-system/coredns-8f5559c9b-nw9g6               system-cluster-critical pod "coredns-8f5559c9b-nw9g6" is pending
Pod      kube-system/coredns-autoscaler-6f594f4c58-qg8lr   system-cluster-critical pod "coredns-autoscaler-6f594f4c58-qg8lr" is pending

Validation Failed
W0602 21:10:25.388341 6198 validate_cluster.go:221] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-west-2a   Master  t3.medium    1    1    us-west-2a
nodes-us-west-2a    Node    c5.large     1    1    us-west-2a
nodes-us-west-2b    Node    c5.large     1    1    us-west-2b
... skipping 11 lines ...
Node     ip-172-20-46-6.us-west-2.compute.internal         master "ip-172-20-46-6.us-west-2.compute.internal" is missing kube-apiserver pod
Node     ip-172-20-46-6.us-west-2.compute.internal         master "ip-172-20-46-6.us-west-2.compute.internal" is missing kube-scheduler pod
Pod      kube-system/coredns-8f5559c9b-nw9g6               system-cluster-critical pod "coredns-8f5559c9b-nw9g6" is pending
Pod      kube-system/coredns-autoscaler-6f594f4c58-qg8lr   system-cluster-critical pod "coredns-autoscaler-6f594f4c58-qg8lr" is pending
Pod      kube-system/etcd-manager-events-ip-172-20-46-6.us-west-2.compute.internal   system-cluster-critical pod "etcd-manager-events-ip-172-20-46-6.us-west-2.compute.internal" is pending

Validation Failed
W0602 21:10:37.619699 6198 validate_cluster.go:221] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-west-2a   Master  t3.medium    1    1    us-west-2a
nodes-us-west-2a    Node    c5.large     1    1    us-west-2a
nodes-us-west-2b    Node    c5.large     1    1    us-west-2b
... skipping 11 lines ...
Node     ip-172-20-46-6.us-west-2.compute.internal         master "ip-172-20-46-6.us-west-2.compute.internal" is missing kube-apiserver pod
Node     ip-172-20-46-6.us-west-2.compute.internal         master "ip-172-20-46-6.us-west-2.compute.internal" is missing kube-scheduler pod
Pod      kube-system/coredns-8f5559c9b-nw9g6               system-cluster-critical pod "coredns-8f5559c9b-nw9g6" is pending
Pod      kube-system/coredns-autoscaler-6f594f4c58-qg8lr   system-cluster-critical pod "coredns-autoscaler-6f594f4c58-qg8lr" is pending
Pod      kube-system/kube-scheduler-ip-172-20-46-6.us-west-2.compute.internal   system-cluster-critical pod "kube-scheduler-ip-172-20-46-6.us-west-2.compute.internal" is pending

Validation Failed
W0602 21:10:49.934629 6198 validate_cluster.go:221] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-west-2a   Master  t3.medium    1    1    us-west-2a
nodes-us-west-2a    Node    c5.large     1    1    us-west-2a
nodes-us-west-2b    Node    c5.large     1    1    us-west-2b
... skipping 8 lines ...
Machine  i-07d6e10f507edd160                               machine "i-07d6e10f507edd160" has not yet joined cluster
Machine  i-0b0929c1c7c02115b                               machine "i-0b0929c1c7c02115b" has not yet joined cluster
Machine  i-0f85fc245dc63538a                               machine "i-0f85fc245dc63538a" has not yet joined cluster
Pod      kube-system/coredns-8f5559c9b-nw9g6               system-cluster-critical pod "coredns-8f5559c9b-nw9g6" is pending
Pod      kube-system/coredns-autoscaler-6f594f4c58-qg8lr   system-cluster-critical pod "coredns-autoscaler-6f594f4c58-qg8lr" is pending

Validation Failed
W0602 21:11:02.269155 6198 validate_cluster.go:221] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-west-2a   Master  t3.medium    1    1    us-west-2a
nodes-us-west-2a    Node    c5.large     1    1    us-west-2a
nodes-us-west-2b    Node    c5.large     1    1    us-west-2b
... skipping 11 lines ...
Node     ip-172-20-102-107.us-west-2.compute.internal      node "ip-172-20-102-107.us-west-2.compute.internal" of role "node" is not ready
Node     ip-172-20-53-92.us-west-2.compute.internal        node "ip-172-20-53-92.us-west-2.compute.internal" of role "node" is not ready
Node     ip-172-20-79-140.us-west-2.compute.internal       node "ip-172-20-79-140.us-west-2.compute.internal" of role "node" is not ready
Pod      kube-system/coredns-8f5559c9b-nw9g6               system-cluster-critical pod "coredns-8f5559c9b-nw9g6" is pending
Pod      kube-system/coredns-autoscaler-6f594f4c58-qg8lr   system-cluster-critical pod "coredns-autoscaler-6f594f4c58-qg8lr" is pending

Validation Failed
W0602 21:11:14.344152 6198 validate_cluster.go:221] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-west-2a   Master  t3.medium    1    1    us-west-2a
nodes-us-west-2a    Node    c5.large     1    1    us-west-2a
nodes-us-west-2b    Node    c5.large     1    1    us-west-2b
... skipping 11 lines ...
Node     ip-172-20-102-107.us-west-2.compute.internal      node "ip-172-20-102-107.us-west-2.compute.internal" of role "node" is not ready
Node     ip-172-20-53-92.us-west-2.compute.internal        node "ip-172-20-53-92.us-west-2.compute.internal" of role "node" is not ready
Node     ip-172-20-79-140.us-west-2.compute.internal       node "ip-172-20-79-140.us-west-2.compute.internal" of role "node" is not ready
Pod      kube-system/coredns-8f5559c9b-nw9g6               system-cluster-critical pod "coredns-8f5559c9b-nw9g6" is pending
Pod      kube-system/coredns-autoscaler-6f594f4c58-qg8lr   system-cluster-critical pod "coredns-autoscaler-6f594f4c58-qg8lr" is pending

Validation Failed
W0602 21:11:26.756578 6198 validate_cluster.go:221] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-west-2a   Master  t3.medium    1    1    us-west-2a
nodes-us-west-2a    Node    c5.large     1    1    us-west-2a
nodes-us-west-2b    Node    c5.large     1    1    us-west-2b
... skipping 8 lines ...
VALIDATION ERRORS
KIND     NAME                                              MESSAGE
Pod      kube-system/coredns-8f5559c9b-nw9g6               system-cluster-critical pod "coredns-8f5559c9b-nw9g6" is pending
Pod      kube-system/coredns-autoscaler-6f594f4c58-qg8lr   system-cluster-critical pod "coredns-autoscaler-6f594f4c58-qg8lr" is pending

Validation Failed
W0602 21:11:39.030157 6198 validate_cluster.go:221] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-west-2a   Master  t3.medium    1    1    us-west-2a
nodes-us-west-2a    Node    c5.large     1    1    us-west-2a
nodes-us-west-2b    Node    c5.large     1    1    us-west-2b
... skipping 10 lines ...
KIND     NAME                                              MESSAGE
Pod      kube-system/coredns-8f5559c9b-nw9g6               system-cluster-critical pod "coredns-8f5559c9b-nw9g6" is pending
Pod      kube-system/coredns-autoscaler-6f594f4c58-qg8lr   system-cluster-critical pod "coredns-autoscaler-6f594f4c58-qg8lr" is pending
Pod      kube-system/kube-proxy-ip-172-20-53-92.us-west-2.compute.internal    system-node-critical pod "kube-proxy-ip-172-20-53-92.us-west-2.compute.internal" is pending
Pod      kube-system/kube-proxy-ip-172-20-79-140.us-west-2.compute.internal   system-node-critical pod "kube-proxy-ip-172-20-79-140.us-west-2.compute.internal" is pending

Validation Failed
W0602 21:11:51.508806 6198 validate_cluster.go:221] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-west-2a   Master  t3.medium    1    1    us-west-2a
nodes-us-west-2a    Node    c5.large     1    1    us-west-2a
nodes-us-west-2b    Node    c5.large     1    1    us-west-2b
... skipping 7 lines ...
ip-172-20-79-140.us-west-2.compute.internal   node   True

VALIDATION ERRORS
KIND     NAME                                              MESSAGE
Pod      kube-system/coredns-8f5559c9b-fmncr               system-cluster-critical pod "coredns-8f5559c9b-fmncr" is not ready (coredns)

Validation Failed
W0602 21:12:03.860676 6198 validate_cluster.go:221] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-west-2a   Master  t3.medium    1    1    us-west-2a
nodes-us-west-2a    Node    c5.large     1    1    us-west-2a
nodes-us-west-2b    Node    c5.large     1    1    us-west-2b
... skipping 330 lines ...
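The validation loop above is kops polling the freshly created cluster: each pass it lists the nodes and system-critical pods, prints the instance groups and any validation errors, logs "Validation Failed ... (will retry): cluster not yet healthy", and tries again until everything is ready or the overall timeout expires. As a rough illustration only (this is a hypothetical helper, not the kops validator itself), the same wait pattern can be sketched with client-go:

// Package clusterwait is a minimal sketch of the poll-and-retry pattern shown
// in the kops validation log above. Assumptions: a client-go clientset is
// already constructed; the real validator checks far more than node readiness.
package clusterwait

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForNodesReady polls every 10s until at least `want` nodes report Ready.
func WaitForNodesReady(cs kubernetes.Interface, want int, timeout time.Duration) error {
	return wait.PollImmediate(10*time.Second, timeout, func() (bool, error) {
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			// Mirror the log above: list errors are logged and retried, not fatal.
			fmt.Printf("(will retry): error listing nodes: %v\n", err)
			return false, nil
		}
		ready := 0
		for _, n := range nodes.Items {
			for _, c := range n.Status.Conditions {
				if c.Type == v1.NodeReady && c.Status == v1.ConditionTrue {
					ready++
				}
			}
		}
		if ready < want {
			fmt.Printf("(will retry): cluster not yet healthy: %d/%d nodes ready\n", ready, want)
			return false, nil
		}
		return true, nil
	})
}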
Jun 2 21:14:51.038: INFO: Pod aws-client still exists
Jun 2 21:14:52.975: INFO: Waiting for pod aws-client to disappear
Jun 2 21:14:53.038: INFO: Pod aws-client still exists
Jun 2 21:14:54.974: INFO: Waiting for pod aws-client to disappear
Jun 2 21:14:55.037: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
Jun 2 21:14:55.236: INFO: Couldn't delete PD "aws://us-west-2a/vol-0e8bdcf359e0a1ac8", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0e8bdcf359e0a1ac8 is currently attached to i-0b0929c1c7c02115b
	status code: 400, request id: 44dd2d5c-0ba4-49d1-b8d7-d8af975e0119
Jun 2 21:15:00.713: INFO: Successfully deleted PD "aws://us-west-2a/vol-0e8bdcf359e0a1ac8".
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 2 21:15:00.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-4896" for this suite.
... skipping 84 lines ...
Jun 2 21:14:40.439: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass provisioning-324z6nm6
STEP: creating a claim
Jun 2 21:14:40.507: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-j6r5
STEP: Creating a pod to test multi_subpath
Jun 2 21:14:40.711: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-j6r5" in namespace "provisioning-324" to be "Succeeded or Failed"
Jun 2 21:14:40.774: INFO: Pod "pod-subpath-test-dynamicpv-j6r5": Phase="Pending", Reason="", readiness=false. Elapsed: 63.693373ms
Jun 2 21:14:42.840: INFO: Pod "pod-subpath-test-dynamicpv-j6r5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129247324s
Jun 2 21:14:44.906: INFO: Pod "pod-subpath-test-dynamicpv-j6r5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194867027s
Jun 2 21:14:46.971: INFO: Pod "pod-subpath-test-dynamicpv-j6r5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.259915773s
Jun 2 21:14:49.037: INFO: Pod "pod-subpath-test-dynamicpv-j6r5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.325958851s
Jun 2 21:14:51.102: INFO: Pod "pod-subpath-test-dynamicpv-j6r5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.390883287s
Jun 2 21:14:53.167: INFO: Pod "pod-subpath-test-dynamicpv-j6r5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.456297999s
Jun 2 21:14:55.232: INFO: Pod "pod-subpath-test-dynamicpv-j6r5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.52117169s
STEP: Saw pod success
Jun 2 21:14:55.232: INFO: Pod "pod-subpath-test-dynamicpv-j6r5" satisfied condition "Succeeded or Failed"
Jun 2 21:14:55.296: INFO: Trying to get logs from node ip-172-20-53-92.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-j6r5 container test-container-subpath-dynamicpv-j6r5: <nil>
STEP: delete the pod
Jun 2 21:14:55.440: INFO: Waiting for pod pod-subpath-test-dynamicpv-j6r5 to disappear
Jun 2 21:14:55.504: INFO: Pod pod-subpath-test-dynamicpv-j6r5 no longer exists
STEP: Deleting pod
Jun 2 21:14:55.504: INFO: Deleting pod "pod-subpath-test-dynamicpv-j6r5" in namespace "provisioning-324"
... skipping 32 lines ...
[ebs-csi-migration] EBS CSI Migration [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85[0m [Driver: aws] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91[0m [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278[0m [36mDistro debian doesn't support ntfs -- skipping[0m /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:127 [90m------------------------------[0m ... skipping 148 lines ... Jun 2 21:14:54.297: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass provisioning-8063f9zzp [1mSTEP[0m: creating a claim Jun 2 21:14:54.365: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-lqkb [1mSTEP[0m: Creating a pod to test subpath Jun 2 21:14:54.581: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-lqkb" in namespace "provisioning-8063" to be "Succeeded or Failed" Jun 2 21:14:54.648: INFO: Pod "pod-subpath-test-dynamicpv-lqkb": Phase="Pending", Reason="", readiness=false. Elapsed: 67.008027ms Jun 2 21:14:56.715: INFO: Pod "pod-subpath-test-dynamicpv-lqkb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134230315s Jun 2 21:14:58.784: INFO: Pod "pod-subpath-test-dynamicpv-lqkb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.203151357s Jun 2 21:15:00.853: INFO: Pod "pod-subpath-test-dynamicpv-lqkb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.271446959s Jun 2 21:15:02.921: INFO: Pod "pod-subpath-test-dynamicpv-lqkb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.340153014s Jun 2 21:15:04.994: INFO: Pod "pod-subpath-test-dynamicpv-lqkb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.413368887s Jun 2 21:15:07.062: INFO: Pod "pod-subpath-test-dynamicpv-lqkb": Phase="Pending", Reason="", readiness=false. Elapsed: 12.480922235s Jun 2 21:15:09.130: INFO: Pod "pod-subpath-test-dynamicpv-lqkb": Phase="Pending", Reason="", readiness=false. Elapsed: 14.549352806s Jun 2 21:15:11.200: INFO: Pod "pod-subpath-test-dynamicpv-lqkb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.618649199s [1mSTEP[0m: Saw pod success Jun 2 21:15:11.200: INFO: Pod "pod-subpath-test-dynamicpv-lqkb" satisfied condition "Succeeded or Failed" Jun 2 21:15:11.272: INFO: Trying to get logs from node ip-172-20-53-92.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-lqkb container test-container-subpath-dynamicpv-lqkb: <nil> [1mSTEP[0m: delete the pod Jun 2 21:15:11.421: INFO: Waiting for pod pod-subpath-test-dynamicpv-lqkb to disappear Jun 2 21:15:11.488: INFO: Pod pod-subpath-test-dynamicpv-lqkb no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-lqkb Jun 2 21:15:11.488: INFO: Deleting pod "pod-subpath-test-dynamicpv-lqkb" in namespace "provisioning-8063" ... skipping 41 lines ... 
Jun 2 21:15:01.695: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Jun 2 21:15:02.022: INFO: Successfully created a new PD: "aws://us-west-2a/vol-0c3ce7fd6b1c777cd".
Jun 2 21:15:02.022: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-kzk5
STEP: Creating a pod to test exec-volume-test
Jun 2 21:15:02.093: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-kzk5" in namespace "volume-8226" to be "Succeeded or Failed"
Jun 2 21:15:02.166: INFO: Pod "exec-volume-test-inlinevolume-kzk5": Phase="Pending", Reason="", readiness=false. Elapsed: 72.861666ms
Jun 2 21:15:04.229: INFO: Pod "exec-volume-test-inlinevolume-kzk5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135682686s
Jun 2 21:15:06.294: INFO: Pod "exec-volume-test-inlinevolume-kzk5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.200948783s
Jun 2 21:15:08.358: INFO: Pod "exec-volume-test-inlinevolume-kzk5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.264580751s
Jun 2 21:15:10.421: INFO: Pod "exec-volume-test-inlinevolume-kzk5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.327672866s
Jun 2 21:15:12.483: INFO: Pod "exec-volume-test-inlinevolume-kzk5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.3902673s
Jun 2 21:15:14.547: INFO: Pod "exec-volume-test-inlinevolume-kzk5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.453821774s
Jun 2 21:15:16.611: INFO: Pod "exec-volume-test-inlinevolume-kzk5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.518100884s
Jun 2 21:15:18.675: INFO: Pod "exec-volume-test-inlinevolume-kzk5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.582444831s
Jun 2 21:15:20.745: INFO: Pod "exec-volume-test-inlinevolume-kzk5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.652459771s
STEP: Saw pod success
Jun 2 21:15:20.746: INFO: Pod "exec-volume-test-inlinevolume-kzk5" satisfied condition "Succeeded or Failed"
Jun 2 21:15:20.810: INFO: Trying to get logs from node ip-172-20-53-92.us-west-2.compute.internal pod exec-volume-test-inlinevolume-kzk5 container exec-container-inlinevolume-kzk5: <nil>
STEP: delete the pod
Jun 2 21:15:20.948: INFO: Waiting for pod exec-volume-test-inlinevolume-kzk5 to disappear
Jun 2 21:15:21.010: INFO: Pod exec-volume-test-inlinevolume-kzk5 no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-kzk5
Jun 2 21:15:21.010: INFO: Deleting pod "exec-volume-test-inlinevolume-kzk5" in namespace "volume-8226"
Jun 2 21:15:21.262: INFO: Couldn't delete PD "aws://us-west-2a/vol-0c3ce7fd6b1c777cd", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0c3ce7fd6b1c777cd is currently attached to i-0b0929c1c7c02115b
	status code: 400, request id: 3c5ec5fd-c8fd-42e9-b0f0-4cd807ee2f9f
Jun 2 21:15:26.644: INFO: Couldn't delete PD "aws://us-west-2a/vol-0c3ce7fd6b1c777cd", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0c3ce7fd6b1c777cd is currently attached to i-0b0929c1c7c02115b
	status code: 400, request id: f85363b2-f09d-4265-a530-01c9c4fff8db
Jun 2 21:15:32.062: INFO: Successfully deleted PD "aws://us-west-2a/vol-0c3ce7fd6b1c777cd".
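Editor's note: the "Couldn't delete PD ..., sleeping 5s: ... VolumeInUse" lines above show the cleanup path retrying DeleteVolume while EC2 still reports the volume attached to the node that just ran the test pod. A hedged aws-sdk-go sketch of that retry-on-VolumeInUse loop; the helper name, attempt count, and placeholder volume ID are illustrative, not the suite's own cleanup code.

```go
package main

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// deleteVolumeWithRetry calls ec2:DeleteVolume and, on a VolumeInUse error
// (volume still attached), sleeps 5s and tries again, mirroring the
// "Couldn't delete PD ... sleeping 5s" lines in the log above.
func deleteVolumeWithRetry(svc *ec2.EC2, volumeID string, attempts int) error {
	for i := 0; i < attempts; i++ {
		_, err := svc.DeleteVolume(&ec2.DeleteVolumeInput{VolumeId: aws.String(volumeID)})
		if err == nil {
			return nil
		}
		if aerr, ok := err.(awserr.Error); ok && aerr.Code() == "VolumeInUse" {
			fmt.Printf("Couldn't delete %s, sleeping 5s: %v\n", volumeID, err)
			time.Sleep(5 * time.Second)
			continue
		}
		return err // any other error is fatal
	}
	return fmt.Errorf("volume %s still in use after %d attempts", volumeID, attempts)
}

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-west-2")))
	// Placeholder volume ID for illustration only.
	_ = deleteVolumeWithRetry(ec2.New(sess), "vol-0123456789abcdef0", 12)
}
```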
[AfterEach] [Testpattern: Inline-volume (xfs)][Slow] volumes /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 2 21:15:32.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "volume-8226" for this suite. ... skipping 49 lines ... Jun 2 21:15:16.803: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics [1mSTEP[0m: creating a test aws volume Jun 2 21:15:17.368: INFO: Successfully created a new PD: "aws://us-west-2a/vol-07f5723ccbdbb2339". Jun 2 21:15:17.368: INFO: Creating resource for inline volume [1mSTEP[0m: Creating pod exec-volume-test-inlinevolume-tq5z [1mSTEP[0m: Creating a pod to test exec-volume-test Jun 2 21:15:17.441: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-tq5z" in namespace "volume-5234" to be "Succeeded or Failed" Jun 2 21:15:17.519: INFO: Pod "exec-volume-test-inlinevolume-tq5z": Phase="Pending", Reason="", readiness=false. Elapsed: 77.478967ms Jun 2 21:15:19.588: INFO: Pod "exec-volume-test-inlinevolume-tq5z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146959879s Jun 2 21:15:21.658: INFO: Pod "exec-volume-test-inlinevolume-tq5z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.217291116s Jun 2 21:15:23.728: INFO: Pod "exec-volume-test-inlinevolume-tq5z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.286769316s Jun 2 21:15:25.797: INFO: Pod "exec-volume-test-inlinevolume-tq5z": Phase="Pending", Reason="", readiness=false. Elapsed: 8.355611054s Jun 2 21:15:27.866: INFO: Pod "exec-volume-test-inlinevolume-tq5z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.424753445s [1mSTEP[0m: Saw pod success Jun 2 21:15:27.866: INFO: Pod "exec-volume-test-inlinevolume-tq5z" satisfied condition "Succeeded or Failed" Jun 2 21:15:27.934: INFO: Trying to get logs from node ip-172-20-53-92.us-west-2.compute.internal pod exec-volume-test-inlinevolume-tq5z container exec-container-inlinevolume-tq5z: <nil> [1mSTEP[0m: delete the pod Jun 2 21:15:28.082: INFO: Waiting for pod exec-volume-test-inlinevolume-tq5z to disappear Jun 2 21:15:28.149: INFO: Pod exec-volume-test-inlinevolume-tq5z no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-inlinevolume-tq5z Jun 2 21:15:28.149: INFO: Deleting pod "exec-volume-test-inlinevolume-tq5z" in namespace "volume-5234" Jun 2 21:15:28.356: INFO: Couldn't delete PD "aws://us-west-2a/vol-07f5723ccbdbb2339", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-07f5723ccbdbb2339 is currently attached to i-0b0929c1c7c02115b status code: 400, request id: c999eb79-1eaf-4cd7-a483-3039a08928df Jun 2 21:15:33.738: INFO: Couldn't delete PD "aws://us-west-2a/vol-07f5723ccbdbb2339", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-07f5723ccbdbb2339 is currently attached to i-0b0929c1c7c02115b status code: 400, request id: 4bf02d43-99fc-4d06-a5fa-6e5d090c8664 Jun 2 21:15:39.159: INFO: Successfully deleted PD "aws://us-west-2a/vol-07f5723ccbdbb2339". [AfterEach] [Testpattern: Inline-volume (default fs)] volumes /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 2 21:15:39.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "volume-5234" for this suite. ... skipping 30 lines ... 
Jun 2 21:15:11.546: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass provisioning-2366tgtrm [1mSTEP[0m: creating a claim Jun 2 21:15:11.610: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-s9zc [1mSTEP[0m: Creating a pod to test subpath Jun 2 21:15:11.810: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-s9zc" in namespace "provisioning-2366" to be "Succeeded or Failed" Jun 2 21:15:11.879: INFO: Pod "pod-subpath-test-dynamicpv-s9zc": Phase="Pending", Reason="", readiness=false. Elapsed: 68.672044ms Jun 2 21:15:13.943: INFO: Pod "pod-subpath-test-dynamicpv-s9zc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133656065s Jun 2 21:15:16.011: INFO: Pod "pod-subpath-test-dynamicpv-s9zc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.200824233s Jun 2 21:15:18.075: INFO: Pod "pod-subpath-test-dynamicpv-s9zc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.264852851s Jun 2 21:15:20.140: INFO: Pod "pod-subpath-test-dynamicpv-s9zc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.330566823s Jun 2 21:15:22.208: INFO: Pod "pod-subpath-test-dynamicpv-s9zc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.397765772s Jun 2 21:15:24.273: INFO: Pod "pod-subpath-test-dynamicpv-s9zc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.462782996s Jun 2 21:15:26.338: INFO: Pod "pod-subpath-test-dynamicpv-s9zc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.527673169s Jun 2 21:15:28.404: INFO: Pod "pod-subpath-test-dynamicpv-s9zc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.593991748s [1mSTEP[0m: Saw pod success Jun 2 21:15:28.404: INFO: Pod "pod-subpath-test-dynamicpv-s9zc" satisfied condition "Succeeded or Failed" Jun 2 21:15:28.468: INFO: Trying to get logs from node ip-172-20-53-92.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-s9zc container test-container-subpath-dynamicpv-s9zc: <nil> [1mSTEP[0m: delete the pod Jun 2 21:15:28.607: INFO: Waiting for pod pod-subpath-test-dynamicpv-s9zc to disappear Jun 2 21:15:28.672: INFO: Pod pod-subpath-test-dynamicpv-s9zc no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-s9zc Jun 2 21:15:28.672: INFO: Deleting pod "pod-subpath-test-dynamicpv-s9zc" in namespace "provisioning-2366" ... skipping 136 lines ... Jun 2 21:15:37.192: INFO: PersistentVolumeClaim pvc-lmpsr found but phase is Pending instead of Bound. Jun 2 21:15:39.255: INFO: PersistentVolumeClaim pvc-lmpsr found and phase=Bound (6.25488323s) Jun 2 21:15:39.255: INFO: Waiting up to 3m0s for PersistentVolume aws-j94x9 to have phase Bound Jun 2 21:15:39.317: INFO: PersistentVolume aws-j94x9 found and phase=Bound (62.09532ms) [1mSTEP[0m: Creating pod exec-volume-test-preprovisionedpv-wj9v [1mSTEP[0m: Creating a pod to test exec-volume-test Jun 2 21:15:39.512: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-wj9v" in namespace "volume-6107" to be "Succeeded or Failed" Jun 2 21:15:39.574: INFO: Pod "exec-volume-test-preprovisionedpv-wj9v": Phase="Pending", Reason="", readiness=false. Elapsed: 61.990504ms Jun 2 21:15:41.638: INFO: Pod "exec-volume-test-preprovisionedpv-wj9v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125346416s Jun 2 21:15:43.717: INFO: Pod "exec-volume-test-preprovisionedpv-wj9v": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.204234602s Jun 2 21:15:45.779: INFO: Pod "exec-volume-test-preprovisionedpv-wj9v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.266668262s [1mSTEP[0m: Saw pod success Jun 2 21:15:45.779: INFO: Pod "exec-volume-test-preprovisionedpv-wj9v" satisfied condition "Succeeded or Failed" Jun 2 21:15:45.841: INFO: Trying to get logs from node ip-172-20-53-92.us-west-2.compute.internal pod exec-volume-test-preprovisionedpv-wj9v container exec-container-preprovisionedpv-wj9v: <nil> [1mSTEP[0m: delete the pod Jun 2 21:15:45.972: INFO: Waiting for pod exec-volume-test-preprovisionedpv-wj9v to disappear Jun 2 21:15:46.033: INFO: Pod exec-volume-test-preprovisionedpv-wj9v no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-preprovisionedpv-wj9v Jun 2 21:15:46.033: INFO: Deleting pod "exec-volume-test-preprovisionedpv-wj9v" in namespace "volume-6107" [1mSTEP[0m: Deleting pv and pvc Jun 2 21:15:46.095: INFO: Deleting PersistentVolumeClaim "pvc-lmpsr" Jun 2 21:15:46.158: INFO: Deleting PersistentVolume "aws-j94x9" Jun 2 21:15:46.887: INFO: Couldn't delete PD "aws://us-west-2a/vol-0504f64f3619bd080", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0504f64f3619bd080 is currently attached to i-0b0929c1c7c02115b status code: 400, request id: 62922c9f-db76-4c08-ac17-bd7cb6bd7731 Jun 2 21:15:52.300: INFO: Couldn't delete PD "aws://us-west-2a/vol-0504f64f3619bd080", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0504f64f3619bd080 is currently attached to i-0b0929c1c7c02115b status code: 400, request id: c095c8e1-eb0e-4d4a-84a8-1acada48a941 Jun 2 21:15:57.687: INFO: Successfully deleted PD "aws://us-west-2a/vol-0504f64f3619bd080". [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 2 21:15:57.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "volume-6107" for this suite. ... skipping 63 lines ... Jun 2 21:15:53.085: INFO: PersistentVolumeClaim pvc-qm5dz found but phase is Pending instead of Bound. Jun 2 21:15:55.151: INFO: PersistentVolumeClaim pvc-qm5dz found and phase=Bound (14.588593014s) Jun 2 21:15:55.151: INFO: Waiting up to 3m0s for PersistentVolume aws-nw5c7 to have phase Bound Jun 2 21:15:55.214: INFO: PersistentVolume aws-nw5c7 found and phase=Bound (63.757299ms) [1mSTEP[0m: Creating pod exec-volume-test-preprovisionedpv-bdmq [1mSTEP[0m: Creating a pod to test exec-volume-test Jun 2 21:15:55.411: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-bdmq" in namespace "volume-577" to be "Succeeded or Failed" Jun 2 21:15:55.490: INFO: Pod "exec-volume-test-preprovisionedpv-bdmq": Phase="Pending", Reason="", readiness=false. Elapsed: 78.940605ms Jun 2 21:15:57.558: INFO: Pod "exec-volume-test-preprovisionedpv-bdmq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146576197s Jun 2 21:15:59.623: INFO: Pod "exec-volume-test-preprovisionedpv-bdmq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.212368819s Jun 2 21:16:01.690: INFO: Pod "exec-volume-test-preprovisionedpv-bdmq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.278886789s Jun 2 21:16:03.755: INFO: Pod "exec-volume-test-preprovisionedpv-bdmq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.343675138s Jun 2 21:16:05.820: INFO: Pod "exec-volume-test-preprovisionedpv-bdmq": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.409070905s [1mSTEP[0m: Saw pod success Jun 2 21:16:05.820: INFO: Pod "exec-volume-test-preprovisionedpv-bdmq" satisfied condition "Succeeded or Failed" Jun 2 21:16:05.885: INFO: Trying to get logs from node ip-172-20-53-92.us-west-2.compute.internal pod exec-volume-test-preprovisionedpv-bdmq container exec-container-preprovisionedpv-bdmq: <nil> [1mSTEP[0m: delete the pod Jun 2 21:16:06.021: INFO: Waiting for pod exec-volume-test-preprovisionedpv-bdmq to disappear Jun 2 21:16:06.087: INFO: Pod exec-volume-test-preprovisionedpv-bdmq no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-preprovisionedpv-bdmq Jun 2 21:16:06.087: INFO: Deleting pod "exec-volume-test-preprovisionedpv-bdmq" in namespace "volume-577" [1mSTEP[0m: Deleting pv and pvc Jun 2 21:16:06.151: INFO: Deleting PersistentVolumeClaim "pvc-qm5dz" Jun 2 21:16:06.219: INFO: Deleting PersistentVolume "aws-nw5c7" Jun 2 21:16:06.443: INFO: Couldn't delete PD "aws://us-west-2a/vol-083105dd3ddab7e84", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-083105dd3ddab7e84 is currently attached to i-0b0929c1c7c02115b status code: 400, request id: 3fa9074c-22a7-4d27-a6ea-a73b7d1c515b Jun 2 21:16:11.807: INFO: Couldn't delete PD "aws://us-west-2a/vol-083105dd3ddab7e84", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-083105dd3ddab7e84 is currently attached to i-0b0929c1c7c02115b status code: 400, request id: 602d86d9-9658-435e-ab6e-bd00bc1e19c2 Jun 2 21:16:17.254: INFO: Successfully deleted PD "aws://us-west-2a/vol-083105dd3ddab7e84". [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 2 21:16:17.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "volume-577" for this suite. ... skipping 7 lines ... 
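Editor's note: the pre-provisioned runs above create the EBS volume first ("Successfully created a new PD"), then hand-build a PV/PVC pair around it and wait for both to reach Bound before starting the test pod. A sketch of what such a hand-built pair looks like for the in-tree kubernetes.io/aws-ebs plugin; the object names, size, and reclaim policy are illustrative, not the suite's actual manifests.

```go
package preprovisioned

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// prebuiltPVAndPVC wires an already-existing EBS volume (the "PD" the log
// just created) into a PersistentVolume, plus a claim that binds to it by
// name with an empty StorageClass so no dynamic provisioning happens.
func prebuiltPVAndPVC(volumeID, ns string) (*v1.PersistentVolume, *v1.PersistentVolumeClaim) {
	storageClass := ""
	pv := &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "aws-preprovisioned-pv"},
		Spec: v1.PersistentVolumeSpec{
			Capacity:                      v1.ResourceList{v1.ResourceStorage: resource.MustParse("1Gi")},
			AccessModes:                   []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			PersistentVolumeReclaimPolicy: v1.PersistentVolumeReclaimRetain,
			StorageClassName:              storageClass,
			PersistentVolumeSource: v1.PersistentVolumeSource{
				AWSElasticBlockStore: &v1.AWSElasticBlockStoreVolumeSource{
					VolumeID: volumeID, // e.g. "aws://us-west-2a/vol-083105dd3ddab7e84"
					FSType:   "ext4",
				},
			},
		},
	}
	pvc := &v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "aws-preprovisioned-pvc", Namespace: ns},
		Spec: v1.PersistentVolumeClaimSpec{
			AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			StorageClassName: &storageClass,
			VolumeName:       pv.Name,
			Resources: v1.ResourceRequirements{
				Requests: v1.ResourceList{v1.ResourceStorage: resource.MustParse("1Gi")},
			},
		},
	}
	return pv, pvc
}
```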
[Testpattern: Pre-provisioned PV (ext4)] volumes [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should allow exec of files on the volume [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196[0m [90m------------------------------[0m [0m[ebs-csi-migration] EBS CSI Migration[0m [90m[Driver: aws][0m [0m[Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode[0m [1mshould fail to use a volume in a pod with mismatched mode [Slow][0m [37m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297[0m [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 2 21:15:57.843: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-25979.k8s.local.kops.kubeconfig [1mSTEP[0m: Building a namespace api object, basename volumemode [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should fail to use a volume in a pod with mismatched mode [Slow] /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297 Jun 2 21:15:58.154: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics [1mSTEP[0m: creating a test aws volume Jun 2 21:15:58.430: INFO: Successfully created a new PD: "aws://us-west-2a/vol-0904eb6bfdc46b5a6". Jun 2 21:15:58.430: INFO: Creating resource for pre-provisioned PV Jun 2 21:15:58.430: INFO: Creating PVC and PV ... skipping 6 lines ... Jun 2 21:16:04.911: INFO: PersistentVolumeClaim pvc-bcf96 found but phase is Pending instead of Bound. Jun 2 21:16:07.003: INFO: PersistentVolumeClaim pvc-bcf96 found but phase is Pending instead of Bound. Jun 2 21:16:09.066: INFO: PersistentVolumeClaim pvc-bcf96 found and phase=Bound (10.413584429s) Jun 2 21:16:09.066: INFO: Waiting up to 3m0s for PersistentVolume aws-bp842 to have phase Bound Jun 2 21:16:09.128: INFO: PersistentVolume aws-bp842 found and phase=Bound (62.510492ms) [1mSTEP[0m: Creating pod [1mSTEP[0m: Waiting for the pod to fail Jun 2 21:16:11.513: INFO: Deleting pod "pod-5b5235c4-c0ce-4f62-a319-661d00dedbe8" in namespace "volumemode-5967" Jun 2 21:16:11.576: INFO: Wait up to 5m0s for pod "pod-5b5235c4-c0ce-4f62-a319-661d00dedbe8" to be fully deleted [1mSTEP[0m: Deleting pv and pvc Jun 2 21:16:23.702: INFO: Deleting PersistentVolumeClaim "pvc-bcf96" Jun 2 21:16:23.765: INFO: Deleting PersistentVolume "aws-bp842" Jun 2 21:16:24.019: INFO: Successfully deleted PD "aws://us-west-2a/vol-0904eb6bfdc46b5a6". ... skipping 7 lines ... 
[ebs-csi-migration] EBS CSI Migration [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85[0m [Driver: aws] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91[0m [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should fail to use a volume in a pod with mismatched mode [Slow] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297[0m [90m------------------------------[0m [36mS[0m [90m------------------------------[0m [0m[ebs-csi-migration] EBS CSI Migration[0m [90m[Driver: aws][0m [0m[Testpattern: Dynamic PV (block volmode)] volumeMode[0m [1mshould fail to use a volume in a pod with mismatched mode [Slow][0m [37m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 2 21:16:24.150: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-25979.k8s.local.kops.kubeconfig [1mSTEP[0m: Building a namespace api object, basename volumemode [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should fail to use a volume in a pod with mismatched mode [Slow] /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297 Jun 2 21:16:24.459: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics Jun 2 21:16:24.459: INFO: Creating resource for dynamic PV Jun 2 21:16:24.459: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass volumemode-2904x46w8 [1mSTEP[0m: creating a claim [1mSTEP[0m: Creating pod [1mSTEP[0m: Waiting for the pod to fail Jun 2 21:16:30.920: INFO: Deleting pod "pod-f5862356-72e6-4bd8-a5be-d027ed8605f2" in namespace "volumemode-2904" Jun 2 21:16:30.992: INFO: Wait up to 5m0s for pod "pod-f5862356-72e6-4bd8-a5be-d027ed8605f2" to be fully deleted [1mSTEP[0m: Deleting pvc Jun 2 21:16:35.243: INFO: Deleting PersistentVolumeClaim "awsmk6sq" Jun 2 21:16:35.307: INFO: Waiting up to 5m0s for PersistentVolume pvc-a054d822-5767-4401-bade-51138e1e7efc to get deleted Jun 2 21:16:35.369: INFO: PersistentVolume pvc-a054d822-5767-4401-bade-51138e1e7efc found and phase=Released (61.873352ms) ... skipping 9 lines ... 
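Editor's note: both "mismatched mode" specs above provision a claim in one volumeMode and then consume it the other way, expecting the pod to fail rather than start ("Waiting for the pod to fail"). A hedged sketch of the shape of that mismatch, not the suite's own pod builder: a block-mode claim consumed through volumeMounts (filesystem semantics) instead of volumeDevices, which kubelet refuses to set up. The claim name and image are placeholders.

```go
package volumemodesketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// mismatchedModePod references a PVC that was provisioned with
// volumeMode: Block but asks for a filesystem mount of it, so the pod is
// expected to fail to start, which is what the tests above wait for.
func mismatchedModePod(claimName string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "mismatched-mode-pod"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
				// Filesystem-style mount of a raw block claim -> expected to fail.
				VolumeMounts: []v1.VolumeMount{{Name: "data", MountPath: "/data"}},
			}},
			Volumes: []v1.Volume{{
				Name: "data",
				VolumeSource: v1.VolumeSource{
					PersistentVolumeClaim: &v1.PersistentVolumeClaimVolumeSource{ClaimName: claimName},
				},
			}},
		},
	}
}
```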
[ebs-csi-migration] EBS CSI Migration [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85[0m [Driver: aws] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91[0m [Testpattern: Dynamic PV (block volmode)] volumeMode [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should fail to use a volume in a pod with mismatched mode [Slow] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 Jun 2 21:16:40.626: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath ... skipping 58 lines ... Jun 2 21:16:17.720: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass volume-4339kqxp [1mSTEP[0m: creating a claim Jun 2 21:16:17.784: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil [1mSTEP[0m: Creating pod exec-volume-test-dynamicpv-v896 [1mSTEP[0m: Creating a pod to test exec-volume-test Jun 2 21:16:17.981: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-v896" in namespace "volume-433" to be "Succeeded or Failed" Jun 2 21:16:18.046: INFO: Pod "exec-volume-test-dynamicpv-v896": Phase="Pending", Reason="", readiness=false. Elapsed: 64.537809ms Jun 2 21:16:20.111: INFO: Pod "exec-volume-test-dynamicpv-v896": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129439302s Jun 2 21:16:22.176: INFO: Pod "exec-volume-test-dynamicpv-v896": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194968481s Jun 2 21:16:24.241: INFO: Pod "exec-volume-test-dynamicpv-v896": Phase="Pending", Reason="", readiness=false. Elapsed: 6.25951792s Jun 2 21:16:26.306: INFO: Pod "exec-volume-test-dynamicpv-v896": Phase="Pending", Reason="", readiness=false. Elapsed: 8.324895775s Jun 2 21:16:28.371: INFO: Pod "exec-volume-test-dynamicpv-v896": Phase="Pending", Reason="", readiness=false. Elapsed: 10.389429398s Jun 2 21:16:30.435: INFO: Pod "exec-volume-test-dynamicpv-v896": Phase="Pending", Reason="", readiness=false. Elapsed: 12.453868626s Jun 2 21:16:32.501: INFO: Pod "exec-volume-test-dynamicpv-v896": Phase="Pending", Reason="", readiness=false. Elapsed: 14.519602433s Jun 2 21:16:34.566: INFO: Pod "exec-volume-test-dynamicpv-v896": Phase="Pending", Reason="", readiness=false. Elapsed: 16.584422397s Jun 2 21:16:36.632: INFO: Pod "exec-volume-test-dynamicpv-v896": Phase="Pending", Reason="", readiness=false. Elapsed: 18.650843068s Jun 2 21:16:38.697: INFO: Pod "exec-volume-test-dynamicpv-v896": Phase="Pending", Reason="", readiness=false. Elapsed: 20.71581892s Jun 2 21:16:40.788: INFO: Pod "exec-volume-test-dynamicpv-v896": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.807065636s [1mSTEP[0m: Saw pod success Jun 2 21:16:40.788: INFO: Pod "exec-volume-test-dynamicpv-v896" satisfied condition "Succeeded or Failed" Jun 2 21:16:40.852: INFO: Trying to get logs from node ip-172-20-53-92.us-west-2.compute.internal pod exec-volume-test-dynamicpv-v896 container exec-container-dynamicpv-v896: <nil> [1mSTEP[0m: delete the pod Jun 2 21:16:40.989: INFO: Waiting for pod exec-volume-test-dynamicpv-v896 to disappear Jun 2 21:16:41.053: INFO: Pod exec-volume-test-dynamicpv-v896 no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-dynamicpv-v896 Jun 2 21:16:41.053: INFO: Deleting pod "exec-volume-test-dynamicpv-v896" in namespace "volume-433" ... skipping 1025 lines ... Jun 2 21:18:44.031: INFO: Waiting for pod aws-client to disappear Jun 2 21:18:44.095: INFO: Pod aws-client no longer exists [1mSTEP[0m: cleaning the environment after aws [1mSTEP[0m: Deleting pv and pvc Jun 2 21:18:44.095: INFO: Deleting PersistentVolumeClaim "pvc-s9z6x" Jun 2 21:18:44.188: INFO: Deleting PersistentVolume "aws-mz8hg" Jun 2 21:18:44.404: INFO: Couldn't delete PD "aws://us-west-2a/vol-00e701d919b05a5bb", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-00e701d919b05a5bb is currently attached to i-0b0929c1c7c02115b status code: 400, request id: 8b3cb497-0dff-4e4a-809c-52c0a52e7737 Jun 2 21:18:49.840: INFO: Successfully deleted PD "aws://us-west-2a/vol-00e701d919b05a5bb". [AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 2 21:18:49.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "volume-4111" for this suite. ... skipping 133 lines ... Jun 2 21:18:38.796: INFO: PersistentVolumeClaim pvc-mj7pc found but phase is Pending instead of Bound. Jun 2 21:18:40.864: INFO: PersistentVolumeClaim pvc-mj7pc found and phase=Bound (8.353031084s) Jun 2 21:18:40.864: INFO: Waiting up to 3m0s for PersistentVolume aws-q49sf to have phase Bound Jun 2 21:18:40.931: INFO: PersistentVolume aws-q49sf found and phase=Bound (66.686402ms) [1mSTEP[0m: Creating pod exec-volume-test-preprovisionedpv-2wp7 [1mSTEP[0m: Creating a pod to test exec-volume-test Jun 2 21:18:41.133: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-2wp7" in namespace "volume-4423" to be "Succeeded or Failed" Jun 2 21:18:41.200: INFO: Pod "exec-volume-test-preprovisionedpv-2wp7": Phase="Pending", Reason="", readiness=false. Elapsed: 67.069796ms Jun 2 21:18:43.269: INFO: Pod "exec-volume-test-preprovisionedpv-2wp7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135513058s Jun 2 21:18:45.337: INFO: Pod "exec-volume-test-preprovisionedpv-2wp7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.203887948s Jun 2 21:18:47.407: INFO: Pod "exec-volume-test-preprovisionedpv-2wp7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.273229872s [1mSTEP[0m: Saw pod success Jun 2 21:18:47.407: INFO: Pod "exec-volume-test-preprovisionedpv-2wp7" satisfied condition "Succeeded or Failed" Jun 2 21:18:47.474: INFO: Trying to get logs from node ip-172-20-53-92.us-west-2.compute.internal pod exec-volume-test-preprovisionedpv-2wp7 container exec-container-preprovisionedpv-2wp7: <nil> [1mSTEP[0m: delete the pod Jun 2 21:18:47.626: INFO: Waiting for pod exec-volume-test-preprovisionedpv-2wp7 to disappear Jun 2 21:18:47.692: INFO: Pod exec-volume-test-preprovisionedpv-2wp7 no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-preprovisionedpv-2wp7 Jun 2 21:18:47.692: INFO: Deleting pod "exec-volume-test-preprovisionedpv-2wp7" in namespace "volume-4423" [1mSTEP[0m: Deleting pv and pvc Jun 2 21:18:47.760: INFO: Deleting PersistentVolumeClaim "pvc-mj7pc" Jun 2 21:18:47.829: INFO: Deleting PersistentVolume "aws-q49sf" Jun 2 21:18:48.057: INFO: Couldn't delete PD "aws://us-west-2a/vol-05b549720202e00f6", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-05b549720202e00f6 is currently attached to i-0b0929c1c7c02115b status code: 400, request id: a04fe51c-7824-41d1-8152-d7671b44e07a Jun 2 21:18:53.459: INFO: Couldn't delete PD "aws://us-west-2a/vol-05b549720202e00f6", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-05b549720202e00f6 is currently attached to i-0b0929c1c7c02115b status code: 400, request id: 2dbf2094-d85c-4230-b556-e0446c1630d5 Jun 2 21:18:58.872: INFO: Successfully deleted PD "aws://us-west-2a/vol-05b549720202e00f6". [AfterEach] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 2 21:18:58.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "volume-4423" for this suite. ... skipping 123 lines ... Jun 2 21:19:13.481: INFO: Waiting for pod aws-client to disappear Jun 2 21:19:13.549: INFO: Pod aws-client no longer exists [1mSTEP[0m: cleaning the environment after aws [1mSTEP[0m: Deleting pv and pvc Jun 2 21:19:13.549: INFO: Deleting PersistentVolumeClaim "pvc-vs4bz" Jun 2 21:19:13.619: INFO: Deleting PersistentVolume "aws-jtwf4" Jun 2 21:19:13.841: INFO: Couldn't delete PD "aws://us-west-2a/vol-0ef8f22bddb2f3cbc", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0ef8f22bddb2f3cbc is currently attached to i-0b0929c1c7c02115b status code: 400, request id: 2432169d-7775-43b8-8264-b25fa362a3f1 Jun 2 21:19:19.270: INFO: Successfully deleted PD "aws://us-west-2a/vol-0ef8f22bddb2f3cbc". [AfterEach] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 2 21:19:19.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "volume-7397" for this suite. ... skipping 102 lines ... 
Jun 2 21:19:19.539: INFO: Pod aws-client still exists Jun 2 21:19:21.477: INFO: Waiting for pod aws-client to disappear Jun 2 21:19:21.541: INFO: Pod aws-client still exists Jun 2 21:19:23.478: INFO: Waiting for pod aws-client to disappear Jun 2 21:19:23.539: INFO: Pod aws-client no longer exists [1mSTEP[0m: cleaning the environment after aws Jun 2 21:19:23.687: INFO: Couldn't delete PD "aws://us-west-2a/vol-06773e95bed93a393", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-06773e95bed93a393 is currently attached to i-0b0929c1c7c02115b status code: 400, request id: f8ef1fea-74f0-4502-99b4-01e9e3c0a351 Jun 2 21:19:29.098: INFO: Couldn't delete PD "aws://us-west-2a/vol-06773e95bed93a393", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-06773e95bed93a393 is currently attached to i-0b0929c1c7c02115b status code: 400, request id: fe0bb93c-fea5-436c-a7f7-6808c6ab0530 Jun 2 21:19:34.554: INFO: Successfully deleted PD "aws://us-west-2a/vol-06773e95bed93a393". [AfterEach] [Testpattern: Inline-volume (xfs)][Slow] volumes /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 2 21:19:34.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "volume-5241" for this suite. ... skipping 268 lines ... /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:127 [90m------------------------------[0m [36mS[0m [90m------------------------------[0m [0m[ebs-csi-migration] EBS CSI Migration[0m [90m[Driver: aws][0m [0m[Testpattern: Dynamic PV (default fs)] subPath[0m [1mshould fail if subpath with backstepping is outside the volume [Slow][LinuxOnly][0m [37m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 2 21:19:34.695: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-25979.k8s.local.kops.kubeconfig [1mSTEP[0m: Building a namespace api object, basename provisioning [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278 Jun 2 21:19:35.020: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics Jun 2 21:19:35.020: INFO: Creating resource for dynamic PV Jun 2 21:19:35.020: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass provisioning-1352jcsrg [1mSTEP[0m: creating a claim Jun 2 21:19:35.083: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-xdps [1mSTEP[0m: Checking for subpath error in container status Jun 2 21:19:45.401: INFO: Deleting pod "pod-subpath-test-dynamicpv-xdps" in 
namespace "provisioning-1352" Jun 2 21:19:45.466: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-xdps" to be fully deleted [1mSTEP[0m: Deleting pod Jun 2 21:19:53.592: INFO: Deleting pod "pod-subpath-test-dynamicpv-xdps" in namespace "provisioning-1352" [1mSTEP[0m: Deleting pvc Jun 2 21:19:53.777: INFO: Deleting PersistentVolumeClaim "awsr6rxw" ... skipping 12 lines ... [ebs-csi-migration] EBS CSI Migration [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85[0m [Driver: aws] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91[0m [Testpattern: Dynamic PV (default fs)] subPath [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278[0m [90m------------------------------[0m [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 ... skipping 14 lines ... [36mDriver aws doesn't support CSIInlineVolume -- skipping[0m /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116 [90m------------------------------[0m [0m[ebs-csi-migration] EBS CSI Migration[0m [90m[Driver: aws][0m [0m[Testpattern: Pre-provisioned PV (block volmode)] volumeMode[0m [1mshould fail to use a volume in a pod with mismatched mode [Slow][0m [37m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297[0m [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 2 21:19:44.589: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-25979.k8s.local.kops.kubeconfig [1mSTEP[0m: Building a namespace api object, basename volumemode [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should fail to use a volume in a pod with mismatched mode [Slow] /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297 Jun 2 21:19:44.911: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics [1mSTEP[0m: creating a test aws volume Jun 2 21:19:45.207: INFO: Successfully created a new PD: "aws://us-west-2a/vol-02236e139b7bd210c". Jun 2 21:19:45.207: INFO: Creating resource for pre-provisioned PV Jun 2 21:19:45.207: INFO: Creating PVC and PV ... skipping 5 lines ... Jun 2 21:19:49.640: INFO: PersistentVolumeClaim pvc-ksvwk found but phase is Pending instead of Bound. Jun 2 21:19:51.705: INFO: PersistentVolumeClaim pvc-ksvwk found but phase is Pending instead of Bound. 
Jun 2 21:19:53.770: INFO: PersistentVolumeClaim pvc-ksvwk found and phase=Bound (8.338750029s) Jun 2 21:19:53.771: INFO: Waiting up to 3m0s for PersistentVolume aws-lv4fs to have phase Bound Jun 2 21:19:53.835: INFO: PersistentVolume aws-lv4fs found and phase=Bound (64.175961ms) [1mSTEP[0m: Creating pod [1mSTEP[0m: Waiting for the pod to fail Jun 2 21:19:56.221: INFO: Deleting pod "pod-6515ee35-8246-4bae-a062-303fdf06f836" in namespace "volumemode-2679" Jun 2 21:19:56.287: INFO: Wait up to 5m0s for pod "pod-6515ee35-8246-4bae-a062-303fdf06f836" to be fully deleted [1mSTEP[0m: Deleting pv and pvc Jun 2 21:20:04.415: INFO: Deleting PersistentVolumeClaim "pvc-ksvwk" Jun 2 21:20:04.483: INFO: Deleting PersistentVolume "aws-lv4fs" Jun 2 21:20:04.755: INFO: Successfully deleted PD "aws://us-west-2a/vol-02236e139b7bd210c". ... skipping 7 lines ... [ebs-csi-migration] EBS CSI Migration [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85[0m [Driver: aws] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91[0m [Testpattern: Pre-provisioned PV (block volmode)] volumeMode [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should fail to use a volume in a pod with mismatched mode [Slow] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185 ... skipping 212 lines ... [ebs-csi-migration] EBS CSI Migration [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85[0m [Driver: aws] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267[0m [36mDriver supports dynamic provisioning, skipping PreprovisionedPV pattern[0m /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:236 [90m------------------------------[0m ... skipping 94 lines ... 
Jun 2 21:20:13.741: INFO: Waiting for pod aws-client to disappear Jun 2 21:20:13.810: INFO: Pod aws-client no longer exists [1mSTEP[0m: cleaning the environment after aws [1mSTEP[0m: Deleting pv and pvc Jun 2 21:20:13.810: INFO: Deleting PersistentVolumeClaim "pvc-j98gf" Jun 2 21:20:13.879: INFO: Deleting PersistentVolume "aws-8qhxt" Jun 2 21:20:14.115: INFO: Couldn't delete PD "aws://us-west-2a/vol-08e0e56323398207b", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-08e0e56323398207b is currently attached to i-0b0929c1c7c02115b status code: 400, request id: 9470a5f9-1c39-48f5-81c6-b12df4671ea8 Jun 2 21:20:19.578: INFO: Successfully deleted PD "aws://us-west-2a/vol-08e0e56323398207b". [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 2 21:20:19.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "volume-3351" for this suite. ... skipping 28 lines ... Jun 2 21:20:04.543: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics [1mSTEP[0m: creating a test aws volume Jun 2 21:20:04.824: INFO: Successfully created a new PD: "aws://us-west-2a/vol-026dfd7a243b5555c". Jun 2 21:20:04.824: INFO: Creating resource for inline volume [1mSTEP[0m: Creating pod exec-volume-test-inlinevolume-ktjb [1mSTEP[0m: Creating a pod to test exec-volume-test Jun 2 21:20:04.898: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-ktjb" in namespace "volume-2247" to be "Succeeded or Failed" Jun 2 21:20:04.960: INFO: Pod "exec-volume-test-inlinevolume-ktjb": Phase="Pending", Reason="", readiness=false. Elapsed: 61.955416ms Jun 2 21:20:07.025: INFO: Pod "exec-volume-test-inlinevolume-ktjb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126187543s Jun 2 21:20:09.089: INFO: Pod "exec-volume-test-inlinevolume-ktjb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.190481072s Jun 2 21:20:11.151: INFO: Pod "exec-volume-test-inlinevolume-ktjb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.252705951s Jun 2 21:20:13.213: INFO: Pod "exec-volume-test-inlinevolume-ktjb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.314837265s Jun 2 21:20:15.276: INFO: Pod "exec-volume-test-inlinevolume-ktjb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.377901491s [1mSTEP[0m: Saw pod success Jun 2 21:20:15.276: INFO: Pod "exec-volume-test-inlinevolume-ktjb" satisfied condition "Succeeded or Failed" Jun 2 21:20:15.339: INFO: Trying to get logs from node ip-172-20-53-92.us-west-2.compute.internal pod exec-volume-test-inlinevolume-ktjb container exec-container-inlinevolume-ktjb: <nil> [1mSTEP[0m: delete the pod Jun 2 21:20:15.480: INFO: Waiting for pod exec-volume-test-inlinevolume-ktjb to disappear Jun 2 21:20:15.542: INFO: Pod exec-volume-test-inlinevolume-ktjb no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-inlinevolume-ktjb Jun 2 21:20:15.542: INFO: Deleting pod "exec-volume-test-inlinevolume-ktjb" in namespace "volume-2247" Jun 2 21:20:15.763: INFO: Couldn't delete PD "aws://us-west-2a/vol-026dfd7a243b5555c", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-026dfd7a243b5555c is currently attached to i-0b0929c1c7c02115b status code: 400, request id: c5041665-475e-4063-9cdc-e842fc80f7f9 Jun 2 21:20:21.139: INFO: Couldn't delete PD "aws://us-west-2a/vol-026dfd7a243b5555c", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-026dfd7a243b5555c is currently attached to i-0b0929c1c7c02115b status code: 400, request id: f0a08e1a-8eae-4ab3-9ffc-0312ed99871d Jun 2 21:20:26.558: INFO: Couldn't delete PD "aws://us-west-2a/vol-026dfd7a243b5555c", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-026dfd7a243b5555c is currently attached to i-0b0929c1c7c02115b status code: 400, request id: 5ae44ea1-286d-4766-8a0c-d40069ca6d58 Jun 2 21:20:32.006: INFO: Successfully deleted PD "aws://us-west-2a/vol-026dfd7a243b5555c". [AfterEach] [Testpattern: Inline-volume (ext4)] volumes /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 2 21:20:32.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "volume-2247" for this suite. ... skipping 30 lines ... Jun 2 21:20:06.227: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass volume-6542qwrfm [1mSTEP[0m: creating a claim Jun 2 21:20:06.291: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil [1mSTEP[0m: Creating pod exec-volume-test-dynamicpv-9qcx [1mSTEP[0m: Creating a pod to test exec-volume-test Jun 2 21:20:06.496: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-9qcx" in namespace "volume-6542" to be "Succeeded or Failed" Jun 2 21:20:06.563: INFO: Pod "exec-volume-test-dynamicpv-9qcx": Phase="Pending", Reason="", readiness=false. Elapsed: 66.438864ms Jun 2 21:20:08.628: INFO: Pod "exec-volume-test-dynamicpv-9qcx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131646501s Jun 2 21:20:10.692: INFO: Pod "exec-volume-test-dynamicpv-9qcx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.195678152s Jun 2 21:20:12.756: INFO: Pod "exec-volume-test-dynamicpv-9qcx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.259968177s Jun 2 21:20:14.821: INFO: Pod "exec-volume-test-dynamicpv-9qcx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.324631743s Jun 2 21:20:16.886: INFO: Pod "exec-volume-test-dynamicpv-9qcx": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.389148525s [1mSTEP[0m: Saw pod success Jun 2 21:20:16.886: INFO: Pod "exec-volume-test-dynamicpv-9qcx" satisfied condition "Succeeded or Failed" Jun 2 21:20:16.951: INFO: Trying to get logs from node ip-172-20-53-92.us-west-2.compute.internal pod exec-volume-test-dynamicpv-9qcx container exec-container-dynamicpv-9qcx: <nil> [1mSTEP[0m: delete the pod Jun 2 21:20:17.130: INFO: Waiting for pod exec-volume-test-dynamicpv-9qcx to disappear Jun 2 21:20:17.193: INFO: Pod exec-volume-test-dynamicpv-9qcx no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-dynamicpv-9qcx Jun 2 21:20:17.193: INFO: Deleting pod "exec-volume-test-dynamicpv-9qcx" in namespace "volume-6542" ... skipping 25 lines ... should allow exec of files on the volume [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196[0m [90m------------------------------[0m [36mS[0m [90m------------------------------[0m [0m[ebs-csi-migration] EBS CSI Migration[0m [90m[Driver: aws][0m [0m[Testpattern: Dynamic PV (default fs)] subPath[0m [1mshould fail if subpath directory is outside the volume [Slow][LinuxOnly][0m [37m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 2 21:20:19.724: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-25979.k8s.local.kops.kubeconfig [1mSTEP[0m: Building a namespace api object, basename provisioning [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should fail if subpath directory is outside the volume [Slow][LinuxOnly] /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240 Jun 2 21:20:20.065: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics Jun 2 21:20:20.065: INFO: Creating resource for dynamic PV Jun 2 21:20:20.065: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass provisioning-28849kpn7 [1mSTEP[0m: creating a claim Jun 2 21:20:20.135: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-xq5g [1mSTEP[0m: Checking for subpath error in container status Jun 2 21:20:34.499: INFO: Deleting pod "pod-subpath-test-dynamicpv-xq5g" in namespace "provisioning-2884" Jun 2 21:20:34.569: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-xq5g" to be fully deleted [1mSTEP[0m: Deleting pod Jun 2 21:20:44.706: INFO: Deleting pod "pod-subpath-test-dynamicpv-xq5g" in namespace "provisioning-2884" [1mSTEP[0m: Deleting pvc Jun 2 21:20:44.913: INFO: Deleting PersistentVolumeClaim "aws845tq" ... skipping 14 lines ... 
[ebs-csi-migration] EBS CSI Migration [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85[0m [Driver: aws] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91[0m [Testpattern: Dynamic PV (default fs)] subPath [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should fail if subpath directory is outside the volume [Slow][LinuxOnly] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240[0m [90m------------------------------[0m [36mS[0m[36mS[0m [90m------------------------------[0m [0m[ebs-csi-migration] EBS CSI Migration[0m [90m[Driver: aws][0m [0m[Testpattern: Dynamic PV (default fs)] subPath[0m [1mshould fail if non-existent subpath is outside the volume [Slow][LinuxOnly][0m [37m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185 [1mSTEP[0m: Creating a kubernetes client Jun 2 21:20:32.137: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-25979.k8s.local.kops.kubeconfig [1mSTEP[0m: Building a namespace api object, basename provisioning [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [It] should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267 Jun 2 21:20:32.457: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics Jun 2 21:20:32.457: INFO: Creating resource for dynamic PV Jun 2 21:20:32.457: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass provisioning-43147s556 [1mSTEP[0m: creating a claim Jun 2 21:20:32.526: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-ptzk [1mSTEP[0m: Checking for subpath error in container status Jun 2 21:20:48.842: INFO: Deleting pod "pod-subpath-test-dynamicpv-ptzk" in namespace "provisioning-4314" Jun 2 21:20:48.906: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-ptzk" to be fully deleted [1mSTEP[0m: Deleting pod Jun 2 21:20:55.030: INFO: Deleting pod "pod-subpath-test-dynamicpv-ptzk" in namespace "provisioning-4314" [1mSTEP[0m: Deleting pvc Jun 2 21:20:55.218: INFO: Deleting PersistentVolumeClaim "awsdjpf5" ... skipping 12 lines ... 
[ebs-csi-migration] EBS CSI Migration [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85[0m [Driver: aws] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91[0m [Testpattern: Dynamic PV (default fs)] subPath [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50[0m should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [90m/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267[0m [90m------------------------------[0m [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 ... skipping 179 lines ... Jun 2 21:21:06.539: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass provisioning-5869l64xb [1mSTEP[0m: creating a claim Jun 2 21:21:06.603: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-2krm [1mSTEP[0m: Creating a pod to test subpath Jun 2 21:21:06.794: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-2krm" in namespace "provisioning-5869" to be "Succeeded or Failed" Jun 2 21:21:06.856: INFO: Pod "pod-subpath-test-dynamicpv-2krm": Phase="Pending", Reason="", readiness=false. Elapsed: 61.731525ms Jun 2 21:21:08.920: INFO: Pod "pod-subpath-test-dynamicpv-2krm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125253523s Jun 2 21:21:10.983: INFO: Pod "pod-subpath-test-dynamicpv-2krm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.188765531s Jun 2 21:21:13.048: INFO: Pod "pod-subpath-test-dynamicpv-2krm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.253491409s Jun 2 21:21:15.111: INFO: Pod "pod-subpath-test-dynamicpv-2krm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.31653167s Jun 2 21:21:17.174: INFO: Pod "pod-subpath-test-dynamicpv-2krm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.379582645s Jun 2 21:21:19.238: INFO: Pod "pod-subpath-test-dynamicpv-2krm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.44374923s [1mSTEP[0m: Saw pod success Jun 2 21:21:19.238: INFO: Pod "pod-subpath-test-dynamicpv-2krm" satisfied condition "Succeeded or Failed" Jun 2 21:21:19.300: INFO: Trying to get logs from node ip-172-20-53-92.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-2krm container test-container-volume-dynamicpv-2krm: <nil> [1mSTEP[0m: delete the pod Jun 2 21:21:19.434: INFO: Waiting for pod pod-subpath-test-dynamicpv-2krm to disappear Jun 2 21:21:19.496: INFO: Pod pod-subpath-test-dynamicpv-2krm no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-2krm Jun 2 21:21:19.496: INFO: Deleting pod "pod-subpath-test-dynamicpv-2krm" in namespace "provisioning-5869" ... skipping 229 lines ... 
Jun 2 21:21:35.579: INFO: Creating resource for dynamic PV
Jun 2 21:21:35.579: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi}
STEP: creating a StorageClass volume-expand-6308xgczc
STEP: creating a claim
STEP: Expanding non-expandable pvc
Jun 2 21:21:35.770: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>} BinarySI}
Jun 2 21:21:35.900: INFO: Error updating pvc awsg2m7m: PersistentVolumeClaim "awsg2m7m" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
    AccessModes: {"ReadWriteOnce"},
    Selector: nil,
    Resources: core.ResourceRequirements{
      Limits: nil,
-     Requests: core.ResourceList{
... skipping 5 lines ...
    },
    VolumeName: "",
    StorageClassName: &"volume-expand-6308xgczc",
    ... // 2 identical fields
  }
Jun 2 21:21:38.038: INFO: Error updating pvc awsg2m7m: PersistentVolumeClaim "awsg2m7m" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
    AccessModes: {"ReadWriteOnce"},
    Selector: nil,
    Resources: core.ResourceRequirements{
      Limits: nil,
-     Requests: core.ResourceList{
... skipping 5 lines ...
    },
    VolumeName: "",
    StorageClassName: &"volume-expand-6308xgczc",
    ... // 2 identical fields
  }
Jun 2 21:21:40.026: INFO: Error updating pvc awsg2m7m: PersistentVolumeClaim "awsg2m7m" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
    AccessModes: {"ReadWriteOnce"},
    Selector: nil,
    Resources: core.ResourceRequirements{
      Limits: nil,
-     Requests: core.ResourceList{
... skipping 5 lines ...
    },
    VolumeName: "",
    StorageClassName: &"volume-expand-6308xgczc",
    ... // 2 identical fields
  }
Jun 2 21:21:42.026: INFO: Error updating pvc awsg2m7m: PersistentVolumeClaim "awsg2m7m" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
    AccessModes: {"ReadWriteOnce"},
    Selector: nil,
    Resources: core.ResourceRequirements{
      Limits: nil,
-     Requests: core.ResourceList{
... skipping 5 lines ...
    },
    VolumeName: "",
    StorageClassName: &"volume-expand-6308xgczc",
    ... // 2 identical fields
  }
Jun 2 21:21:44.035: INFO: Error updating pvc awsg2m7m: PersistentVolumeClaim "awsg2m7m" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
    AccessModes: {"ReadWriteOnce"},
    Selector: nil,
    Resources: core.ResourceRequirements{
      Limits: nil,
-     Requests: core.ResourceList{
... skipping 5 lines ...
    },
    VolumeName: "",
    StorageClassName: &"volume-expand-6308xgczc",
    ... // 2 identical fields
  }
Jun 2 21:21:46.035: INFO: Error updating pvc awsg2m7m: PersistentVolumeClaim "awsg2m7m" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
    AccessModes: {"ReadWriteOnce"},
    Selector: nil,
    Resources: core.ResourceRequirements{
      Limits: nil,
-     Requests: core.ResourceList{
... skipping 5 lines ...
    },
    VolumeName: "",
    StorageClassName: &"volume-expand-6308xgczc",
    ...
    // 2 identical fields
  }
Jun 2 21:21:48.027: INFO: Error updating pvc awsg2m7m: PersistentVolumeClaim "awsg2m7m" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
    AccessModes: {"ReadWriteOnce"},
    Selector: nil,
    Resources: core.ResourceRequirements{
      Limits: nil,
-     Requests: core.ResourceList{
... skipping 5 lines ...
    },
    VolumeName: "",
    StorageClassName: &"volume-expand-6308xgczc",
    ... // 2 identical fields
  }
Jun 2 21:21:50.035: INFO: Error updating pvc awsg2m7m: PersistentVolumeClaim "awsg2m7m" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
    AccessModes: {"ReadWriteOnce"},
    Selector: nil,
    Resources: core.ResourceRequirements{
      Limits: nil,
-     Requests: core.ResourceList{
... skipping 5 lines ...
    },
    VolumeName: "",
    StorageClassName: &"volume-expand-6308xgczc",
    ... // 2 identical fields
  }
Jun 2 21:21:52.029: INFO: Error updating pvc awsg2m7m: PersistentVolumeClaim "awsg2m7m" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
    AccessModes: {"ReadWriteOnce"},
    Selector: nil,
    Resources: core.ResourceRequirements{
      Limits: nil,
-     Requests: core.ResourceList{
... skipping 5 lines ...
    },
    VolumeName: "",
    StorageClassName: &"volume-expand-6308xgczc",
    ... // 2 identical fields
  }
Jun 2 21:21:54.032: INFO: Error updating pvc awsg2m7m: PersistentVolumeClaim "awsg2m7m" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
    AccessModes: {"ReadWriteOnce"},
    Selector: nil,
    Resources: core.ResourceRequirements{
      Limits: nil,
-     Requests: core.ResourceList{
... skipping 5 lines ...
    },
    VolumeName: "",
    StorageClassName: &"volume-expand-6308xgczc",
    ... // 2 identical fields
  }
Jun 2 21:21:56.032: INFO: Error updating pvc awsg2m7m: PersistentVolumeClaim "awsg2m7m" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
    AccessModes: {"ReadWriteOnce"},
    Selector: nil,
    Resources: core.ResourceRequirements{
      Limits: nil,
-     Requests: core.ResourceList{
... skipping 5 lines ...
    },
    VolumeName: "",
    StorageClassName: &"volume-expand-6308xgczc",
    ... // 2 identical fields
  }
Jun 2 21:21:58.034: INFO: Error updating pvc awsg2m7m: PersistentVolumeClaim "awsg2m7m" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
    AccessModes: {"ReadWriteOnce"},
    Selector: nil,
    Resources: core.ResourceRequirements{
      Limits: nil,
-     Requests: core.ResourceList{
... skipping 5 lines ...
    },
    VolumeName: "",
    StorageClassName: &"volume-expand-6308xgczc",
    ... // 2 identical fields
  }
Jun 2 21:22:00.039: INFO: Error updating pvc awsg2m7m: PersistentVolumeClaim "awsg2m7m" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
    AccessModes: {"ReadWriteOnce"},
    Selector: nil,
    Resources: core.ResourceRequirements{
      Limits: nil,
-     Requests: core.ResourceList{
... skipping 5 lines ...
    },
    VolumeName: "",
    StorageClassName: &"volume-expand-6308xgczc",
    ...
    // 2 identical fields
  }
Jun 2 21:22:02.030: INFO: Error updating pvc awsg2m7m: PersistentVolumeClaim "awsg2m7m" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
    AccessModes: {"ReadWriteOnce"},
    Selector: nil,
    Resources: core.ResourceRequirements{
      Limits: nil,
-     Requests: core.ResourceList{
... skipping 5 lines ...
    },
    VolumeName: "",
    StorageClassName: &"volume-expand-6308xgczc",
    ... // 2 identical fields
  }
Jun 2 21:22:04.033: INFO: Error updating pvc awsg2m7m: PersistentVolumeClaim "awsg2m7m" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
    AccessModes: {"ReadWriteOnce"},
    Selector: nil,
    Resources: core.ResourceRequirements{
      Limits: nil,
-     Requests: core.ResourceList{
... skipping 5 lines ...
    },
    VolumeName: "",
    StorageClassName: &"volume-expand-6308xgczc",
    ... // 2 identical fields
  }
Jun 2 21:22:06.040: INFO: Error updating pvc awsg2m7m: PersistentVolumeClaim "awsg2m7m" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
    AccessModes: {"ReadWriteOnce"},
    Selector: nil,
    Resources: core.ResourceRequirements{
      Limits: nil,
-     Requests: core.ResourceList{
... skipping 5 lines ...
    },
    VolumeName: "",
    StorageClassName: &"volume-expand-6308xgczc",
    ... // 2 identical fields
  }
Jun 2 21:22:06.184: INFO: Error updating pvc awsg2m7m: PersistentVolumeClaim "awsg2m7m" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
    AccessModes: {"ReadWriteOnce"},
    Selector: nil,
    Resources: core.ResourceRequirements{
      Limits: nil,
-     Requests: core.ResourceList{
... skipping 184 lines ...
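The repeated Forbidden errors above are the expected result for this spec: the claim's StorageClass (volume-expand-6308xgczc) was created without allowVolumeExpansion, so the API server keeps rejecting the attempt to grow spec.resources.requests from 1Gi to 2Gi, since a claim's spec is otherwise immutable after creation. A hedged sketch of the update the test keeps retrying follows (the namespace and kubeconfig path are assumptions, not shown in the log; this is not the framework's actual code):

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path; the test uses the kops cluster kubeconfig shown earlier in the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Namespace "volume-expand-6308" is assumed from the StorageClass name; the claim name is from the log.
	pvc, err := cs.CoreV1().PersistentVolumeClaims("volume-expand-6308").Get(context.TODO(), "awsg2m7m", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Growing resources.requests is the only spec change a PVC can accept after
	// creation, and expansion is only honored when the claim's StorageClass sets
	// allowVolumeExpansion: true -- hence the Forbidden errors in the log.
	pvc.Spec.Resources.Requests[v1.ResourceStorage] = resource.MustParse("2Gi")
	if _, err := cs.CoreV1().PersistentVolumeClaims(pvc.Namespace).Update(context.TODO(), pvc, metav1.UpdateOptions{}); err != nil {
		fmt.Println("update rejected:", err)
	}
}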
Driver aws doesn't support ext3 -- skipping
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology
  should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 2 21:22:21.201: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-25979.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Jun 2 21:22:21.616: INFO: found topology map[topology.kubernetes.io/zone:us-west-2c]
Jun 2 21:22:21.616: INFO: found topology map[topology.kubernetes.io/zone:us-west-2a]
Jun 2 21:22:21.617: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Jun 2 21:22:21.617: INFO: Creating storage class object and pvc object for driver - sc: &StorageClass{ObjectMeta:{topology-9492qg288 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{{[{topology.kubernetes.io/zone [us-west-2a]}]},},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- topology-9492 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*topology-9492qg288,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Creating sc
... skipping 55 lines ...
Jun 2 21:22:05.487: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass provisioning-470767c7f
STEP: creating a claim
Jun 2 21:22:05.557: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-bm7t
STEP: Creating a pod to test subpath
Jun 2 21:22:05.769: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-bm7t" in namespace "provisioning-4707" to be "Succeeded or Failed"
Jun 2 21:22:05.839: INFO: Pod "pod-subpath-test-dynamicpv-bm7t": Phase="Pending", Reason="", readiness=false.
Elapsed: 69.750404ms
Jun 2 21:22:07.909: INFO: Pod "pod-subpath-test-dynamicpv-bm7t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139719595s
Jun 2 21:22:09.980: INFO: Pod "pod-subpath-test-dynamicpv-bm7t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.211241149s
Jun 2 21:22:12.050: INFO: Pod "pod-subpath-test-dynamicpv-bm7t": Phase="Pending", Reason="", readiness=false. Elapsed: 6.280622938s
Jun 2 21:22:14.120: INFO: Pod "pod-subpath-test-dynamicpv-bm7t": Phase="Pending", Reason="", readiness=false. Elapsed: 8.35083046s
Jun 2 21:22:16.191: INFO: Pod "pod-subpath-test-dynamicpv-bm7t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.421536646s
STEP: Saw pod success
Jun 2 21:22:16.191: INFO: Pod "pod-subpath-test-dynamicpv-bm7t" satisfied condition "Succeeded or Failed"
Jun 2 21:22:16.260: INFO: Trying to get logs from node ip-172-20-53-92.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-bm7t container test-container-subpath-dynamicpv-bm7t: <nil>
STEP: delete the pod
Jun 2 21:22:16.418: INFO: Waiting for pod pod-subpath-test-dynamicpv-bm7t to disappear
Jun 2 21:22:16.486: INFO: Pod pod-subpath-test-dynamicpv-bm7t no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-bm7t
Jun 2 21:22:16.486: INFO: Deleting pod "pod-subpath-test-dynamicpv-bm7t" in namespace "provisioning-4707"
... skipping 112 lines ...
Jun 2 21:22:19.543: INFO: Pod aws-client still exists
Jun 2 21:22:21.479: INFO: Waiting for pod aws-client to disappear
Jun 2 21:22:21.546: INFO: Pod aws-client still exists
Jun 2 21:22:23.479: INFO: Waiting for pod aws-client to disappear
Jun 2 21:22:23.544: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
Jun 2 21:22:23.691: INFO: Couldn't delete PD "aws://us-west-2a/vol-05f9a1e5807696d3e", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-05f9a1e5807696d3e is currently attached to i-0b0929c1c7c02115b status code: 400, request id: 3ea79e8b-a60f-48e2-8582-9f5fa1ea4df6
Jun 2 21:22:29.067: INFO: Couldn't delete PD "aws://us-west-2a/vol-05f9a1e5807696d3e", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-05f9a1e5807696d3e is currently attached to i-0b0929c1c7c02115b status code: 400, request id: 3704a07f-3382-477d-8b73-dfa34c407095
Jun 2 21:22:34.523: INFO: Successfully deleted PD "aws://us-west-2a/vol-05f9a1e5807696d3e".
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 2 21:22:34.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-1077" for this suite.
... skipping 271 lines ...
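The "Couldn't delete PD ... VolumeInUse ... sleeping 5s" entries above show the cleanup step retrying EBS volume deletion until the volume has finished detaching from the node. A standalone sketch of such a retry loop with the AWS SDK for Go follows (this is illustrative only and not the test framework's cleanup code; the volume ID is copied from the log):

package main

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")}))
	svc := ec2.New(sess)

	// The volume stays attached to the instance for a few seconds after the pod
	// goes away, so DeleteVolume returns VolumeInUse until the detach completes.
	volumeID := "vol-05f9a1e5807696d3e"
	for {
		_, err := svc.DeleteVolume(&ec2.DeleteVolumeInput{VolumeId: aws.String(volumeID)})
		if err == nil {
			fmt.Printf("Successfully deleted PD %q\n", volumeID)
			return
		}
		if aerr, ok := err.(awserr.Error); ok && aerr.Code() == "VolumeInUse" {
			fmt.Printf("Couldn't delete PD %q, sleeping 5s: %v\n", volumeID, err)
			time.Sleep(5 * time.Second)
			continue
		}
		panic(err)
	}
}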
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240
      Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:236
------------------------------
... skipping 82 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267
      Distro debian doesn't support ntfs -- skipping
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:127
------------------------------
... skipping 106 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256
      Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:236
------------------------------
... skipping 18 lines ...
Jun 2 21:22:46.055: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass provisioning-42nnsts
STEP: creating a claim
Jun 2 21:22:46.126: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-ggrq
STEP: Creating a pod to test atomic-volume-subpath
Jun 2 21:22:46.344: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-ggrq" in namespace "provisioning-42" to be "Succeeded or Failed"
Jun 2 21:22:46.427: INFO: Pod "pod-subpath-test-dynamicpv-ggrq": Phase="Pending", Reason="", readiness=false. Elapsed: 82.519219ms
Jun 2 21:22:48.495: INFO: Pod "pod-subpath-test-dynamicpv-ggrq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.151161516s
Jun 2 21:22:50.563: INFO: Pod "pod-subpath-test-dynamicpv-ggrq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.218920294s
Jun 2 21:22:52.631: INFO: Pod "pod-subpath-test-dynamicpv-ggrq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.286751618s
Jun 2 21:22:54.699: INFO: Pod "pod-subpath-test-dynamicpv-ggrq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.354851161s
Jun 2 21:22:56.769: INFO: Pod "pod-subpath-test-dynamicpv-ggrq": Phase="Running", Reason="", readiness=true. Elapsed: 10.424450264s
... skipping 5 lines ...
Jun 2 21:23:09.186: INFO: Pod "pod-subpath-test-dynamicpv-ggrq": Phase="Running", Reason="", readiness=true. Elapsed: 22.841837529s
Jun 2 21:23:11.253: INFO: Pod "pod-subpath-test-dynamicpv-ggrq": Phase="Running", Reason="", readiness=true. Elapsed: 24.909295765s
Jun 2 21:23:13.322: INFO: Pod "pod-subpath-test-dynamicpv-ggrq": Phase="Running", Reason="", readiness=true. Elapsed: 26.97778186s
Jun 2 21:23:15.391: INFO: Pod "pod-subpath-test-dynamicpv-ggrq": Phase="Running", Reason="", readiness=true. Elapsed: 29.047038725s
Jun 2 21:23:17.460: INFO: Pod "pod-subpath-test-dynamicpv-ggrq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.115375065s
STEP: Saw pod success
Jun 2 21:23:17.460: INFO: Pod "pod-subpath-test-dynamicpv-ggrq" satisfied condition "Succeeded or Failed"
Jun 2 21:23:17.526: INFO: Trying to get logs from node ip-172-20-53-92.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-ggrq container test-container-subpath-dynamicpv-ggrq: <nil>
STEP: delete the pod
Jun 2 21:23:17.672: INFO: Waiting for pod pod-subpath-test-dynamicpv-ggrq to disappear
Jun 2 21:23:17.739: INFO: Pod pod-subpath-test-dynamicpv-ggrq no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-ggrq
Jun 2 21:23:17.739: INFO: Deleting pod "pod-subpath-test-dynamicpv-ggrq" in namespace "provisioning-42"
... skipping 205 lines ...
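The "Trying to get logs from node ... container ...: <nil>" entries correspond to fetching the finished test container's logs to verify its output. A minimal client-go sketch of that call follows (kubeconfig path is illustrative; pod and container names are copied from the log; not the framework's actual helper):

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Fetch the logs of the completed test container, as the e2e suite does
	// after "Saw pod success".
	req := cs.CoreV1().Pods("provisioning-42").GetLogs("pod-subpath-test-dynamicpv-ggrq",
		&v1.PodLogOptions{Container: "test-container-subpath-dynamicpv-ggrq"})
	logs, err := req.DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(logs))
}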
[Testpattern: Dynamic PV (block volmode)] volumes
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
  should store data
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 2 21:23:53.996: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-25979.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename volumemode
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297
Jun 2 21:23:54.341: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Jun 2 21:23:54.341: INFO: Creating resource for dynamic PV
Jun 2 21:23:54.341: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass volumemode-2734mj6km
STEP: creating a claim
STEP: Creating pod
STEP: Waiting for the pod to fail
Jun 2 21:24:00.833: INFO: Deleting pod "pod-1c7ef99a-0584-4a50-a11f-dbec5609f7d6" in namespace "volumemode-2734"
Jun 2 21:24:00.907: INFO: Wait up to 5m0s for pod "pod-1c7ef99a-0584-4a50-a11f-dbec5609f7d6" to be fully deleted
STEP: Deleting pvc
Jun 2 21:24:05.184: INFO: Deleting PersistentVolumeClaim "awsxz5q4"
Jun 2 21:24:05.257: INFO: Waiting up to 5m0s for PersistentVolume pvc-f0c6ae0f-d6ab-49f3-88a7-abdaa525c82f to get deleted
Jun 2 21:24:05.326: INFO: PersistentVolume pvc-f0c6ae0f-d6ab-49f3-88a7-abdaa525c82f found and phase=Released (68.91592ms)
... skipping 9 lines ...
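This volumeMode spec provisions a Filesystem-mode claim and then consumes it as a raw block device, so the pod is expected to fail to start; that is what "Waiting for the pod to fail" checks before the cleanup above. An illustrative claim/pod pair showing the mismatch follows (the object names and image are made up, not the framework's generated objects; the StorageClass name is taken from the log):

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	fsMode := v1.PersistentVolumeFilesystem
	sc := "volumemode-2734mj6km"

	// Claim provisioned with volumeMode: Filesystem ...
	pvc := &v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "mismatch-claim", Namespace: "volumemode-2734"},
		Spec: v1.PersistentVolumeClaimSpec{
			AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			StorageClassName: &sc,
			VolumeMode:       &fsMode,
			Resources: v1.ResourceRequirements{
				Requests: v1.ResourceList{v1.ResourceStorage: resource.MustParse("1Gi")},
			},
		},
	}

	// ... but the pod asks for it as a raw block device via volumeDevices, so
	// the kubelet refuses to run the container, which is the failure the test
	// waits for.
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "mismatch-pod", Namespace: "volumemode-2734"},
		Spec: v1.PodSpec{
			Volumes: []v1.Volume{{
				Name: "data",
				VolumeSource: v1.VolumeSource{
					PersistentVolumeClaim: &v1.PersistentVolumeClaimVolumeSource{ClaimName: pvc.Name},
				},
			}},
			Containers: []v1.Container{{
				Name:          "app",
				Image:         "busybox",
				Command:       []string{"sleep", "3600"},
				VolumeDevices: []v1.VolumeDevice{{Name: "data", DevicePath: "/dev/xvda"}},
			}},
		},
	}

	for _, obj := range []interface{}{pvc, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}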
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to use a volume in a pod with mismatched mode [Slow]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297
------------------------------
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
... skipping 91 lines ...
Jun 2 21:23:34.474: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi}
STEP: creating a StorageClass provisioning-62476zq5w
STEP: creating a claim
Jun 2 21:23:34.543: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-nzwz
STEP: Creating a pod to test subpath
Jun 2 21:23:34.750: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-nzwz" in namespace "provisioning-6247" to be "Succeeded or Failed"
Jun 2 21:23:34.817: INFO: Pod "pod-subpath-test-dynamicpv-nzwz": Phase="Pending", Reason="", readiness=false. Elapsed: 67.007999ms
Jun 2 21:23:36.885: INFO: Pod "pod-subpath-test-dynamicpv-nzwz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134357459s
Jun 2 21:23:38.952: INFO: Pod "pod-subpath-test-dynamicpv-nzwz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.201678175s
Jun 2 21:23:41.021: INFO: Pod "pod-subpath-test-dynamicpv-nzwz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.270378857s
Jun 2 21:23:43.091: INFO: Pod "pod-subpath-test-dynamicpv-nzwz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.340295651s
Jun 2 21:23:45.158: INFO: Pod "pod-subpath-test-dynamicpv-nzwz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.407912621s
Jun 2 21:23:47.228: INFO: Pod "pod-subpath-test-dynamicpv-nzwz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.477618994s
STEP: Saw pod success
Jun 2 21:23:47.228: INFO: Pod "pod-subpath-test-dynamicpv-nzwz" satisfied condition "Succeeded or Failed"
Jun 2 21:23:47.300: INFO: Trying to get logs from node ip-172-20-53-92.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-nzwz container test-container-subpath-dynamicpv-nzwz: <nil>
STEP: delete the pod
Jun 2 21:23:47.447: INFO: Waiting for pod pod-subpath-test-dynamicpv-nzwz to disappear
Jun 2 21:23:47.514: INFO: Pod pod-subpath-test-dynamicpv-nzwz no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-nzwz
Jun 2 21:23:47.514: INFO: Deleting pod "pod-subpath-test-dynamicpv-nzwz" in namespace "provisioning-6247"
STEP: Creating pod pod-subpath-test-dynamicpv-nzwz
STEP: Creating a pod to test subpath
Jun 2 21:23:47.655: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-nzwz" in namespace "provisioning-6247" to be "Succeeded or Failed"
Jun 2 21:23:47.723: INFO: Pod "pod-subpath-test-dynamicpv-nzwz": Phase="Pending", Reason="", readiness=false.
Elapsed: 67.381703ms
Jun 2 21:23:49.791: INFO: Pod "pod-subpath-test-dynamicpv-nzwz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136020171s
Jun 2 21:23:51.862: INFO: Pod "pod-subpath-test-dynamicpv-nzwz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.207020545s
Jun 2 21:23:53.932: INFO: Pod "pod-subpath-test-dynamicpv-nzwz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.276867726s
Jun 2 21:23:56.000: INFO: Pod "pod-subpath-test-dynamicpv-nzwz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.345297628s
Jun 2 21:23:58.068: INFO: Pod "pod-subpath-test-dynamicpv-nzwz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.413134804s
STEP: Saw pod success
Jun 2 21:23:58.068: INFO: Pod "pod-subpath-test-dynamicpv-nzwz" satisfied condition "Succeeded or Failed"
Jun 2 21:23:58.139: INFO: Trying to get logs from node ip-172-20-53-92.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-nzwz container test-container-subpath-dynamicpv-nzwz: <nil>
STEP: delete the pod
Jun 2 21:23:58.286: INFO: Waiting for pod pod-subpath-test-dynamicpv-nzwz to disappear
Jun 2 21:23:58.353: INFO: Pod pod-subpath-test-dynamicpv-nzwz no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-nzwz
Jun 2 21:23:58.353: INFO: Deleting pod "pod-subpath-test-dynamicpv-nzwz" in namespace "provisioning-6247"
... skipping 118 lines ...