PR andyzhangx: chore: upgrade cloud-provider-azure lib
Result ABORTED
Tests 0 failed / 0 succeeded
Started 2022-04-28 02:29
Elapsed 42m16s
Revision 517f68332aabcdfeb36c9688b4eba198621073d2
Refs 1307

No Test Failures!


Error lines from build-log.txt

... skipping 222 lines ...

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 11156  100 11156    0     0   132k      0 --:--:-- --:--:-- --:--:--  132k
Downloading https://get.helm.sh/helm-v3.8.2-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
docker pull k8sprow.azurecr.io/azuredisk-csi:v1.17.0-fbe93299d95515b6bc20e8c7747e28588da683bd || make container-all push-manifest
Error response from daemon: manifest for k8sprow.azurecr.io/azuredisk-csi:v1.17.0-fbe93299d95515b6bc20e8c7747e28588da683bd not found: manifest unknown: manifest tagged by "v1.17.0-fbe93299d95515b6bc20e8c7747e28588da683bd" is not found
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver'
CGO_ENABLED=0 GOOS=windows go build -a -ldflags "-X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.driverVersion=v1.17.0-fbe93299d95515b6bc20e8c7747e28588da683bd -X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.gitCommit=fbe93299d95515b6bc20e8c7747e28588da683bd -X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.buildDate=2022-04-28T02:34:41Z -extldflags "-static""  -mod vendor -o _output/amd64/azurediskplugin.exe ./pkg/azurediskplugin
docker buildx rm container-builder || true
error: no builder "container-builder" found
docker buildx create --use --name=container-builder
container-builder
# enable qemu for arm64 build
# https://github.com/docker/buildx/issues/464#issuecomment-741507760
docker run --privileged --rm tonistiigi/binfmt --uninstall qemu-aarch64
Unable to find image 'tonistiigi/binfmt:latest' locally
... skipping 1557 lines ...
                    type: string
                type: object
                oneOf:
                - required: ["persistentVolumeClaimName"]
                - required: ["volumeSnapshotContentName"]
              volumeSnapshotClassName:
                description: 'VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default VolumeSnapshotClasses: one default per CSI Driver. If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exists for a given CSI Driver and more than one has been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field.'
                type: string
            required:
            - source
            type: object
          status:
            description: status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.
... skipping 2 lines ...
                description: 'boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.'
                type: string
              creationTime:
                description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown.
                format: date-time
                type: string
              error:
                description: error is the last observed error during snapshot creation, if any. This field could be helpful to upper-level controllers (i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurs during the snapshot creation. Upon success, this error field will be cleared.
                properties:
                  message:
                    description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.'
                    type: string
                  time:
                    description: time is the timestamp when the error was encountered.
                    format: date-time
                    type: string
                type: object
              readyToUse:
                description: readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown.
                type: boolean
              restoreSize:
                type: string
                description: restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown.
                pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
                x-kubernetes-int-or-string: true
            type: object
        required:
        - spec
        type: object
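
The schema excerpt above covers the VolumeSnapshot objects that the snapshot test cases further down exercise. As a minimal sketch conforming to it (object names and the class name are illustrative, not taken from this run):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: example-snapshot                      # illustrative name
spec:
  volumeSnapshotClassName: example-snapclass  # illustrative; omit to fall back to the default class
  source:
    persistentVolumeClaimName: example-pvc    # exactly one field under spec.source is required
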
... skipping 60 lines ...
                    type: string
                  volumeSnapshotContentName:
                    description: volumeSnapshotContentName specifies the name of a pre-existing VolumeSnapshotContent object representing an existing volume snapshot. This field should be set if the snapshot already exists and only needs a representation in Kubernetes. This field is immutable.
                    type: string
                type: object
              volumeSnapshotClassName:
                description: 'VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default VolumeSnapshotClasses: one default per CSI Driver. If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exists for a given CSI Driver and more than one has been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field.'
                type: string
            required:
            - source
            type: object
          status:
            description: status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.
... skipping 2 lines ...
                description: 'boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.'
                type: string
              creationTime:
                description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown.
                format: date-time
                type: string
              error:
                description: error is the last observed error during snapshot creation, if any. This field could be helpful to upper-level controllers (i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurs during the snapshot creation. Upon success, this error field will be cleared.
                properties:
                  message:
                    description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.'
                    type: string
                  time:
                    description: time is the timestamp when the error was encountered.
                    format: date-time
                    type: string
                type: object
              readyToUse:
                description: readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown.
                type: boolean
              restoreSize:
                type: string
                description: restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown.
                pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
                x-kubernetes-int-or-string: true
            type: object
        required:
        - spec
        type: object
... skipping 254 lines ...
            description: status represents the current information of a snapshot.
            properties:
              creationTime:
                description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. On Unix, the command `date +%s%N` returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC.
                format: int64
                type: integer
              error:
                description: error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared.
                properties:
                  message:
                    description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.'
                    type: string
                  time:
                    description: time is the timestamp when the error was encountered.
                    format: date-time
                    type: string
                type: object
              readyToUse:
                description: readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown.
                type: boolean
              restoreSize:
                description: restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown.
                format: int64
                minimum: 0
                type: integer
              snapshotHandle:
                description: snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress.
                type: string
            type: object
        required:
        - spec
        type: object
    served: true
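
For the pre-provisioned case described by the status fields above, the companion object is a VolumeSnapshotContent that points at an existing snapshot on the storage system. A minimal sketch, assuming the v1 API and illustrative names (this job registers the driver as test.csi.azure.com; disk.csi.azure.com is assumed here as the usual production driver name):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: example-content                    # illustrative name
spec:
  deletionPolicy: Retain                   # keep the backing snapshot when this object is deleted
  driver: disk.csi.azure.com               # assumed driver name; this run uses test.csi.azure.com
  source:
    snapshotHandle: example-snapshot-id    # ID of the pre-existing snapshot on the storage system
  volumeSnapshotRef:
    name: example-snapshot
    namespace: default
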
... skipping 108 lines ...
            description: status represents the current information of a snapshot.
            properties:
              creationTime:
                description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. On Unix, the command `date +%s%N` returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC.
                format: int64
                type: integer
              error:
                description: error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared.
                properties:
                  message:
                    description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.'
                    type: string
                  time:
                    description: time is the timestamp when the error was encountered.
                    format: date-time
                    type: string
                type: object
              readyToUse:
                description: readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown.
                type: boolean
              restoreSize:
                description: restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown.
                format: int64
                minimum: 0
                type: integer
              snapshotHandle:
                description: snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress.
                type: string
            type: object
        required:
        - spec
        type: object
    served: true
... skipping 861 lines ...
          image: "mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.4.0"
          args:
            - "-csi-address=$(ADDRESS)"
            - "-v=2"
            - "-leader-election"
            - "--leader-election-namespace=kube-system"
            - '-handle-volume-inuse-error=false'
            - '-feature-gates=RecoverVolumeExpansionFailure=true'
            - "-timeout=240s"
          env:
            - name: ADDRESS
              value: /csi/csi.sock
          volumeMounts:
... skipping 604 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should concurrently access the single read-only volume from pods on the same node
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:421
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node","total":27,"completed":1,"skipped":36,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 28 02:46:37.680: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping
... skipping 45 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: test.csi.azure.com]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278

    Distro debian doesn't support ntfs -- skipping

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:127
------------------------------
... skipping 174 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support multiple inline ephemeral volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:252
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":32,"completed":1,"skipped":21,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 28 02:47:46.981: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 91 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support restarting containers using file as subpath [Slow][LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:331
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]","total":38,"completed":1,"skipped":123,"failed":0}

SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 28 02:47:48.920: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping
... skipping 227 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (ext4)] multiVolume [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should access to two volumes with different volume mode and retain data across pod recreation on the same node
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:207
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node","total":38,"completed":1,"skipped":113,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 28 02:48:05.375: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping
... skipping 184 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should provision storage with pvc data source
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:239
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source","total":35,"completed":1,"skipped":25,"failed":0}

SSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] 
  should concurrently access the single read-only volume from pods on the same node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:421
... skipping 87 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should concurrently access the single read-only volume from pods on the same node
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:421
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node","total":32,"completed":2,"skipped":100,"failed":0}

SSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath 
  should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
... skipping 16 lines ...
Apr 28 02:48:30.859: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comwlgb4] to have phase Bound
Apr 28 02:48:30.967: INFO: PersistentVolumeClaim test.csi.azure.comwlgb4 found but phase is Pending instead of Bound.
Apr 28 02:48:33.077: INFO: PersistentVolumeClaim test.csi.azure.comwlgb4 found but phase is Pending instead of Bound.
Apr 28 02:48:35.187: INFO: PersistentVolumeClaim test.csi.azure.comwlgb4 found and phase=Bound (4.328786091s)
STEP: Creating pod pod-subpath-test-dynamicpv-k8b2
STEP: Creating a pod to test subpath
Apr 28 02:48:35.514: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-k8b2" in namespace "provisioning-8667" to be "Succeeded or Failed"
Apr 28 02:48:35.622: INFO: Pod "pod-subpath-test-dynamicpv-k8b2": Phase="Pending", Reason="", readiness=false. Elapsed: 107.83769ms
Apr 28 02:48:37.732: INFO: Pod "pod-subpath-test-dynamicpv-k8b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218303602s
Apr 28 02:48:39.841: INFO: Pod "pod-subpath-test-dynamicpv-k8b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327040924s
Apr 28 02:48:41.951: INFO: Pod "pod-subpath-test-dynamicpv-k8b2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.436759606s
Apr 28 02:48:44.061: INFO: Pod "pod-subpath-test-dynamicpv-k8b2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.546661588s
Apr 28 02:48:46.171: INFO: Pod "pod-subpath-test-dynamicpv-k8b2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.656577833s
... skipping 5 lines ...
Apr 28 02:48:58.829: INFO: Pod "pod-subpath-test-dynamicpv-k8b2": Phase="Pending", Reason="", readiness=false. Elapsed: 23.315097009s
Apr 28 02:49:00.938: INFO: Pod "pod-subpath-test-dynamicpv-k8b2": Phase="Pending", Reason="", readiness=false. Elapsed: 25.424262258s
Apr 28 02:49:03.047: INFO: Pod "pod-subpath-test-dynamicpv-k8b2": Phase="Pending", Reason="", readiness=false. Elapsed: 27.532914383s
Apr 28 02:49:05.156: INFO: Pod "pod-subpath-test-dynamicpv-k8b2": Phase="Pending", Reason="", readiness=false. Elapsed: 29.641821666s
Apr 28 02:49:07.268: INFO: Pod "pod-subpath-test-dynamicpv-k8b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.753398741s
STEP: Saw pod success
Apr 28 02:49:07.268: INFO: Pod "pod-subpath-test-dynamicpv-k8b2" satisfied condition "Succeeded or Failed"
Apr 28 02:49:07.375: INFO: Trying to get logs from node k8s-agentpool1-19612607-vmss000000 pod pod-subpath-test-dynamicpv-k8b2 container test-container-volume-dynamicpv-k8b2: <nil>
STEP: delete the pod
Apr 28 02:49:07.601: INFO: Waiting for pod pod-subpath-test-dynamicpv-k8b2 to disappear
Apr 28 02:49:07.708: INFO: Pod pod-subpath-test-dynamicpv-k8b2 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-k8b2
Apr 28 02:49:07.708: INFO: Deleting pod "pod-subpath-test-dynamicpv-k8b2" in namespace "provisioning-8667"
... skipping 23 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support existing directory
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":35,"completed":2,"skipped":45,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] 
  should access to two volumes with the same volume mode and retain data across pod recreation on the same node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:136
... skipping 235 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (ext4)] multiVolume [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should access to two volumes with the same volume mode and retain data across pod recreation on the same node
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:136
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node","total":31,"completed":1,"skipped":8,"failed":0}

SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 28 02:50:02.512: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping
... skipping 143 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (ext4)] multiVolume [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:321
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]","total":30,"completed":1,"skipped":162,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy 
  (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:214
... skipping 112 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:214
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents","total":27,"completed":2,"skipped":373,"failed":0}

SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 28 02:50:25.094: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping
... skipping 45 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: test.csi.azure.com]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail to use a volume in a pod with mismatched mode [Slow] [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297

    Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:262
------------------------------
... skipping 107 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support multiple inline ephemeral volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:252
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":38,"completed":2,"skipped":211,"failed":0}

SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow]
... skipping 30 lines ...
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 28 02:50:53.326: INFO: >>> kubeConfig: /root/tmp2972259524/kubeconfig/kubeconfig.westeurope.json
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Apr 28 02:50:53.871: INFO: Driver didn't provide topology keys -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 02:50:53.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "topology-4818" for this suite.


S [SKIPPING] [0.768 seconds]
External Storage [Driver: test.csi.azure.com]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail to schedule a pod which has topologies that conflict with AllowedTopologies [Measurement]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

    Driver didn't provide topology keys -- skipping

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:124
------------------------------
... skipping 234 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should access to two volumes with different volume mode and retain data across pod recreation on different node
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:246
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node","total":38,"completed":2,"skipped":140,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 28 02:50:55.697: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping
... skipping 24 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: test.csi.azure.com]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267

    Distro debian doesn't support ntfs -- skipping

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:127
------------------------------
... skipping 63 lines ...
Apr 28 02:50:26.761: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.com6wmwb] to have phase Bound
Apr 28 02:50:26.869: INFO: PersistentVolumeClaim test.csi.azure.com6wmwb found but phase is Pending instead of Bound.
Apr 28 02:50:28.977: INFO: PersistentVolumeClaim test.csi.azure.com6wmwb found but phase is Pending instead of Bound.
Apr 28 02:50:31.090: INFO: PersistentVolumeClaim test.csi.azure.com6wmwb found and phase=Bound (4.328914399s)
STEP: Creating pod pod-subpath-test-dynamicpv-nsv7
STEP: Creating a pod to test atomic-volume-subpath
Apr 28 02:50:31.416: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-nsv7" in namespace "provisioning-3111" to be "Succeeded or Failed"
Apr 28 02:50:31.525: INFO: Pod "pod-subpath-test-dynamicpv-nsv7": Phase="Pending", Reason="", readiness=false. Elapsed: 108.58039ms
Apr 28 02:50:33.635: INFO: Pod "pod-subpath-test-dynamicpv-nsv7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218472581s
Apr 28 02:50:35.745: INFO: Pod "pod-subpath-test-dynamicpv-nsv7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328299943s
Apr 28 02:50:37.855: INFO: Pod "pod-subpath-test-dynamicpv-nsv7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438412124s
Apr 28 02:50:39.964: INFO: Pod "pod-subpath-test-dynamicpv-nsv7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548022644s
Apr 28 02:50:42.075: INFO: Pod "pod-subpath-test-dynamicpv-nsv7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.658503857s
... skipping 10 lines ...
Apr 28 02:51:05.288: INFO: Pod "pod-subpath-test-dynamicpv-nsv7": Phase="Running", Reason="", readiness=true. Elapsed: 33.872041745s
Apr 28 02:51:07.399: INFO: Pod "pod-subpath-test-dynamicpv-nsv7": Phase="Running", Reason="", readiness=true. Elapsed: 35.982867977s
Apr 28 02:51:09.509: INFO: Pod "pod-subpath-test-dynamicpv-nsv7": Phase="Running", Reason="", readiness=true. Elapsed: 38.093181599s
Apr 28 02:51:11.619: INFO: Pod "pod-subpath-test-dynamicpv-nsv7": Phase="Running", Reason="", readiness=true. Elapsed: 40.202730411s
Apr 28 02:51:13.728: INFO: Pod "pod-subpath-test-dynamicpv-nsv7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 42.312033249s
STEP: Saw pod success
Apr 28 02:51:13.728: INFO: Pod "pod-subpath-test-dynamicpv-nsv7" satisfied condition "Succeeded or Failed"
Apr 28 02:51:13.839: INFO: Trying to get logs from node k8s-agentpool1-19612607-vmss000002 pod pod-subpath-test-dynamicpv-nsv7 container test-container-subpath-dynamicpv-nsv7: <nil>
STEP: delete the pod
Apr 28 02:51:14.123: INFO: Waiting for pod pod-subpath-test-dynamicpv-nsv7 to disappear
Apr 28 02:51:14.231: INFO: Pod pod-subpath-test-dynamicpv-nsv7 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-nsv7
Apr 28 02:51:14.231: INFO: Deleting pod "pod-subpath-test-dynamicpv-nsv7" in namespace "provisioning-3111"
... skipping 23 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support file as subpath [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":27,"completed":3,"skipped":522,"failed":0}

SSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] volumes 
  should store data
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
... skipping 107 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should store data
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] volumes should store data","total":30,"completed":2,"skipped":193,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] volumes 
  should store data
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
... skipping 100 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should store data
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":35,"completed":3,"skipped":71,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 28 02:52:04.728: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 37 lines ...
Apr 28 02:50:03.297: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comm6852] to have phase Bound
Apr 28 02:50:03.406: INFO: PersistentVolumeClaim test.csi.azure.comm6852 found but phase is Pending instead of Bound.
Apr 28 02:50:05.514: INFO: PersistentVolumeClaim test.csi.azure.comm6852 found but phase is Pending instead of Bound.
Apr 28 02:50:07.622: INFO: PersistentVolumeClaim test.csi.azure.comm6852 found and phase=Bound (4.325212003s)
STEP: Creating pod pod-subpath-test-dynamicpv-6gfh
STEP: Creating a pod to test subpath
Apr 28 02:50:07.947: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-6gfh" in namespace "provisioning-5872" to be "Succeeded or Failed"
Apr 28 02:50:08.056: INFO: Pod "pod-subpath-test-dynamicpv-6gfh": Phase="Pending", Reason="", readiness=false. Elapsed: 108.344063ms
Apr 28 02:50:10.165: INFO: Pod "pod-subpath-test-dynamicpv-6gfh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217555982s
Apr 28 02:50:12.275: INFO: Pod "pod-subpath-test-dynamicpv-6gfh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327847403s
Apr 28 02:50:14.385: INFO: Pod "pod-subpath-test-dynamicpv-6gfh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.437515737s
Apr 28 02:50:16.494: INFO: Pod "pod-subpath-test-dynamicpv-6gfh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.546793198s
Apr 28 02:50:18.609: INFO: Pod "pod-subpath-test-dynamicpv-6gfh": Phase="Pending", Reason="", readiness=false. Elapsed: 10.661579005s
... skipping 6 lines ...
Apr 28 02:50:33.372: INFO: Pod "pod-subpath-test-dynamicpv-6gfh": Phase="Pending", Reason="", readiness=false. Elapsed: 25.425239309s
Apr 28 02:50:35.481: INFO: Pod "pod-subpath-test-dynamicpv-6gfh": Phase="Pending", Reason="", readiness=false. Elapsed: 27.533700791s
Apr 28 02:50:37.589: INFO: Pod "pod-subpath-test-dynamicpv-6gfh": Phase="Pending", Reason="", readiness=false. Elapsed: 29.641734814s
Apr 28 02:50:39.698: INFO: Pod "pod-subpath-test-dynamicpv-6gfh": Phase="Pending", Reason="", readiness=false. Elapsed: 31.751245563s
Apr 28 02:50:41.807: INFO: Pod "pod-subpath-test-dynamicpv-6gfh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.859689197s
STEP: Saw pod success
Apr 28 02:50:41.807: INFO: Pod "pod-subpath-test-dynamicpv-6gfh" satisfied condition "Succeeded or Failed"
Apr 28 02:50:41.915: INFO: Trying to get logs from node k8s-agentpool1-19612607-vmss000000 pod pod-subpath-test-dynamicpv-6gfh container test-container-subpath-dynamicpv-6gfh: <nil>
STEP: delete the pod
Apr 28 02:50:42.166: INFO: Waiting for pod pod-subpath-test-dynamicpv-6gfh to disappear
Apr 28 02:50:42.274: INFO: Pod pod-subpath-test-dynamicpv-6gfh no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-6gfh
Apr 28 02:50:42.274: INFO: Deleting pod "pod-subpath-test-dynamicpv-6gfh" in namespace "provisioning-5872"
STEP: Creating pod pod-subpath-test-dynamicpv-6gfh
STEP: Creating a pod to test subpath
Apr 28 02:50:42.494: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-6gfh" in namespace "provisioning-5872" to be "Succeeded or Failed"
Apr 28 02:50:42.603: INFO: Pod "pod-subpath-test-dynamicpv-6gfh": Phase="Pending", Reason="", readiness=false. Elapsed: 108.509646ms
Apr 28 02:50:44.711: INFO: Pod "pod-subpath-test-dynamicpv-6gfh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216817866s
Apr 28 02:50:46.819: INFO: Pod "pod-subpath-test-dynamicpv-6gfh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.325332062s
Apr 28 02:50:48.928: INFO: Pod "pod-subpath-test-dynamicpv-6gfh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.43385328s
Apr 28 02:50:51.038: INFO: Pod "pod-subpath-test-dynamicpv-6gfh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.543449183s
Apr 28 02:50:53.146: INFO: Pod "pod-subpath-test-dynamicpv-6gfh": Phase="Pending", Reason="", readiness=false. Elapsed: 10.652124255s
... skipping 23 lines ...
Apr 28 02:51:43.777: INFO: Pod "pod-subpath-test-dynamicpv-6gfh": Phase="Pending", Reason="", readiness=false. Elapsed: 1m1.283059131s
Apr 28 02:51:45.886: INFO: Pod "pod-subpath-test-dynamicpv-6gfh": Phase="Pending", Reason="", readiness=false. Elapsed: 1m3.39176605s
Apr 28 02:51:47.996: INFO: Pod "pod-subpath-test-dynamicpv-6gfh": Phase="Pending", Reason="", readiness=false. Elapsed: 1m5.502283535s
Apr 28 02:51:50.105: INFO: Pod "pod-subpath-test-dynamicpv-6gfh": Phase="Pending", Reason="", readiness=false. Elapsed: 1m7.610638222s
Apr 28 02:51:52.215: INFO: Pod "pod-subpath-test-dynamicpv-6gfh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m9.720408427s
STEP: Saw pod success
Apr 28 02:51:52.215: INFO: Pod "pod-subpath-test-dynamicpv-6gfh" satisfied condition "Succeeded or Failed"
Apr 28 02:51:52.329: INFO: Trying to get logs from node k8s-agentpool1-19612607-vmss000001 pod pod-subpath-test-dynamicpv-6gfh container test-container-subpath-dynamicpv-6gfh: <nil>
STEP: delete the pod
Apr 28 02:51:52.580: INFO: Waiting for pod pod-subpath-test-dynamicpv-6gfh to disappear
Apr 28 02:51:52.688: INFO: Pod pod-subpath-test-dynamicpv-6gfh no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-6gfh
Apr 28 02:51:52.688: INFO: Deleting pod "pod-subpath-test-dynamicpv-6gfh" in namespace "provisioning-5872"
... skipping 29 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should support existing directories when readOnly specified in the volumeSource
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":31,"completed":2,"skipped":37,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] 
  should access to two volumes with the same volume mode and retain data across pod recreation on different node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:166
... skipping 193 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should access to two volumes with the same volume mode and retain data across pod recreation on different node
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:166
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node","total":32,"completed":3,"skipped":105,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 28 02:53:39.604: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 84 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should create read-only inline ephemeral volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume","total":38,"completed":3,"skipped":284,"failed":0}

SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 28 02:54:17.042: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 140 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should provision storage with pvc data source
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:239
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source","total":30,"completed":3,"skipped":232,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath 
  should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267

[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 28 02:53:05.236: INFO: >>> kubeConfig: /root/tmp2972259524/kubeconfig/kubeconfig.westeurope.json
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267
Apr 28 02:53:05.777: INFO: Creating resource for dynamic PV
Apr 28 02:53:05.777: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(test.csi.azure.com) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-794-e2e-sc8889x
STEP: creating a claim
Apr 28 02:53:05.885: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Apr 28 02:53:05.996: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.com2d6pm] to have phase Bound
Apr 28 02:53:06.103: INFO: PersistentVolumeClaim test.csi.azure.com2d6pm found but phase is Pending instead of Bound.
Apr 28 02:53:08.211: INFO: PersistentVolumeClaim test.csi.azure.com2d6pm found but phase is Pending instead of Bound.
Apr 28 02:53:10.327: INFO: PersistentVolumeClaim test.csi.azure.com2d6pm found and phase=Bound (4.330513294s)
STEP: Creating pod pod-subpath-test-dynamicpv-287t
STEP: Checking for subpath error in container status
Apr 28 02:53:30.871: INFO: Deleting pod "pod-subpath-test-dynamicpv-287t" in namespace "provisioning-794"
Apr 28 02:53:30.980: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-287t" to be fully deleted
STEP: Deleting pod
Apr 28 02:53:33.198: INFO: Deleting pod "pod-subpath-test-dynamicpv-287t" in namespace "provisioning-794"
STEP: Deleting pvc
Apr 28 02:53:33.307: INFO: Deleting PersistentVolumeClaim "test.csi.azure.com2d6pm"
... skipping 22 lines ...

• [SLOW TEST:100.376 seconds]
External Storage [Driver: test.csi.azure.com]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]","total":31,"completed":3,"skipped":68,"failed":0}

SSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] 
  should access to two volumes with the same volume mode and retain data across pod recreation on the same node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:136
... skipping 205 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should access to two volumes with the same volume mode and retain data across pod recreation on the same node
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:136
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node","total":38,"completed":3,"skipped":245,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy 
  (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:214
... skipping 118 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:214
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents","total":27,"completed":4,"skipped":527,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 28 02:55:02.198: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 139 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:214
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents","total":35,"completed":4,"skipped":195,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 28 02:55:12.772: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 3 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: test.csi.azure.com]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240

    Distro debian doesn't support ntfs -- skipping

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:127
------------------------------
... skipping 128 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should concurrently access the single read-only volume from pods on the same node
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:421
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node","total":38,"completed":4,"skipped":401,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral 
  should create read/write inline ephemeral volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:194
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should create read/write inline ephemeral volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:194
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume","total":38,"completed":4,"skipped":304,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 28 02:56:24.321: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping
... skipping 51 lines ...
[It] should check snapshot fields, check restore correctly works, check deletion (ephemeral)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:174
Apr 28 02:53:40.275: INFO: Creating resource for dynamic PV
Apr 28 02:53:40.275: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(test.csi.azure.com) supported size:{ 1Mi} 
STEP: creating a StorageClass snapshotting-4249-e2e-scfpwn4
STEP: [init] starting a pod to use the claim
Apr 28 02:53:40.497: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-c2fxr" in namespace "snapshotting-4249" to be "Succeeded or Failed"
Apr 28 02:53:40.606: INFO: Pod "pvc-snapshottable-tester-c2fxr": Phase="Pending", Reason="", readiness=false. Elapsed: 109.185112ms
Apr 28 02:53:42.717: INFO: Pod "pvc-snapshottable-tester-c2fxr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220121583s
Apr 28 02:53:44.828: INFO: Pod "pvc-snapshottable-tester-c2fxr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330999807s
Apr 28 02:53:46.939: INFO: Pod "pvc-snapshottable-tester-c2fxr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.44236958s
Apr 28 02:53:49.051: INFO: Pod "pvc-snapshottable-tester-c2fxr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.554364082s
Apr 28 02:53:51.163: INFO: Pod "pvc-snapshottable-tester-c2fxr": Phase="Pending", Reason="", readiness=false. Elapsed: 10.666087169s
... skipping 20 lines ...
Apr 28 02:54:35.501: INFO: Pod "pvc-snapshottable-tester-c2fxr": Phase="Pending", Reason="", readiness=false. Elapsed: 55.00393791s
Apr 28 02:54:37.612: INFO: Pod "pvc-snapshottable-tester-c2fxr": Phase="Pending", Reason="", readiness=false. Elapsed: 57.1145842s
Apr 28 02:54:39.723: INFO: Pod "pvc-snapshottable-tester-c2fxr": Phase="Pending", Reason="", readiness=false. Elapsed: 59.225553253s
Apr 28 02:54:41.833: INFO: Pod "pvc-snapshottable-tester-c2fxr": Phase="Pending", Reason="", readiness=false. Elapsed: 1m1.336205058s
Apr 28 02:54:43.944: INFO: Pod "pvc-snapshottable-tester-c2fxr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m3.446740153s
STEP: Saw pod success
Apr 28 02:54:43.944: INFO: Pod "pvc-snapshottable-tester-c2fxr" satisfied condition "Succeeded or Failed"
STEP: [init] checking the claim
STEP: creating a SnapshotClass
STEP: creating a dynamic VolumeSnapshot
Apr 28 02:54:44.386: INFO: Waiting up to 5m0s for VolumeSnapshot snapshot-8l4sg to become ready
Apr 28 02:54:44.500: INFO: VolumeSnapshot snapshot-8l4sg found but is not ready.
Apr 28 02:54:46.610: INFO: VolumeSnapshot snapshot-8l4sg found but is not ready.
... skipping 39 lines ...
Apr 28 02:55:37.465: INFO: volumesnapshotcontents snapcontent-724914cd-1832-4461-a5e4-5b810a70121e has been found and is not deleted
Apr 28 02:55:38.575: INFO: volumesnapshotcontents snapcontent-724914cd-1832-4461-a5e4-5b810a70121e has been found and is not deleted
Apr 28 02:55:39.688: INFO: volumesnapshotcontents snapcontent-724914cd-1832-4461-a5e4-5b810a70121e has been found and is not deleted
Apr 28 02:55:40.821: INFO: volumesnapshotcontents snapcontent-724914cd-1832-4461-a5e4-5b810a70121e has been found and is not deleted
Apr 28 02:55:41.931: INFO: volumesnapshotcontents snapcontent-724914cd-1832-4461-a5e4-5b810a70121e has been found and is not deleted
Apr 28 02:55:43.041: INFO: volumesnapshotcontents snapcontent-724914cd-1832-4461-a5e4-5b810a70121e has been found and is not deleted
Apr 28 02:55:44.041: INFO: WaitUntil failed after reaching the timeout 30s
[AfterEach] volume snapshot controller
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:169
Apr 28 02:55:44.177: INFO: Pod restored-pvc-tester-h8lsb has the following logs: 
Apr 28 02:55:44.178: INFO: Deleting pod "restored-pvc-tester-h8lsb" in namespace "snapshotting-4249"
Apr 28 02:55:44.288: INFO: Wait up to 5m0s for pod "restored-pvc-tester-h8lsb" to be fully deleted
Apr 28 02:56:16.507: INFO: deleting snapshot "snapshotting-4249"/"snapshot-8l4sg"
... skipping 26 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:110
      
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:173
        should check snapshot fields, check restore correctly works, check deletion (ephemeral)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/snapshottable.go:174
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  should check snapshot fields, check restore correctly works, check deletion (ephemeral)","total":32,"completed":4,"skipped":236,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 28 02:56:24.414: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping
... skipping 3 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: test.csi.azure.com]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256

    Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:262
------------------------------
... skipping 21 lines ...
Apr 28 02:54:46.390: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.com5r45d] to have phase Bound
Apr 28 02:54:46.498: INFO: PersistentVolumeClaim test.csi.azure.com5r45d found but phase is Pending instead of Bound.
Apr 28 02:54:48.608: INFO: PersistentVolumeClaim test.csi.azure.com5r45d found but phase is Pending instead of Bound.
Apr 28 02:54:50.717: INFO: PersistentVolumeClaim test.csi.azure.com5r45d found and phase=Bound (4.326923254s)
STEP: Creating pod exec-volume-test-dynamicpv-8w9n
STEP: Creating a pod to test exec-volume-test
Apr 28 02:54:51.042: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-8w9n" in namespace "volume-2614" to be "Succeeded or Failed"
Apr 28 02:54:51.150: INFO: Pod "exec-volume-test-dynamicpv-8w9n": Phase="Pending", Reason="", readiness=false. Elapsed: 107.919635ms
Apr 28 02:54:53.260: INFO: Pod "exec-volume-test-dynamicpv-8w9n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21728666s
Apr 28 02:54:55.368: INFO: Pod "exec-volume-test-dynamicpv-8w9n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32578629s
Apr 28 02:54:57.476: INFO: Pod "exec-volume-test-dynamicpv-8w9n": Phase="Pending", Reason="", readiness=false. Elapsed: 6.433859547s
Apr 28 02:54:59.586: INFO: Pod "exec-volume-test-dynamicpv-8w9n": Phase="Pending", Reason="", readiness=false. Elapsed: 8.543131834s
Apr 28 02:55:01.695: INFO: Pod "exec-volume-test-dynamicpv-8w9n": Phase="Pending", Reason="", readiness=false. Elapsed: 10.652281078s
... skipping 2 lines ...
Apr 28 02:55:08.027: INFO: Pod "exec-volume-test-dynamicpv-8w9n": Phase="Pending", Reason="", readiness=false. Elapsed: 16.984447915s
Apr 28 02:55:10.135: INFO: Pod "exec-volume-test-dynamicpv-8w9n": Phase="Pending", Reason="", readiness=false. Elapsed: 19.092883304s
Apr 28 02:55:12.244: INFO: Pod "exec-volume-test-dynamicpv-8w9n": Phase="Pending", Reason="", readiness=false. Elapsed: 21.202085552s
Apr 28 02:55:14.354: INFO: Pod "exec-volume-test-dynamicpv-8w9n": Phase="Pending", Reason="", readiness=false. Elapsed: 23.311145654s
Apr 28 02:55:16.462: INFO: Pod "exec-volume-test-dynamicpv-8w9n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.419565029s
STEP: Saw pod success
Apr 28 02:55:16.462: INFO: Pod "exec-volume-test-dynamicpv-8w9n" satisfied condition "Succeeded or Failed"
Apr 28 02:55:16.570: INFO: Trying to get logs from node k8s-agentpool1-19612607-vmss000001 pod exec-volume-test-dynamicpv-8w9n container exec-container-dynamicpv-8w9n: <nil>
STEP: delete the pod
Apr 28 02:55:16.821: INFO: Waiting for pod exec-volume-test-dynamicpv-8w9n to disappear
Apr 28 02:55:16.929: INFO: Pod exec-volume-test-dynamicpv-8w9n no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-8w9n
Apr 28 02:55:16.929: INFO: Deleting pod "exec-volume-test-dynamicpv-8w9n" in namespace "volume-2614"
... skipping 27 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should allow exec of files on the volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":31,"completed":4,"skipped":81,"failed":0}

SSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral 
  should create read/write inline ephemeral volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:194
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should create read/write inline ephemeral volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:194
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume","total":35,"completed":5,"skipped":292,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-stress
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Apr 28 02:56:54.611: INFO: Driver test.csi.azure.com doesn't specify stress test options -- skipping
... skipping 24 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: test.csi.azure.com]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267

    Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:262
------------------------------
... skipping 8 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: test.csi.azure.com]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267

    Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:262
------------------------------
... skipping 132 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: test.csi.azure.com]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:174
  [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
    should fail to use a volume in a pod with mismatched mode [Slow] [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:297

    Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/external/external.go:262
------------------------------
... skipping 22 lines ...