PR: andyzhangx: [WIP]test: run external tests on Windows cluster
Result: FAILURE
Tests: 1 failed / 13 succeeded
Started: 2022-05-09 14:19
Elapsed: 1h29m
Revision: 2c451ad21962d7a173fa79054d81bdfdf8d3702f
Refs: 1323
job-version: v1.25.0-alpha.0.362+543893cbb0948b
kubetest-version:
revision: v1.25.0-alpha.0.362+543893cbb0948b

Test Failures


kubetest Test 1h15m

error during make e2e-test: exit status 2
				from junit_runner.xml




Error lines from build-log.txt

... skipping 222 lines ...

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 11156  100 11156    0     0   178k      0 --:--:-- --:--:-- --:--:--  178k
Downloading https://get.helm.sh/helm-v3.8.1-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
docker pull k8sprow.azurecr.io/azuredisk-csi:v1.18.0-75d73be167fd80191bedf5b1785eae6fb32bab5d || make container-all push-manifest
Error response from daemon: manifest for k8sprow.azurecr.io/azuredisk-csi:v1.18.0-75d73be167fd80191bedf5b1785eae6fb32bab5d not found: manifest unknown: manifest tagged by "v1.18.0-75d73be167fd80191bedf5b1785eae6fb32bab5d" is not found
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver'
CGO_ENABLED=0 GOOS=windows go build -a -ldflags "-X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.driverVersion=v1.18.0-75d73be167fd80191bedf5b1785eae6fb32bab5d -X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.gitCommit=75d73be167fd80191bedf5b1785eae6fb32bab5d -X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.buildDate=2022-05-09T14:23:57Z -extldflags "-static""  -mod vendor -o _output/amd64/azurediskplugin.exe ./pkg/azurediskplugin
docker buildx rm container-builder || true
error: no builder "container-builder" found
docker buildx create --use --name=container-builder
container-builder
# enable qemu for arm64 build
# https://github.com/docker/buildx/issues/464#issuecomment-741507760
docker run --privileged --rm tonistiigi/binfmt --uninstall qemu-aarch64
Unable to find image 'tonistiigi/binfmt:latest' locally
... skipping 1598 lines ...
                    type: string
                type: object
                oneOf:
                - required: ["persistentVolumeClaimName"]
                - required: ["volumeSnapshotContentName"]
              volumeSnapshotClassName:
                description: 'VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default Volume SnapshotClasses: one default per CSI Driver. If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exist for a given CSI Driver and more than one have been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field.'
                type: string
            required:
            - source
            type: object
          status:
            description: status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.
... skipping 2 lines ...
                description: 'boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.'
                type: string
              creationTime:
                description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown.
                format: date-time
                type: string
              error:
                description: error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers (i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurs during the snapshot creation. Upon success, this error field will be cleared.
                properties:
                  message:
                    description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.'
                    type: string
                  time:
                    description: time is the timestamp when the error was encountered.
                    format: date-time
                    type: string
                type: object
              readyToUse:
                description: readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown.
                type: boolean
              restoreSize:
                type: string
                description: restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown.
                pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
                x-kubernetes-int-or-string: true
            type: object
        required:
        - spec
        type: object
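The restoreSize field in the CRD schema above is validated against the standard Kubernetes "quantity" pattern. A minimal sketch checking a few values against the regex copied from the CRD (the sample values are illustrative, not from this log):

```python
import re

# Quantity pattern copied verbatim from the restoreSize field above.
QUANTITY = re.compile(
    r"^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))"
    r"(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$"
)

# Binary suffixes (Gi, Mi), plain integers, and SI suffixes (m) all match;
# arbitrary strings do not.
for value in ["5Gi", "1Mi", "1048576", "500m", "not-a-size"]:
    print(value, bool(QUANTITY.match(value)))
```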
... skipping 60 lines ...
                    type: string
                  volumeSnapshotContentName:
                    description: volumeSnapshotContentName specifies the name of a pre-existing VolumeSnapshotContent object representing an existing volume snapshot. This field should be set if the snapshot already exists and only needs a representation in Kubernetes. This field is immutable.
                    type: string
                type: object
              volumeSnapshotClassName:
                description: 'VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default Volume SnapshotClasses: one default per CSI Driver. If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exist for a given CSI Driver and more than one have been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field.'
                type: string
            required:
            - source
            type: object
          status:
            description: status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.
... skipping 2 lines ...
                description: 'boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.'
                type: string
              creationTime:
                description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown.
                format: date-time
                type: string
              error:
                description: error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers (i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurs during the snapshot creation. Upon success, this error field will be cleared.
                properties:
                  message:
                    description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.'
                    type: string
                  time:
                    description: time is the timestamp when the error was encountered.
                    format: date-time
                    type: string
                type: object
              readyToUse:
                description: readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown.
                type: boolean
              restoreSize:
                type: string
                description: restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown.
                pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
                x-kubernetes-int-or-string: true
            type: object
        required:
        - spec
        type: object
... skipping 254 lines ...
            description: status represents the current information of a snapshot.
            properties:
              creationTime:
                description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. On Unix, the command `date +%s%N` returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC.
                format: int64
                type: integer
              error:
                description: error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared.
                properties:
                  message:
                    description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.'
                    type: string
                  time:
                    description: time is the timestamp when the error was encountered.
                    format: date-time
                    type: string
                type: object
              readyToUse:
                description: readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown.
                type: boolean
              restoreSize:
                description: restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown.
                format: int64
                minimum: 0
                type: integer
              snapshotHandle:
                description: snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress.
                type: string
            type: object
        required:
        - spec
        type: object
    served: true
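The creationTime field in this older CRD version is Unix time in nanoseconds encoded as int64, per the description above. A minimal sketch converting such a value back to a readable UTC timestamp (the sample value is illustrative; it corresponds to the buildDate seen earlier in this log):

```python
from datetime import datetime, timezone

# Sample int64 nanosecond timestamp: 2022-05-09T14:23:57Z.
creation_time_ns = 1652106237000000000

# Split into whole seconds and the sub-second nanosecond remainder to
# avoid float precision loss on large int64 values.
seconds, nanos = divmod(creation_time_ns, 1_000_000_000)
dt = datetime.fromtimestamp(seconds, tz=timezone.utc)
print(dt.isoformat(), f"(+{nanos} ns)")
```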
... skipping 108 lines ...
            description: status represents the current information of a snapshot.
            properties:
              creationTime:
                description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. On Unix, the command `date +%s%N` returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC.
                format: int64
                type: integer
              error:
                description: error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared.
                properties:
                  message:
                    description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.'
                    type: string
                  time:
                    description: time is the timestamp when the error was encountered.
                    format: date-time
                    type: string
                type: object
              readyToUse:
                description: readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown.
                type: boolean
              restoreSize:
                description: restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown.
                format: int64
                minimum: 0
                type: integer
              snapshotHandle:
                description: snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress.
                type: string
            type: object
        required:
        - spec
        type: object
    served: true
... skipping 861 lines ...
          image: "mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.4.0"
          args:
            - "-csi-address=$(ADDRESS)"
            - "-v=2"
            - "-leader-election"
            - "--leader-election-namespace=kube-system"
            - '-handle-volume-inuse-error=false'
            - '-feature-gates=RecoverVolumeExpansionFailure=true'
            - "-timeout=240s"
          env:
            - name: ADDRESS
              value: /csi/csi.sock
          volumeMounts:
... skipping 496 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: test.csi.azure.com]
test/e2e/storage/external/external.go:174
  [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:50
    should fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach]
    test/e2e/storage/testsuites/subpath.go:258

    Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping

    test/e2e/storage/external/external.go:262
------------------------------
... skipping 56 lines ...

        test/e2e/storage/testsuites/snapshottable.go:280
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath 
  should fail if subpath directory is outside the volume [Slow][LinuxOnly]
  test/e2e/storage/testsuites/subpath.go:242

[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/framework/framework.go:187
... skipping 2 lines ...
STEP: Building a namespace api object, basename provisioning
W0509 14:35:23.806360   34036 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May  9 14:35:23.806: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May  9 14:35:23.924: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail if subpath directory is outside the volume [Slow][LinuxOnly]
  test/e2e/storage/testsuites/subpath.go:242
May  9 14:35:24.363: INFO: Creating resource for dynamic PV
May  9 14:35:24.363: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(test.csi.azure.com) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-8465-e2e-scskt9d
STEP: creating a claim
May  9 14:35:24.470: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
May  9 14:35:24.583: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comcb9br] to have phase Bound
May  9 14:35:24.689: INFO: PersistentVolumeClaim test.csi.azure.comcb9br found but phase is Pending instead of Bound.
May  9 14:35:26.797: INFO: PersistentVolumeClaim test.csi.azure.comcb9br found but phase is Pending instead of Bound.
May  9 14:35:28.906: INFO: PersistentVolumeClaim test.csi.azure.comcb9br found and phase=Bound (4.323892309s)
STEP: Creating pod pod-subpath-test-dynamicpv-mvvd
STEP: Checking for subpath error in container status
May  9 14:35:55.461: INFO: Deleting pod "pod-subpath-test-dynamicpv-mvvd" in namespace "provisioning-8465"
May  9 14:35:55.572: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-mvvd" to be fully deleted
STEP: Deleting pod
May  9 14:35:57.788: INFO: Deleting pod "pod-subpath-test-dynamicpv-mvvd" in namespace "provisioning-8465"
STEP: Deleting pvc
May  9 14:35:57.895: INFO: Deleting PersistentVolumeClaim "test.csi.azure.comcb9br"
... skipping 21 lines ...
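The "Waiting up to timeout=5m0s for PersistentVolumeClaims ... to have phase Bound" lines above are a standard poll-until-deadline loop. A generic sketch of that pattern, where `get_phase` is a hypothetical stand-in for the Kubernetes API call the e2e framework makes:

```python
import time

def wait_for_phase(get_phase, want="Bound", timeout=300.0, interval=2.0):
    """Poll get_phase() until it returns `want` or the deadline passes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_phase() == want:
            return True
        time.sleep(interval)
    return False

# Simulated claim that reports Pending twice before becoming Bound,
# mirroring the retries seen in the log above.
phases = iter(["Pending", "Pending", "Bound"])
print(wait_for_phase(lambda: next(phases), interval=0.01))
```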

• [SLOW TEST:101.768 seconds]
External Storage [Driver: test.csi.azure.com]
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:50
    should fail if subpath directory is outside the volume [Slow][LinuxOnly]
    test/e2e/storage/testsuites/subpath.go:242
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]","total":30,"completed":1,"skipped":32,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy 
  (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
  test/e2e/storage/testsuites/fsgroupchangepolicy.go:216
... skipping 100 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:50
    (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
    test/e2e/storage/testsuites/fsgroupchangepolicy.go:216
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents","total":43,"completed":1,"skipped":204,"failed":0}

SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-stress
  test/e2e/storage/framework/testsuite.go:51
May  9 14:37:37.152: INFO: Driver test.csi.azure.com doesn't specify stress test options -- skipping
... skipping 216 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:50
    should concurrently access the single volume from pods on the same node
    test/e2e/storage/testsuites/multivolume.go:298
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node","total":37,"completed":1,"skipped":5,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] 
  should access to two volumes with different volume mode and retain data across pod recreation on different node
  test/e2e/storage/testsuites/multivolume.go:248
... skipping 191 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:50
    should access to two volumes with different volume mode and retain data across pod recreation on different node
    test/e2e/storage/testsuites/multivolume.go:248
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node","total":32,"completed":1,"skipped":231,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (xfs)][Slow] volumes
  test/e2e/storage/framework/testsuite.go:51
May  9 14:39:21.354: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping
... skipping 259 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:50
    should access to two volumes with the same volume mode and retain data across pod recreation on different node
    test/e2e/storage/testsuites/multivolume.go:168
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node","total":37,"completed":1,"skipped":95,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:51
May  9 14:39:32.161: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping
... skipping 3 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: test.csi.azure.com]
test/e2e/storage/external/external.go:174
  [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:50
    should fail to use a volume in a pod with mismatched mode [Slow] [BeforeEach]
    test/e2e/storage/testsuites/volumemode.go:299

    Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping

    test/e2e/storage/external/external.go:262
------------------------------
... skipping 202 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:50
    should access to two volumes with the same volume mode and retain data across pod recreation on different node
    test/e2e/storage/testsuites/multivolume.go:168
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node","total":27,"completed":1,"skipped":28,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] 
  should access to two volumes with different volume mode and retain data across pod recreation on the same node
  test/e2e/storage/testsuites/multivolume.go:209
... skipping 195 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:50
    should access to two volumes with different volume mode and retain data across pod recreation on the same node
    test/e2e/storage/testsuites/multivolume.go:209
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node","total":37,"completed":2,"skipped":46,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
May  9 14:40:10.347: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping
... skipping 134 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (block volmode)] volumes
  test/e2e/storage/framework/testsuite.go:50
    should store data
    test/e2e/storage/testsuites/volumes.go:161
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":30,"completed":2,"skipped":56,"failed":0}

SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  test/e2e/storage/framework/testsuite.go:51
May  9 14:40:49.941: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 34 lines ...

    test/e2e/storage/external/external.go:262
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath 
  should fail if subpath file is outside the volume [Slow][LinuxOnly]
  test/e2e/storage/testsuites/subpath.go:258

[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
May  9 14:39:32.180: INFO: >>> kubeConfig: /root/tmp487086944/kubeconfig/kubeconfig.westeurope.json
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail if subpath file is outside the volume [Slow][LinuxOnly]
  test/e2e/storage/testsuites/subpath.go:258
May  9 14:39:32.937: INFO: Creating resource for dynamic PV
May  9 14:39:32.937: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(test.csi.azure.com) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-6541-e2e-sclz5jg
STEP: creating a claim
May  9 14:39:33.049: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
May  9 14:39:33.163: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comnzrzh] to have phase Bound
May  9 14:39:33.272: INFO: PersistentVolumeClaim test.csi.azure.comnzrzh found but phase is Pending instead of Bound.
May  9 14:39:35.381: INFO: PersistentVolumeClaim test.csi.azure.comnzrzh found but phase is Pending instead of Bound.
May  9 14:39:37.490: INFO: PersistentVolumeClaim test.csi.azure.comnzrzh found and phase=Bound (4.326967247s)
STEP: Creating pod pod-subpath-test-dynamicpv-gqxf
STEP: Checking for subpath error in container status
May  9 14:40:10.033: INFO: Deleting pod "pod-subpath-test-dynamicpv-gqxf" in namespace "provisioning-6541"
May  9 14:40:10.146: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-gqxf" to be fully deleted
STEP: Deleting pod
May  9 14:40:12.368: INFO: Deleting pod "pod-subpath-test-dynamicpv-gqxf" in namespace "provisioning-6541"
STEP: Deleting pvc
May  9 14:40:12.476: INFO: Deleting PersistentVolumeClaim "test.csi.azure.comnzrzh"
... skipping 22 lines ...

• [SLOW TEST:112.591 seconds]
External Storage [Driver: test.csi.azure.com]
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:50
    should fail if subpath file is outside the volume [Slow][LinuxOnly]
    test/e2e/storage/testsuites/subpath.go:258
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]","total":37,"completed":2,"skipped":148,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:51
May  9 14:41:24.785: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping
... skipping 125 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:50
    should support two pods which have the same volume definition
    test/e2e/storage/testsuites/ephemeral.go:216
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition","total":31,"completed":1,"skipped":8,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] 
  should access to two volumes with the same volume mode and retain data across pod recreation on the same node
  test/e2e/storage/testsuites/multivolume.go:138
... skipping 191 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (ext4)] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:50
    should access to two volumes with the same volume mode and retain data across pod recreation on the same node
    test/e2e/storage/testsuites/multivolume.go:138
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node","total":27,"completed":2,"skipped":58,"failed":0}

SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  test/e2e/storage/framework/testsuite.go:51
May  9 14:42:28.211: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 54 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: test.csi.azure.com]
test/e2e/storage/external/external.go:174
  [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:50
    should fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach]
    test/e2e/storage/testsuites/subpath.go:258

    Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping

    test/e2e/storage/external/external.go:262
------------------------------
... skipping 361 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (block volmode)] provisioning
  test/e2e/storage/framework/testsuite.go:50
    should provision storage with pvc data source in parallel [Slow]
    test/e2e/storage/testsuites/provisioning.go:459
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]","total":43,"completed":2,"skipped":249,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
May  9 14:42:55.426: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping
... skipping 3 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: test.csi.azure.com]
test/e2e/storage/external/external.go:174
  [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:50
    should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach]
    test/e2e/storage/testsuites/subpath.go:280

    Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping

    test/e2e/storage/external/external.go:262
------------------------------
... skipping 43 lines ...
May  9 14:40:11.343: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comfxrts] to have phase Bound
May  9 14:40:11.451: INFO: PersistentVolumeClaim test.csi.azure.comfxrts found but phase is Pending instead of Bound.
May  9 14:40:13.559: INFO: PersistentVolumeClaim test.csi.azure.comfxrts found but phase is Pending instead of Bound.
May  9 14:40:15.668: INFO: PersistentVolumeClaim test.csi.azure.comfxrts found and phase=Bound (4.324764602s)
STEP: Creating pod exec-volume-test-dynamicpv-ckdd
STEP: Creating a pod to test exec-volume-test
May  9 14:40:15.993: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-ckdd" in namespace "volume-741" to be "Succeeded or Failed"
May  9 14:40:16.105: INFO: Pod "exec-volume-test-dynamicpv-ckdd": Phase="Pending", Reason="", readiness=false. Elapsed: 112.010282ms
May  9 14:40:18.214: INFO: Pod "exec-volume-test-dynamicpv-ckdd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220725859s
May  9 14:40:20.323: INFO: Pod "exec-volume-test-dynamicpv-ckdd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330042908s
May  9 14:40:22.432: INFO: Pod "exec-volume-test-dynamicpv-ckdd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438432194s
May  9 14:40:24.541: INFO: Pod "exec-volume-test-dynamicpv-ckdd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548180661s
May  9 14:40:26.651: INFO: Pod "exec-volume-test-dynamicpv-ckdd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.657324294s
... skipping 7 lines ...
May  9 14:40:43.535: INFO: Pod "exec-volume-test-dynamicpv-ckdd": Phase="Pending", Reason="", readiness=false. Elapsed: 27.541597549s
May  9 14:40:45.644: INFO: Pod "exec-volume-test-dynamicpv-ckdd": Phase="Pending", Reason="", readiness=false. Elapsed: 29.65048375s
May  9 14:40:47.754: INFO: Pod "exec-volume-test-dynamicpv-ckdd": Phase="Pending", Reason="", readiness=false. Elapsed: 31.760319825s
May  9 14:40:49.865: INFO: Pod "exec-volume-test-dynamicpv-ckdd": Phase="Pending", Reason="", readiness=false. Elapsed: 33.871771671s
May  9 14:40:51.975: INFO: Pod "exec-volume-test-dynamicpv-ckdd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.981436722s
STEP: Saw pod success
May  9 14:40:51.975: INFO: Pod "exec-volume-test-dynamicpv-ckdd" satisfied condition "Succeeded or Failed"
May  9 14:40:52.082: INFO: Trying to get logs from node k8s-agentpool1-35373899-vmss000000 pod exec-volume-test-dynamicpv-ckdd container exec-container-dynamicpv-ckdd: <nil>
STEP: delete the pod
May  9 14:40:52.342: INFO: Waiting for pod exec-volume-test-dynamicpv-ckdd to disappear
May  9 14:40:52.450: INFO: Pod exec-volume-test-dynamicpv-ckdd no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-ckdd
May  9 14:40:52.450: INFO: Deleting pod "exec-volume-test-dynamicpv-ckdd" in namespace "volume-741"
... skipping 39 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:50
    should allow exec of files on the volume
    test/e2e/storage/testsuites/volumes.go:198
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":37,"completed":3,"skipped":67,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:51
May  9 14:43:06.195: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping
... skipping 3 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: test.csi.azure.com]
test/e2e/storage/external/external.go:174
  [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:50
    should fail to use a volume in a pod with mismatched mode [Slow] [BeforeEach]
    test/e2e/storage/testsuites/volumemode.go:299

    Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping

    test/e2e/storage/external/external.go:262
------------------------------
... skipping 124 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:50
    (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
    test/e2e/storage/testsuites/fsgroupchangepolicy.go:216
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents","total":32,"completed":2,"skipped":307,"failed":0}

S
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] volumeMode 
  should fail to use a volume in a pod with mismatched mode [Slow]
  test/e2e/storage/testsuites/volumemode.go:299

[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
May  9 14:41:24.794: INFO: >>> kubeConfig: /root/tmp487086944/kubeconfig/kubeconfig.westeurope.json
STEP: Building a namespace api object, basename volumemode
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail to use a volume in a pod with mismatched mode [Slow]
  test/e2e/storage/testsuites/volumemode.go:299
May  9 14:41:25.546: INFO: Creating resource for dynamic PV
May  9 14:41:25.546: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(test.csi.azure.com) supported size:{ 1Mi} 
STEP: creating a StorageClass volumemode-4849-e2e-schnjtg
STEP: creating a claim
May  9 14:41:25.771: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comrp6wz] to have phase Bound
May  9 14:41:25.878: INFO: PersistentVolumeClaim test.csi.azure.comrp6wz found but phase is Pending instead of Bound.
May  9 14:41:27.986: INFO: PersistentVolumeClaim test.csi.azure.comrp6wz found but phase is Pending instead of Bound.
May  9 14:41:30.094: INFO: PersistentVolumeClaim test.csi.azure.comrp6wz found and phase=Bound (4.323202751s)
STEP: Creating pod
STEP: Waiting for the pod to fail
May  9 14:41:32.746: INFO: Deleting pod "pod-759ac1cc-651f-4e19-883e-d9d9c98d02a7" in namespace "volumemode-4849"
May  9 14:41:32.859: INFO: Wait up to 5m0s for pod "pod-759ac1cc-651f-4e19-883e-d9d9c98d02a7" to be fully deleted
STEP: Deleting pvc
May  9 14:41:35.077: INFO: Deleting PersistentVolumeClaim "test.csi.azure.comrp6wz"
May  9 14:41:35.194: INFO: Waiting up to 5m0s for PersistentVolume pvc-1f8b7677-548a-47ca-8dd5-3885e002d7ee to get deleted
May  9 14:41:35.301: INFO: PersistentVolume pvc-1f8b7677-548a-47ca-8dd5-3885e002d7ee found and phase=Released (107.328591ms)
... skipping 32 lines ...

• [SLOW TEST:144.054 seconds]
External Storage [Driver: test.csi.azure.com]
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (block volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:50
    should fail to use a volume in a pod with mismatched mode [Slow]
    test/e2e/storage/testsuites/volumemode.go:299
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]","total":37,"completed":3,"skipped":165,"failed":0}

SSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] 
  should concurrently access the single volume from pods on the same node
  test/e2e/storage/testsuites/multivolume.go:298
... skipping 151 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:50
    should concurrently access the single volume from pods on the same node
    test/e2e/storage/testsuites/multivolume.go:298
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node","total":43,"completed":3,"skipped":281,"failed":0}

SSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  
  should check snapshot fields, check restore correctly works, check deletion (ephemeral)
  test/e2e/storage/testsuites/snapshottable.go:177
... skipping 10 lines ...
[It] should check snapshot fields, check restore correctly works, check deletion (ephemeral)
  test/e2e/storage/testsuites/snapshottable.go:177
May  9 14:40:50.780: INFO: Creating resource for dynamic PV
May  9 14:40:50.780: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(test.csi.azure.com) supported size:{ 1Mi} 
STEP: creating a StorageClass snapshotting-5546-e2e-scs8vg4
STEP: [init] starting a pod to use the claim
May  9 14:40:50.998: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-df6w4" in namespace "snapshotting-5546" to be "Succeeded or Failed"
May  9 14:40:51.109: INFO: Pod "pvc-snapshottable-tester-df6w4": Phase="Pending", Reason="", readiness=false. Elapsed: 109.980418ms
May  9 14:40:53.217: INFO: Pod "pvc-snapshottable-tester-df6w4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218844016s
May  9 14:40:55.326: INFO: Pod "pvc-snapshottable-tester-df6w4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327822824s
May  9 14:40:57.435: INFO: Pod "pvc-snapshottable-tester-df6w4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.436762424s
May  9 14:40:59.545: INFO: Pod "pvc-snapshottable-tester-df6w4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.545981223s
May  9 14:41:01.652: INFO: Pod "pvc-snapshottable-tester-df6w4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.65377196s
... skipping 41 lines ...
May  9 14:42:30.252: INFO: Pod "pvc-snapshottable-tester-df6w4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m39.253216144s
May  9 14:42:32.361: INFO: Pod "pvc-snapshottable-tester-df6w4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m41.362145284s
May  9 14:42:34.469: INFO: Pod "pvc-snapshottable-tester-df6w4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m43.47092732s
May  9 14:42:36.581: INFO: Pod "pvc-snapshottable-tester-df6w4": Phase="Pending", Reason="", readiness=false. Elapsed: 1m45.582406957s
May  9 14:42:38.690: INFO: Pod "pvc-snapshottable-tester-df6w4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m47.691276241s
STEP: Saw pod success
May  9 14:42:38.690: INFO: Pod "pvc-snapshottable-tester-df6w4" satisfied condition "Succeeded or Failed"
STEP: [init] checking the claim
STEP: creating a SnapshotClass
STEP: creating a dynamic VolumeSnapshot
May  9 14:42:39.135: INFO: Waiting up to 5m0s for VolumeSnapshot snapshot-2rh78 to become ready
May  9 14:42:39.243: INFO: VolumeSnapshot snapshot-2rh78 found but is not ready.
May  9 14:42:41.352: INFO: VolumeSnapshot snapshot-2rh78 found but is not ready.
... skipping 40 lines ...
May  9 14:44:55.312: INFO: volumesnapshotcontents snapcontent-50b083ca-e03f-4e88-aae0-867b1fea011f has been found and is not deleted
May  9 14:44:56.420: INFO: volumesnapshotcontents snapcontent-50b083ca-e03f-4e88-aae0-867b1fea011f has been found and is not deleted
May  9 14:44:57.529: INFO: volumesnapshotcontents snapcontent-50b083ca-e03f-4e88-aae0-867b1fea011f has been found and is not deleted
May  9 14:44:58.637: INFO: volumesnapshotcontents snapcontent-50b083ca-e03f-4e88-aae0-867b1fea011f has been found and is not deleted
May  9 14:44:59.746: INFO: volumesnapshotcontents snapcontent-50b083ca-e03f-4e88-aae0-867b1fea011f has been found and is not deleted
May  9 14:45:00.854: INFO: volumesnapshotcontents snapcontent-50b083ca-e03f-4e88-aae0-867b1fea011f has been found and is not deleted
May  9 14:45:01.855: INFO: WaitUntil failed after reaching the timeout 30s
[AfterEach] volume snapshot controller
  test/e2e/storage/testsuites/snapshottable.go:172
May  9 14:45:01.997: INFO: Pod restored-pvc-tester-7dtxr has the following logs: 
May  9 14:45:01.998: INFO: Deleting pod "restored-pvc-tester-7dtxr" in namespace "snapshotting-5546"
May  9 14:45:02.109: INFO: Wait up to 5m0s for pod "restored-pvc-tester-7dtxr" to be fully deleted
May  9 14:45:34.326: INFO: deleting snapshot "snapshotting-5546"/"snapshot-2rh78"
... skipping 26 lines ...
    test/e2e/storage/testsuites/snapshottable.go:113
      
      test/e2e/storage/testsuites/snapshottable.go:176
        should check snapshot fields, check restore correctly works, check deletion (ephemeral)
        test/e2e/storage/testsuites/snapshottable.go:177
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  should check snapshot fields, check restore correctly works, check deletion (ephemeral)","total":30,"completed":3,"skipped":158,"failed":0}

SSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] 
  should access to two volumes with different volume mode and retain data across pod recreation on the same node
  test/e2e/storage/testsuites/multivolume.go:209
... skipping 207 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (ext4)] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:50
    should access to two volumes with different volume mode and retain data across pod recreation on the same node
    test/e2e/storage/testsuites/multivolume.go:209
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node","total":31,"completed":2,"skipped":103,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
May  9 14:45:51.098: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping
... skipping 3 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: test.csi.azure.com]
test/e2e/storage/external/external.go:174
  [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:50
    should fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach]
    test/e2e/storage/testsuites/subpath.go:242

    Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping

    test/e2e/storage/external/external.go:262
------------------------------
... skipping 50 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: test.csi.azure.com]
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  test/e2e/storage/framework/testsuite.go:50
    should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach]
    test/e2e/storage/testsuites/subpath.go:269

    Distro debian doesn't support ntfs -- skipping

    test/e2e/storage/framework/testsuite.go:127
------------------------------
... skipping 22 lines ...
May  9 14:43:07.182: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comf5x7d] to have phase Bound
May  9 14:43:07.291: INFO: PersistentVolumeClaim test.csi.azure.comf5x7d found but phase is Pending instead of Bound.
May  9 14:43:09.399: INFO: PersistentVolumeClaim test.csi.azure.comf5x7d found but phase is Pending instead of Bound.
May  9 14:43:11.509: INFO: PersistentVolumeClaim test.csi.azure.comf5x7d found and phase=Bound (4.326425295s)
STEP: [init] starting a pod to use the claim
STEP: [init] check pod success
May  9 14:43:11.943: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-fmpmd" in namespace "snapshotting-5911" to be "Succeeded or Failed"
May  9 14:43:12.051: INFO: Pod "pvc-snapshottable-tester-fmpmd": Phase="Pending", Reason="", readiness=false. Elapsed: 108.314403ms
May  9 14:43:14.160: INFO: Pod "pvc-snapshottable-tester-fmpmd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217408955s
May  9 14:43:16.269: INFO: Pod "pvc-snapshottable-tester-fmpmd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326347068s
May  9 14:43:18.378: INFO: Pod "pvc-snapshottable-tester-fmpmd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.435712273s
May  9 14:43:20.488: INFO: Pod "pvc-snapshottable-tester-fmpmd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.545483169s
May  9 14:43:22.596: INFO: Pod "pvc-snapshottable-tester-fmpmd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.653672684s
... skipping 25 lines ...
May  9 14:44:17.437: INFO: Pod "pvc-snapshottable-tester-fmpmd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m5.493763986s
May  9 14:44:19.546: INFO: Pod "pvc-snapshottable-tester-fmpmd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m7.602852621s
May  9 14:44:21.656: INFO: Pod "pvc-snapshottable-tester-fmpmd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m9.712936825s
May  9 14:44:23.767: INFO: Pod "pvc-snapshottable-tester-fmpmd": Phase="Pending", Reason="", readiness=false. Elapsed: 1m11.823971103s
May  9 14:44:25.875: INFO: Pod "pvc-snapshottable-tester-fmpmd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m13.932148442s
STEP: Saw pod success
May  9 14:44:25.875: INFO: Pod "pvc-snapshottable-tester-fmpmd" satisfied condition "Succeeded or Failed"
STEP: [init] checking the claim
May  9 14:44:25.983: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comf5x7d] to have phase Bound
May  9 14:44:26.091: INFO: PersistentVolumeClaim test.csi.azure.comf5x7d found and phase=Bound (107.834007ms)
STEP: [init] checking the PV
STEP: [init] deleting the pod
May  9 14:44:26.445: INFO: Pod pvc-snapshottable-tester-fmpmd has the following logs: 
... skipping 37 lines ...
May  9 14:44:43.791: INFO: WaitUntil finished successfully after 110.329645ms
STEP: getting the snapshot and snapshot content
STEP: checking the snapshot
STEP: checking the SnapshotContent
STEP: Modifying source data test
STEP: modifying the data in the source PVC
May  9 14:44:44.350: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-n9pnb" in namespace "snapshotting-5911" to be "Succeeded or Failed"
May  9 14:44:44.458: INFO: Pod "pvc-snapshottable-data-tester-n9pnb": Phase="Pending", Reason="", readiness=false. Elapsed: 108.496234ms
May  9 14:44:46.568: INFO: Pod "pvc-snapshottable-data-tester-n9pnb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217803255s
May  9 14:44:48.678: INFO: Pod "pvc-snapshottable-data-tester-n9pnb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327640628s
May  9 14:44:50.787: INFO: Pod "pvc-snapshottable-data-tester-n9pnb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.437303374s
May  9 14:44:52.896: INFO: Pod "pvc-snapshottable-data-tester-n9pnb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.545609217s
May  9 14:44:55.004: INFO: Pod "pvc-snapshottable-data-tester-n9pnb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.653616874s
... skipping 26 lines ...
May  9 14:45:51.969: INFO: Pod "pvc-snapshottable-data-tester-n9pnb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m7.619533992s
May  9 14:45:54.078: INFO: Pod "pvc-snapshottable-data-tester-n9pnb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m9.727995879s
May  9 14:45:56.190: INFO: Pod "pvc-snapshottable-data-tester-n9pnb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m11.839862593s
May  9 14:45:58.299: INFO: Pod "pvc-snapshottable-data-tester-n9pnb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m13.949534613s
May  9 14:46:00.408: INFO: Pod "pvc-snapshottable-data-tester-n9pnb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m16.058453611s
STEP: Saw pod success
May  9 14:46:00.408: INFO: Pod "pvc-snapshottable-data-tester-n9pnb" satisfied condition "Succeeded or Failed"
May  9 14:46:00.654: INFO: Pod pvc-snapshottable-data-tester-n9pnb has the following logs: 
May  9 14:46:00.654: INFO: Deleting pod "pvc-snapshottable-data-tester-n9pnb" in namespace "snapshotting-5911"
May  9 14:46:00.772: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-n9pnb" to be fully deleted
STEP: creating a pvc from the snapshot
STEP: starting a pod to use the snapshot
May  9 14:46:33.332: INFO: Running '/usr/local/bin/kubectl --server=https://kubetest-rxirza6l.westeurope.cloudapp.azure.com --kubeconfig=/root/tmp487086944/kubeconfig/kubeconfig.westeurope.json --namespace=snapshotting-5911 exec restored-pvc-tester-tcjr5 --namespace=snapshotting-5911 -- cat /mnt/test/data'
... skipping 47 lines ...
    test/e2e/storage/testsuites/snapshottable.go:113
      
      test/e2e/storage/testsuites/snapshottable.go:176
        should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
        test/e2e/storage/testsuites/snapshottable.go:278
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)","total":37,"completed":4,"skipped":79,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
May  9 14:47:19.524: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping
... skipping 24 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: test.csi.azure.com]
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  test/e2e/storage/framework/testsuite.go:50
    should fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach]
    test/e2e/storage/testsuites/subpath.go:258

    Distro debian doesn't support ntfs -- skipping

    test/e2e/storage/framework/testsuite.go:127
------------------------------
... skipping 166 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:50
    (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
    test/e2e/storage/testsuites/fsgroupchangepolicy.go:216
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents","total":37,"completed":4,"skipped":183,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:51
May  9 14:47:35.513: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping
... skipping 228 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (block volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:50
    should not mount / map unused volumes in a pod [LinuxOnly]
    test/e2e/storage/testsuites/volumemode.go:354
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":43,"completed":4,"skipped":290,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  test/e2e/storage/framework/testsuite.go:51
May  9 14:47:43.732: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 3 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: test.csi.azure.com]
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  test/e2e/storage/framework/testsuite.go:50
    should fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach]
    test/e2e/storage/testsuites/subpath.go:242

    Distro debian doesn't support ntfs -- skipping

    test/e2e/storage/framework/testsuite.go:127
------------------------------
... skipping 219 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (ext4)] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:50
    should access to two volumes with different volume mode and retain data across pod recreation on different node
    test/e2e/storage/testsuites/multivolume.go:248
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node","total":27,"completed":3,"skipped":82,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource]
  test/e2e/storage/framework/testsuite.go:51
May  9 14:47:45.447: INFO: Driver test.csi.azure.com doesn't specify snapshot stress test options -- skipping
... skipping 124 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (block volmode)] provisioning
  test/e2e/storage/framework/testsuite.go:50
    should provision storage with pvc data source
    test/e2e/storage/testsuites/provisioning.go:421
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source","total":32,"completed":3,"skipped":308,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] volumeIO 
  should write files of various sizes, verify size, validate content [Slow]
  test/e2e/storage/testsuites/volume_io.go:149
... skipping 52 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] volumeIO
  test/e2e/storage/framework/testsuite.go:50
    should write files of various sizes, verify size, validate content [Slow]
    test/e2e/storage/testsuites/volume_io.go:149
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]","total":37,"completed":5,"skipped":366,"failed":0}

SSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath 
  should support creating multiple subpath from same volumes [Slow]
  test/e2e/storage/testsuites/subpath.go:296
... skipping 17 lines ...
May  9 14:47:44.716: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.com54jkh] to have phase Bound
May  9 14:47:44.823: INFO: PersistentVolumeClaim test.csi.azure.com54jkh found but phase is Pending instead of Bound.
May  9 14:47:46.932: INFO: PersistentVolumeClaim test.csi.azure.com54jkh found but phase is Pending instead of Bound.
May  9 14:47:49.041: INFO: PersistentVolumeClaim test.csi.azure.com54jkh found and phase=Bound (4.324324199s)
STEP: Creating pod pod-subpath-test-dynamicpv-js84
STEP: Creating a pod to test multi_subpath
May  9 14:47:49.369: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-js84" in namespace "provisioning-8814" to be "Succeeded or Failed"
May  9 14:47:49.476: INFO: Pod "pod-subpath-test-dynamicpv-js84": Phase="Pending", Reason="", readiness=false. Elapsed: 107.394247ms
May  9 14:47:51.585: INFO: Pod "pod-subpath-test-dynamicpv-js84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216314539s
May  9 14:47:53.695: INFO: Pod "pod-subpath-test-dynamicpv-js84": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32597455s
May  9 14:47:55.802: INFO: Pod "pod-subpath-test-dynamicpv-js84": Phase="Pending", Reason="", readiness=false. Elapsed: 6.433469222s
May  9 14:47:57.910: INFO: Pod "pod-subpath-test-dynamicpv-js84": Phase="Pending", Reason="", readiness=false. Elapsed: 8.541822939s
May  9 14:48:00.019: INFO: Pod "pod-subpath-test-dynamicpv-js84": Phase="Pending", Reason="", readiness=false. Elapsed: 10.650175849s
... skipping 3 lines ...
May  9 14:48:08.458: INFO: Pod "pod-subpath-test-dynamicpv-js84": Phase="Pending", Reason="", readiness=false. Elapsed: 19.089108616s
May  9 14:48:10.568: INFO: Pod "pod-subpath-test-dynamicpv-js84": Phase="Pending", Reason="", readiness=false. Elapsed: 21.199295308s
May  9 14:48:12.678: INFO: Pod "pod-subpath-test-dynamicpv-js84": Phase="Pending", Reason="", readiness=false. Elapsed: 23.309189343s
May  9 14:48:14.786: INFO: Pod "pod-subpath-test-dynamicpv-js84": Phase="Pending", Reason="", readiness=false. Elapsed: 25.417753783s
May  9 14:48:16.895: INFO: Pod "pod-subpath-test-dynamicpv-js84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.526690946s
STEP: Saw pod success
May  9 14:48:16.895: INFO: Pod "pod-subpath-test-dynamicpv-js84" satisfied condition "Succeeded or Failed"
May  9 14:48:17.002: INFO: Trying to get logs from node k8s-agentpool1-35373899-vmss000002 pod pod-subpath-test-dynamicpv-js84 container test-container-subpath-dynamicpv-js84: <nil>
STEP: delete the pod
May  9 14:48:17.281: INFO: Waiting for pod pod-subpath-test-dynamicpv-js84 to disappear
May  9 14:48:17.388: INFO: Pod pod-subpath-test-dynamicpv-js84 no longer exists
STEP: Deleting pod
May  9 14:48:17.388: INFO: Deleting pod "pod-subpath-test-dynamicpv-js84" in namespace "provisioning-8814"
... skipping 27 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:50
    should support creating multiple subpath from same volumes [Slow]
    test/e2e/storage/testsuites/subpath.go:296
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]","total":43,"completed":5,"skipped":301,"failed":0}

SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource]
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource]
... skipping 168 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:50
    should concurrently access the single read-only volume from pods on the same node
    test/e2e/storage/testsuites/multivolume.go:423
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node","total":32,"completed":4,"skipped":375,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] 
  should access to two volumes with the same volume mode and retain data across pod recreation on the same node
  test/e2e/storage/testsuites/multivolume.go:138
... skipping 196 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:50
    should access to two volumes with the same volume mode and retain data across pod recreation on the same node
    test/e2e/storage/testsuites/multivolume.go:138
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node","total":27,"completed":4,"skipped":101,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:51
May  9 14:50:31.327: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping
... skipping 80 lines ...
May  9 14:49:31.841: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.combww6v] to have phase Bound
May  9 14:49:31.948: INFO: PersistentVolumeClaim test.csi.azure.combww6v found but phase is Pending instead of Bound.
May  9 14:49:34.056: INFO: PersistentVolumeClaim test.csi.azure.combww6v found but phase is Pending instead of Bound.
May  9 14:49:36.164: INFO: PersistentVolumeClaim test.csi.azure.combww6v found and phase=Bound (4.322665615s)
STEP: Creating pod exec-volume-test-dynamicpv-qh94
STEP: Creating a pod to test exec-volume-test
May  9 14:49:36.489: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-qh94" in namespace "volume-8513" to be "Succeeded or Failed"
May  9 14:49:36.597: INFO: Pod "exec-volume-test-dynamicpv-qh94": Phase="Pending", Reason="", readiness=false. Elapsed: 108.534861ms
May  9 14:49:38.715: INFO: Pod "exec-volume-test-dynamicpv-qh94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.226053886s
May  9 14:49:40.823: INFO: Pod "exec-volume-test-dynamicpv-qh94": Phase="Pending", Reason="", readiness=false. Elapsed: 4.334092066s
May  9 14:49:42.934: INFO: Pod "exec-volume-test-dynamicpv-qh94": Phase="Pending", Reason="", readiness=false. Elapsed: 6.445118139s
May  9 14:49:45.042: INFO: Pod "exec-volume-test-dynamicpv-qh94": Phase="Pending", Reason="", readiness=false. Elapsed: 8.553635038s
May  9 14:49:47.151: INFO: Pod "exec-volume-test-dynamicpv-qh94": Phase="Pending", Reason="", readiness=false. Elapsed: 10.662574275s
... skipping 32 lines ...
May  9 14:50:56.753: INFO: Pod "exec-volume-test-dynamicpv-qh94": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.2642931s
May  9 14:50:58.861: INFO: Pod "exec-volume-test-dynamicpv-qh94": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.371658839s
May  9 14:51:00.968: INFO: Pod "exec-volume-test-dynamicpv-qh94": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.479641157s
May  9 14:51:03.078: INFO: Pod "exec-volume-test-dynamicpv-qh94": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.589119446s
May  9 14:51:05.186: INFO: Pod "exec-volume-test-dynamicpv-qh94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m28.69672555s
STEP: Saw pod success
May  9 14:51:05.186: INFO: Pod "exec-volume-test-dynamicpv-qh94" satisfied condition "Succeeded or Failed"
May  9 14:51:05.293: INFO: Trying to get logs from node k8s-agentpool1-35373899-vmss000000 pod exec-volume-test-dynamicpv-qh94 container exec-container-dynamicpv-qh94: <nil>
STEP: delete the pod
May  9 14:51:05.556: INFO: Waiting for pod exec-volume-test-dynamicpv-qh94 to disappear
May  9 14:51:05.663: INFO: Pod exec-volume-test-dynamicpv-qh94 no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-qh94
May  9 14:51:05.663: INFO: Deleting pod "exec-volume-test-dynamicpv-qh94" in namespace "volume-8513"
... skipping 27 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (xfs)][Slow] volumes
  test/e2e/storage/framework/testsuite.go:50
    should allow exec of files on the volume
    test/e2e/storage/testsuites/volumes.go:198
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume","total":43,"completed":6,"skipped":397,"failed":0}

SSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] 
  should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
  test/e2e/storage/testsuites/multivolume.go:378
... skipping 98 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:50
    should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
    test/e2e/storage/testsuites/multivolume.go:378
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]","total":37,"completed":6,"skipped":376,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] volumes 
  should store data
  test/e2e/storage/testsuites/volumes.go:161
... skipping 108 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:50
    should store data
    test/e2e/storage/testsuites/volumes.go:161
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] volumes should store data","total":32,"completed":5,"skipped":421,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
May  9 14:52:46.474: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping
... skipping 80 lines ...
May  9 14:45:52.222: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comlhsgv] to have phase Bound
May  9 14:45:52.332: INFO: PersistentVolumeClaim test.csi.azure.comlhsgv found but phase is Pending instead of Bound.
May  9 14:45:54.443: INFO: PersistentVolumeClaim test.csi.azure.comlhsgv found but phase is Pending instead of Bound.
May  9 14:45:56.552: INFO: PersistentVolumeClaim test.csi.azure.comlhsgv found and phase=Bound (4.329610863s)
STEP: [init] starting a pod to use the claim
STEP: [init] check pod success
May  9 14:45:56.988: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-7fvmq" in namespace "snapshotting-13" to be "Succeeded or Failed"
May  9 14:45:57.097: INFO: Pod "pvc-snapshottable-tester-7fvmq": Phase="Pending", Reason="", readiness=false. Elapsed: 108.94695ms
May  9 14:45:59.208: INFO: Pod "pvc-snapshottable-tester-7fvmq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220263979s
May  9 14:46:01.318: INFO: Pod "pvc-snapshottable-tester-7fvmq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329776459s
May  9 14:46:03.429: INFO: Pod "pvc-snapshottable-tester-7fvmq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.441361195s
May  9 14:46:05.540: INFO: Pod "pvc-snapshottable-tester-7fvmq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.552285062s
May  9 14:46:07.651: INFO: Pod "pvc-snapshottable-tester-7fvmq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.66341349s
... skipping 9 lines ...
May  9 14:46:28.749: INFO: Pod "pvc-snapshottable-tester-7fvmq": Phase="Pending", Reason="", readiness=false. Elapsed: 31.760845353s
May  9 14:46:30.859: INFO: Pod "pvc-snapshottable-tester-7fvmq": Phase="Pending", Reason="", readiness=false. Elapsed: 33.870540664s
May  9 14:46:32.975: INFO: Pod "pvc-snapshottable-tester-7fvmq": Phase="Pending", Reason="", readiness=false. Elapsed: 35.98702852s
May  9 14:46:35.086: INFO: Pod "pvc-snapshottable-tester-7fvmq": Phase="Pending", Reason="", readiness=false. Elapsed: 38.098139268s
May  9 14:46:37.197: INFO: Pod "pvc-snapshottable-tester-7fvmq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.208524064s
STEP: Saw pod success
May  9 14:46:37.197: INFO: Pod "pvc-snapshottable-tester-7fvmq" satisfied condition "Succeeded or Failed"
STEP: [init] checking the claim
May  9 14:46:37.305: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comlhsgv] to have phase Bound
May  9 14:46:37.423: INFO: PersistentVolumeClaim test.csi.azure.comlhsgv found and phase=Bound (117.056701ms)
STEP: [init] checking the PV
STEP: [init] deleting the pod
May  9 14:46:37.916: INFO: Pod pvc-snapshottable-tester-7fvmq has the following logs: 
... skipping 14 lines ...
May  9 14:46:47.595: INFO: received snapshotStatus map[boundVolumeSnapshotContentName:snapcontent-486f2ea1-9ffc-4981-a9bc-86489447a939 creationTime:2022-05-09T14:46:43Z readyToUse:true restoreSize:5Gi]
May  9 14:46:47.595: INFO: snapshotContentName snapcontent-486f2ea1-9ffc-4981-a9bc-86489447a939
STEP: checking the snapshot
STEP: checking the SnapshotContent
STEP: Modifying source data test
STEP: modifying the data in the source PVC
May  9 14:46:48.033: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-wgwm6" in namespace "snapshotting-13" to be "Succeeded or Failed"
May  9 14:46:48.143: INFO: Pod "pvc-snapshottable-data-tester-wgwm6": Phase="Pending", Reason="", readiness=false. Elapsed: 109.343179ms
May  9 14:46:50.253: INFO: Pod "pvc-snapshottable-data-tester-wgwm6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219860191s
May  9 14:46:52.364: INFO: Pod "pvc-snapshottable-data-tester-wgwm6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330254801s
May  9 14:46:54.473: INFO: Pod "pvc-snapshottable-data-tester-wgwm6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.439061216s
May  9 14:46:56.584: INFO: Pod "pvc-snapshottable-data-tester-wgwm6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.550478942s
May  9 14:46:58.694: INFO: Pod "pvc-snapshottable-data-tester-wgwm6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.660034368s
... skipping 38 lines ...
May  9 14:48:20.994: INFO: Pod "pvc-snapshottable-data-tester-wgwm6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.960910955s
May  9 14:48:23.105: INFO: Pod "pvc-snapshottable-data-tester-wgwm6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m35.071653486s
May  9 14:48:25.216: INFO: Pod "pvc-snapshottable-data-tester-wgwm6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m37.1826363s
May  9 14:48:27.326: INFO: Pod "pvc-snapshottable-data-tester-wgwm6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m39.292573924s
May  9 14:48:29.441: INFO: Pod "pvc-snapshottable-data-tester-wgwm6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m41.407806341s
STEP: Saw pod success
May  9 14:48:29.441: INFO: Pod "pvc-snapshottable-data-tester-wgwm6" satisfied condition "Succeeded or Failed"
May  9 14:48:29.695: INFO: Pod pvc-snapshottable-data-tester-wgwm6 has the following logs: 
May  9 14:48:29.695: INFO: Deleting pod "pvc-snapshottable-data-tester-wgwm6" in namespace "snapshotting-13"
May  9 14:48:29.812: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-wgwm6" to be fully deleted
STEP: creating a pvc from the snapshot
STEP: starting a pod to use the snapshot
May  9 14:52:24.366: INFO: Running '/usr/local/bin/kubectl --server=https://kubetest-rxirza6l.westeurope.cloudapp.azure.com --kubeconfig=/root/tmp487086944/kubeconfig/kubeconfig.westeurope.json --namespace=snapshotting-13 exec restored-pvc-tester-x94sj --namespace=snapshotting-13 -- cat /mnt/test/data'
... skipping 33 lines ...
May  9 14:52:55.666: INFO: volumesnapshotcontents snapcontent-486f2ea1-9ffc-4981-a9bc-86489447a939 has been found and is not deleted
May  9 14:52:56.776: INFO: volumesnapshotcontents snapcontent-486f2ea1-9ffc-4981-a9bc-86489447a939 has been found and is not deleted
May  9 14:52:57.886: INFO: volumesnapshotcontents snapcontent-486f2ea1-9ffc-4981-a9bc-86489447a939 has been found and is not deleted
May  9 14:52:58.995: INFO: volumesnapshotcontents snapcontent-486f2ea1-9ffc-4981-a9bc-86489447a939 has been found and is not deleted
May  9 14:53:00.106: INFO: volumesnapshotcontents snapcontent-486f2ea1-9ffc-4981-a9bc-86489447a939 has been found and is not deleted
May  9 14:53:01.215: INFO: volumesnapshotcontents snapcontent-486f2ea1-9ffc-4981-a9bc-86489447a939 has been found and is not deleted
May  9 14:53:02.215: INFO: WaitUntil failed after reaching the timeout 30s
[AfterEach] volume snapshot controller
  test/e2e/storage/testsuites/snapshottable.go:172
May  9 14:53:02.396: INFO: Error getting logs for pod restored-pvc-tester-x94sj: the server could not find the requested resource (get pods restored-pvc-tester-x94sj)
May  9 14:53:02.396: INFO: Deleting pod "restored-pvc-tester-x94sj" in namespace "snapshotting-13"
May  9 14:53:02.505: INFO: deleting claim "snapshotting-13"/"pvc-k7nqv"
May  9 14:53:02.613: INFO: deleting snapshot "snapshotting-13"/"snapshot-4m7zz"
May  9 14:53:02.722: INFO: deleting snapshot content "snapcontent-486f2ea1-9ffc-4981-a9bc-86489447a939"
May  9 14:53:03.064: INFO: Waiting up to 5m0s for volumesnapshotcontents snapcontent-486f2ea1-9ffc-4981-a9bc-86489447a939 to be deleted
May  9 14:53:03.174: INFO: volumesnapshotcontents snapcontent-486f2ea1-9ffc-4981-a9bc-86489447a939 has been found and is not deleted
... skipping 27 lines ...
    test/e2e/storage/testsuites/snapshottable.go:113
      
      test/e2e/storage/testsuites/snapshottable.go:176
        should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
        test/e2e/storage/testsuites/snapshottable.go:278
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)","total":31,"completed":3,"skipped":320,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
May  9 14:53:20.726: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:188

... skipping 382 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/storage/framework/testsuite.go:50
    should provision storage with pvc data source in parallel [Slow]
    test/e2e/storage/testsuites/provisioning.go:459
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]","total":37,"completed":5,"skipped":346,"failed":0}

SSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath 
  should support existing directories when readOnly specified in the volumeSource
  test/e2e/storage/testsuites/subpath.go:397
... skipping 17 lines ...
May  9 14:52:19.033: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.coms95j4] to have phase Bound
May  9 14:52:19.140: INFO: PersistentVolumeClaim test.csi.azure.coms95j4 found but phase is Pending instead of Bound.
May  9 14:52:21.249: INFO: PersistentVolumeClaim test.csi.azure.coms95j4 found but phase is Pending instead of Bound.
May  9 14:52:23.358: INFO: PersistentVolumeClaim test.csi.azure.coms95j4 found and phase=Bound (4.324927815s)
STEP: Creating pod pod-subpath-test-dynamicpv-498q
STEP: Creating a pod to test subpath
May  9 14:52:23.682: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-498q" in namespace "provisioning-7535" to be "Succeeded or Failed"
May  9 14:52:23.801: INFO: Pod "pod-subpath-test-dynamicpv-498q": Phase="Pending", Reason="", readiness=false. Elapsed: 118.884503ms
May  9 14:52:25.910: INFO: Pod "pod-subpath-test-dynamicpv-498q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.227885582s
May  9 14:52:28.019: INFO: Pod "pod-subpath-test-dynamicpv-498q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.336663547s
May  9 14:52:30.127: INFO: Pod "pod-subpath-test-dynamicpv-498q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.445314008s
May  9 14:52:32.236: INFO: Pod "pod-subpath-test-dynamicpv-498q": Phase="Pending", Reason="", readiness=false. Elapsed: 8.553770497s
May  9 14:52:34.344: INFO: Pod "pod-subpath-test-dynamicpv-498q": Phase="Pending", Reason="", readiness=false. Elapsed: 10.661797161s
... skipping 3 lines ...
May  9 14:52:42.780: INFO: Pod "pod-subpath-test-dynamicpv-498q": Phase="Pending", Reason="", readiness=false. Elapsed: 19.098513053s
May  9 14:52:44.891: INFO: Pod "pod-subpath-test-dynamicpv-498q": Phase="Pending", Reason="", readiness=false. Elapsed: 21.208661616s
May  9 14:52:46.999: INFO: Pod "pod-subpath-test-dynamicpv-498q": Phase="Pending", Reason="", readiness=false. Elapsed: 23.316971512s
May  9 14:52:49.108: INFO: Pod "pod-subpath-test-dynamicpv-498q": Phase="Pending", Reason="", readiness=false. Elapsed: 25.426231316s
May  9 14:52:51.217: INFO: Pod "pod-subpath-test-dynamicpv-498q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.535133687s
STEP: Saw pod success
May  9 14:52:51.217: INFO: Pod "pod-subpath-test-dynamicpv-498q" satisfied condition "Succeeded or Failed"
May  9 14:52:51.325: INFO: Trying to get logs from node k8s-agentpool1-35373899-vmss000001 pod pod-subpath-test-dynamicpv-498q container test-container-subpath-dynamicpv-498q: <nil>
STEP: delete the pod
May  9 14:52:51.600: INFO: Waiting for pod pod-subpath-test-dynamicpv-498q to disappear
May  9 14:52:51.707: INFO: Pod pod-subpath-test-dynamicpv-498q no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-498q
May  9 14:52:51.707: INFO: Deleting pod "pod-subpath-test-dynamicpv-498q" in namespace "provisioning-7535"
STEP: Creating pod pod-subpath-test-dynamicpv-498q
STEP: Creating a pod to test subpath
May  9 14:52:51.934: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-498q" in namespace "provisioning-7535" to be "Succeeded or Failed"
May  9 14:52:52.042: INFO: Pod "pod-subpath-test-dynamicpv-498q": Phase="Pending", Reason="", readiness=false. Elapsed: 108.122707ms
May  9 14:52:54.151: INFO: Pod "pod-subpath-test-dynamicpv-498q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.216607233s
STEP: Saw pod success
May  9 14:52:54.151: INFO: Pod "pod-subpath-test-dynamicpv-498q" satisfied condition "Succeeded or Failed"
May  9 14:52:54.259: INFO: Trying to get logs from node k8s-agentpool1-35373899-vmss000001 pod pod-subpath-test-dynamicpv-498q container test-container-subpath-dynamicpv-498q: <nil>
STEP: delete the pod
May  9 14:52:54.488: INFO: Waiting for pod pod-subpath-test-dynamicpv-498q to disappear
May  9 14:52:54.596: INFO: Pod pod-subpath-test-dynamicpv-498q no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-498q
May  9 14:52:54.596: INFO: Deleting pod "pod-subpath-test-dynamicpv-498q" in namespace "provisioning-7535"
... skipping 29 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:50
    should support existing directories when readOnly specified in the volumeSource
    test/e2e/storage/testsuites/subpath.go:397
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":43,"completed":7,"skipped":400,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes
  test/e2e/storage/framework/testsuite.go:51
May  9 14:54:07.126: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 126 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:50
    should support restarting containers using directory as subpath [Slow]
    test/e2e/storage/testsuites/subpath.go:322
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]","total":27,"completed":5,"skipped":332,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  test/e2e/storage/framework/testsuite.go:51
May  9 14:54:12.647: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 3 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: test.csi.azure.com]
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  test/e2e/storage/framework/testsuite.go:50
    should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach]
    test/e2e/storage/testsuites/subpath.go:280

    Distro debian doesn't support ntfs -- skipping

    test/e2e/storage/framework/testsuite.go:127
------------------------------
... skipping 130 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  test/e2e/storage/framework/testsuite.go:50
    should support volume limits [Serial]
    test/e2e/storage/testsuites/volumelimits.go:127
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]","total":30,"completed":4,"skipped":165,"failed":0}

SSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits 
  should verify that all csinodes have volume limits
  test/e2e/storage/testsuites/volumelimits.go:249
... skipping 16 lines ...
  test/e2e/framework/framework.go:188
May  9 14:54:53.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volumelimits-1134" for this suite.

•
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits","total":30,"completed":5,"skipped":171,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode 
  should not mount / map unused volumes in a pod [LinuxOnly]
  test/e2e/storage/testsuites/volumemode.go:354
... skipping 81 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:50
    should not mount / map unused volumes in a pod [LinuxOnly]
    test/e2e/storage/testsuites/volumemode.go:354
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":37,"completed":7,"skipped":438,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] 
  should access to two volumes with the same volume mode and retain data across pod recreation on the same node
  test/e2e/storage/testsuites/multivolume.go:138
... skipping 188 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:50
    should access to two volumes with the same volume mode and retain data across pod recreation on the same node
    test/e2e/storage/testsuites/multivolume.go:138
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node","total":32,"completed":6,"skipped":550,"failed":0}

SSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] 
  should concurrently access the single read-only volume from pods on the same node
  test/e2e/storage/testsuites/multivolume.go:423
... skipping 82 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:50
    should concurrently access the single read-only volume from pods on the same node
    test/e2e/storage/testsuites/multivolume.go:423
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node","total":31,"completed":4,"skipped":348,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
... skipping 84 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:50
    should create read-only inline ephemeral volume
    test/e2e/storage/testsuites/ephemeral.go:175
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume","total":37,"completed":6,"skipped":349,"failed":0}

SSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy 
  (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
  test/e2e/storage/testsuites/fsgroupchangepolicy.go:216
... skipping 113 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:50
    (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
    test/e2e/storage/testsuites/fsgroupchangepolicy.go:216
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents","total":30,"completed":6,"skipped":216,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  test/e2e/storage/framework/testsuite.go:51
May  9 14:56:25.215: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 80 lines ...
May  9 14:54:08.196: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.com7z2v8] to have phase Bound
May  9 14:54:08.303: INFO: PersistentVolumeClaim test.csi.azure.com7z2v8 found but phase is Pending instead of Bound.
May  9 14:54:10.411: INFO: PersistentVolumeClaim test.csi.azure.com7z2v8 found but phase is Pending instead of Bound.
May  9 14:54:12.521: INFO: PersistentVolumeClaim test.csi.azure.com7z2v8 found and phase=Bound (4.324861283s)
STEP: [init] starting a pod to use the claim
STEP: [init] check pod success
May  9 14:54:12.952: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-svnpx" in namespace "snapshotting-6054" to be "Succeeded or Failed"
May  9 14:54:13.059: INFO: Pod "pvc-snapshottable-tester-svnpx": Phase="Pending", Reason="", readiness=false. Elapsed: 106.852655ms
May  9 14:54:15.167: INFO: Pod "pvc-snapshottable-tester-svnpx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214890202s
May  9 14:54:17.276: INFO: Pod "pvc-snapshottable-tester-svnpx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.3242061s
May  9 14:54:19.386: INFO: Pod "pvc-snapshottable-tester-svnpx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.434189679s
May  9 14:54:21.494: INFO: Pod "pvc-snapshottable-tester-svnpx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.542755067s
May  9 14:54:23.603: INFO: Pod "pvc-snapshottable-tester-svnpx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.651302796s
... skipping 5 lines ...
May  9 14:54:36.258: INFO: Pod "pvc-snapshottable-tester-svnpx": Phase="Pending", Reason="", readiness=false. Elapsed: 23.306270408s
May  9 14:54:38.366: INFO: Pod "pvc-snapshottable-tester-svnpx": Phase="Pending", Reason="", readiness=false. Elapsed: 25.414121085s
May  9 14:54:40.474: INFO: Pod "pvc-snapshottable-tester-svnpx": Phase="Pending", Reason="", readiness=false. Elapsed: 27.522618183s
May  9 14:54:42.583: INFO: Pod "pvc-snapshottable-tester-svnpx": Phase="Pending", Reason="", readiness=false. Elapsed: 29.631213459s
May  9 14:54:44.691: INFO: Pod "pvc-snapshottable-tester-svnpx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.739474697s
STEP: Saw pod success
May  9 14:54:44.691: INFO: Pod "pvc-snapshottable-tester-svnpx" satisfied condition "Succeeded or Failed"
STEP: [init] checking the claim
May  9 14:54:44.799: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.com7z2v8] to have phase Bound
May  9 14:54:44.930: INFO: PersistentVolumeClaim test.csi.azure.com7z2v8 found and phase=Bound (130.241428ms)
STEP: [init] checking the PV
STEP: [init] deleting the pod
May  9 14:54:45.317: INFO: Pod pvc-snapshottable-tester-svnpx has the following logs: 
... skipping 15 lines ...
May  9 14:54:57.158: INFO: received snapshotStatus map[boundVolumeSnapshotContentName:snapcontent-e16da97a-04c7-4c35-9f4f-7074712f09ca creationTime:2022-05-09T14:54:52Z readyToUse:true restoreSize:5Gi]
May  9 14:54:57.158: INFO: snapshotContentName snapcontent-e16da97a-04c7-4c35-9f4f-7074712f09ca
STEP: checking the snapshot
STEP: checking the SnapshotContent
STEP: Modifying source data test
STEP: modifying the data in the source PVC
May  9 14:54:57.594: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-4rxk5" in namespace "snapshotting-6054" to be "Succeeded or Failed"
May  9 14:54:57.701: INFO: Pod "pvc-snapshottable-data-tester-4rxk5": Phase="Pending", Reason="", readiness=false. Elapsed: 107.110905ms
May  9 14:54:59.811: INFO: Pod "pvc-snapshottable-data-tester-4rxk5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216740029s
May  9 14:55:01.920: INFO: Pod "pvc-snapshottable-data-tester-4rxk5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.325686755s
May  9 14:55:04.028: INFO: Pod "pvc-snapshottable-data-tester-4rxk5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.433777842s
May  9 14:55:06.136: INFO: Pod "pvc-snapshottable-data-tester-4rxk5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.541542575s
May  9 14:55:08.247: INFO: Pod "pvc-snapshottable-data-tester-4rxk5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.652983505s
... skipping 18 lines ...
May  9 14:55:48.312: INFO: Pod "pvc-snapshottable-data-tester-4rxk5": Phase="Pending", Reason="", readiness=false. Elapsed: 50.717895448s
May  9 14:55:50.421: INFO: Pod "pvc-snapshottable-data-tester-4rxk5": Phase="Pending", Reason="", readiness=false. Elapsed: 52.826825006s
May  9 14:55:52.531: INFO: Pod "pvc-snapshottable-data-tester-4rxk5": Phase="Pending", Reason="", readiness=false. Elapsed: 54.936879569s
May  9 14:55:54.640: INFO: Pod "pvc-snapshottable-data-tester-4rxk5": Phase="Pending", Reason="", readiness=false. Elapsed: 57.045994215s
May  9 14:55:56.749: INFO: Pod "pvc-snapshottable-data-tester-4rxk5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 59.155165508s
STEP: Saw pod success
May  9 14:55:56.749: INFO: Pod "pvc-snapshottable-data-tester-4rxk5" satisfied condition "Succeeded or Failed"
May  9 14:55:56.994: INFO: Pod pvc-snapshottable-data-tester-4rxk5 has the following logs: 
May  9 14:55:56.994: INFO: Deleting pod "pvc-snapshottable-data-tester-4rxk5" in namespace "snapshotting-6054"
May  9 14:55:57.112: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-4rxk5" to be fully deleted
STEP: creating a pvc from the snapshot
STEP: starting a pod to use the snapshot
May  9 14:56:17.658: INFO: Running '/usr/local/bin/kubectl --server=https://kubetest-rxirza6l.westeurope.cloudapp.azure.com --kubeconfig=/root/tmp487086944/kubeconfig/kubeconfig.westeurope.json --namespace=snapshotting-6054 exec restored-pvc-tester-hhqt6 --namespace=snapshotting-6054 -- cat /mnt/test/data'
... skipping 47 lines ...
    test/e2e/storage/testsuites/snapshottable.go:113
      
      test/e2e/storage/testsuites/snapshottable.go:176
        should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
        test/e2e/storage/testsuites/snapshottable.go:278
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)","total":43,"completed":8,"skipped":501,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:51
May  9 14:57:02.964: INFO: Distro debian doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow]
  test/e2e/framework/framework.go:188

... skipping 52 lines ...

    test/e2e/storage/framework/testsuite.go:127
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode 
  should fail to use a volume in a pod with mismatched mode [Slow]
  test/e2e/storage/testsuites/volumemode.go:299

[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
May  9 14:54:54.946: INFO: >>> kubeConfig: /root/tmp487086944/kubeconfig/kubeconfig.westeurope.json
STEP: Building a namespace api object, basename volumemode
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail to use a volume in a pod with mismatched mode [Slow]
  test/e2e/storage/testsuites/volumemode.go:299
May  9 14:54:55.697: INFO: Creating resource for dynamic PV
May  9 14:54:55.697: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(test.csi.azure.com) supported size:{ 1Mi} 
STEP: creating a StorageClass volumemode-4619-e2e-scnq8pc
STEP: creating a claim
May  9 14:54:55.917: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.com7f7b5] to have phase Bound
May  9 14:54:56.024: INFO: PersistentVolumeClaim test.csi.azure.com7f7b5 found but phase is Pending instead of Bound.
May  9 14:54:58.133: INFO: PersistentVolumeClaim test.csi.azure.com7f7b5 found but phase is Pending instead of Bound.
May  9 14:55:00.241: INFO: PersistentVolumeClaim test.csi.azure.com7f7b5 found and phase=Bound (4.324241842s)
STEP: Creating pod
STEP: Waiting for the pod to fail
May  9 14:55:02.892: INFO: Deleting pod "pod-5b2829f5-51ff-4783-a75a-32b9d6a396b9" in namespace "volumemode-4619"
May  9 14:55:03.002: INFO: Wait up to 5m0s for pod "pod-5b2829f5-51ff-4783-a75a-32b9d6a396b9" to be fully deleted
STEP: Deleting pvc
May  9 14:55:05.219: INFO: Deleting PersistentVolumeClaim "test.csi.azure.com7f7b5"
May  9 14:55:05.330: INFO: Waiting up to 5m0s for PersistentVolume pvc-74107c8c-ecde-4049-95a5-fbaaba36c7a2 to get deleted
May  9 14:55:05.438: INFO: PersistentVolume pvc-74107c8c-ecde-4049-95a5-fbaaba36c7a2 found and phase=Released (107.955353ms)
... skipping 32 lines ...

• [SLOW TEST:143.929 seconds]
External Storage [Driver: test.csi.azure.com]
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  test/e2e/storage/framework/testsuite.go:50
    should fail to use a volume in a pod with mismatched mode [Slow]
    test/e2e/storage/testsuites/volumemode.go:299
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]","total":37,"completed":8,"skipped":477,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:51
May  9 14:57:18.944: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping
... skipping 212 lines ...
May  9 14:54:13.536: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
May  9 14:54:13.651: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.com2k58d] to have phase Bound
May  9 14:54:13.760: INFO: PersistentVolumeClaim test.csi.azure.com2k58d found but phase is Pending instead of Bound.
May  9 14:54:15.869: INFO: PersistentVolumeClaim test.csi.azure.com2k58d found but phase is Pending instead of Bound.
May  9 14:54:17.980: INFO: PersistentVolumeClaim test.csi.azure.com2k58d found and phase=Bound (4.328615681s)
STEP: Creating pod to format volume volume-prep-provisioning-2785
May  9 14:54:18.310: INFO: Waiting up to 5m0s for pod "volume-prep-provisioning-2785" in namespace "provisioning-2785" to be "Succeeded or Failed"
May  9 14:54:18.418: INFO: Pod "volume-prep-provisioning-2785": Phase="Pending", Reason="", readiness=false. Elapsed: 108.518388ms
May  9 14:54:20.528: INFO: Pod "volume-prep-provisioning-2785": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218056297s
May  9 14:54:22.638: INFO: Pod "volume-prep-provisioning-2785": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327895141s
May  9 14:54:24.748: INFO: Pod "volume-prep-provisioning-2785": Phase="Pending", Reason="", readiness=false. Elapsed: 6.437698813s
May  9 14:54:26.859: INFO: Pod "volume-prep-provisioning-2785": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548725659s
May  9 14:54:28.970: INFO: Pod "volume-prep-provisioning-2785": Phase="Pending", Reason="", readiness=false. Elapsed: 10.659579024s
... skipping 8 lines ...
May  9 14:54:47.965: INFO: Pod "volume-prep-provisioning-2785": Phase="Pending", Reason="", readiness=false. Elapsed: 29.655101948s
May  9 14:54:50.081: INFO: Pod "volume-prep-provisioning-2785": Phase="Pending", Reason="", readiness=false. Elapsed: 31.770880795s
May  9 14:54:52.192: INFO: Pod "volume-prep-provisioning-2785": Phase="Pending", Reason="", readiness=false. Elapsed: 33.881635362s
May  9 14:54:54.301: INFO: Pod "volume-prep-provisioning-2785": Phase="Pending", Reason="", readiness=false. Elapsed: 35.990962225s
May  9 14:54:56.411: INFO: Pod "volume-prep-provisioning-2785": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.100892639s
STEP: Saw pod success
May  9 14:54:56.411: INFO: Pod "volume-prep-provisioning-2785" satisfied condition "Succeeded or Failed"
May  9 14:54:56.411: INFO: Deleting pod "volume-prep-provisioning-2785" in namespace "provisioning-2785"
May  9 14:54:56.535: INFO: Wait up to 5m0s for pod "volume-prep-provisioning-2785" to be fully deleted
STEP: Creating pod pod-subpath-test-dynamicpv-4hxh
STEP: Checking for subpath error in container status
May  9 14:56:18.978: INFO: Deleting pod "pod-subpath-test-dynamicpv-4hxh" in namespace "provisioning-2785"
May  9 14:56:19.094: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-4hxh" to be fully deleted
STEP: Deleting pod
May  9 14:56:19.202: INFO: Deleting pod "pod-subpath-test-dynamicpv-4hxh" in namespace "provisioning-2785"
STEP: Deleting pvc
May  9 14:56:19.312: INFO: Deleting PersistentVolumeClaim "test.csi.azure.com2k58d"
... skipping 37 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:50
    should verify container cannot write to subpath readonly volumes [Slow]
    test/e2e/storage/testsuites/subpath.go:425
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]","total":27,"completed":6,"skipped":430,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:51
May  9 14:58:33.003: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping
... skipping 108 lines ...

    test/e2e/storage/framework/testsuite.go:127
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath 
  should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
  test/e2e/storage/testsuites/subpath.go:280

[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
May  9 14:57:03.073: INFO: >>> kubeConfig: /root/tmp487086944/kubeconfig/kubeconfig.westeurope.json
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
  test/e2e/storage/testsuites/subpath.go:280
May  9 14:57:03.829: INFO: Creating resource for dynamic PV
May  9 14:57:03.829: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(test.csi.azure.com) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-6628-e2e-sc9sqqz
STEP: creating a claim
May  9 14:57:03.944: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
May  9 14:57:04.055: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.com4h5fv] to have phase Bound
May  9 14:57:04.162: INFO: PersistentVolumeClaim test.csi.azure.com4h5fv found but phase is Pending instead of Bound.
May  9 14:57:06.271: INFO: PersistentVolumeClaim test.csi.azure.com4h5fv found but phase is Pending instead of Bound.
May  9 14:57:08.382: INFO: PersistentVolumeClaim test.csi.azure.com4h5fv found and phase=Bound (4.32724659s)
STEP: Creating pod pod-subpath-test-dynamicpv-ptzn
STEP: Checking for subpath error in container status
May  9 14:58:20.933: INFO: Deleting pod "pod-subpath-test-dynamicpv-ptzn" in namespace "provisioning-6628"
May  9 14:58:21.043: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-ptzn" to be fully deleted
STEP: Deleting pod
May  9 14:58:23.260: INFO: Deleting pod "pod-subpath-test-dynamicpv-ptzn" in namespace "provisioning-6628"
STEP: Deleting pvc
May  9 14:58:23.368: INFO: Deleting PersistentVolumeClaim "test.csi.azure.com4h5fv"
... skipping 22 lines ...

• [SLOW TEST:152.572 seconds]
External Storage [Driver: test.csi.azure.com]
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:50
    should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
    test/e2e/storage/testsuites/subpath.go:280
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]","total":43,"completed":9,"skipped":571,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:51
May  9 14:59:35.687: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping
... skipping 105 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:50
    should support multiple inline ephemeral volumes
    test/e2e/storage/testsuites/ephemeral.go:254
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":31,"completed":5,"skipped":355,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:51
May  9 14:59:42.265: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping
... skipping 237 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/storage/framework/testsuite.go:50
    should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
    test/e2e/storage/testsuites/provisioning.go:208
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]","total":30,"completed":7,"skipped":372,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow]
... skipping 354 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:50
    should access to two volumes with different volume mode and retain data across pod recreation on different node
    test/e2e/storage/testsuites/multivolume.go:248
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node","total":37,"completed":7,"skipped":360,"failed":0}

SSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] 
  should concurrently access the single volume from pods on the same node
  test/e2e/storage/testsuites/multivolume.go:298
... skipping 154 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (ext4)] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:50
    should concurrently access the single volume from pods on the same node
    test/e2e/storage/testsuites/multivolume.go:298
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node","total":31,"completed":6,"skipped":505,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand
  test/e2e/storage/framework/testsuite.go:51
May  9 15:02:12.697: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 328 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:50
    should access to two volumes with the same volume mode and retain data across pod recreation on different node
    test/e2e/storage/testsuites/multivolume.go:168
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node","total":32,"completed":7,"skipped":555,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-stress
  test/e2e/storage/framework/testsuite.go:51
May  9 15:02:14.353: INFO: Driver test.csi.azure.com doesn't specify stress test options -- skipping
... skipping 38 lines ...
May  9 14:58:35.289: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.com2bwk4] to have phase Bound
May  9 14:58:35.397: INFO: PersistentVolumeClaim test.csi.azure.com2bwk4 found but phase is Pending instead of Bound.
May  9 14:58:37.505: INFO: PersistentVolumeClaim test.csi.azure.com2bwk4 found but phase is Pending instead of Bound.
May  9 14:58:39.614: INFO: PersistentVolumeClaim test.csi.azure.com2bwk4 found and phase=Bound (4.325181916s)
STEP: Creating pod pod-subpath-test-dynamicpv-6wlh
STEP: Creating a pod to test subpath
May  9 14:58:39.939: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-6wlh" in namespace "provisioning-3744" to be "Succeeded or Failed"
May  9 14:58:40.047: INFO: Pod "pod-subpath-test-dynamicpv-6wlh": Phase="Pending", Reason="", readiness=false. Elapsed: 107.836228ms
May  9 14:58:42.157: INFO: Pod "pod-subpath-test-dynamicpv-6wlh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2172719s
May  9 14:58:44.265: INFO: Pod "pod-subpath-test-dynamicpv-6wlh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.325521254s
May  9 14:58:46.373: INFO: Pod "pod-subpath-test-dynamicpv-6wlh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.433656273s
May  9 14:58:48.482: INFO: Pod "pod-subpath-test-dynamicpv-6wlh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.542698532s
May  9 14:58:50.590: INFO: Pod "pod-subpath-test-dynamicpv-6wlh": Phase="Pending", Reason="", readiness=false. Elapsed: 10.650841432s
... skipping 14 lines ...
May  9 14:59:22.235: INFO: Pod "pod-subpath-test-dynamicpv-6wlh": Phase="Pending", Reason="", readiness=false. Elapsed: 42.296070194s
May  9 14:59:24.344: INFO: Pod "pod-subpath-test-dynamicpv-6wlh": Phase="Pending", Reason="", readiness=false. Elapsed: 44.40497071s
May  9 14:59:26.452: INFO: Pod "pod-subpath-test-dynamicpv-6wlh": Phase="Pending", Reason="", readiness=false. Elapsed: 46.513261722s
May  9 14:59:28.560: INFO: Pod "pod-subpath-test-dynamicpv-6wlh": Phase="Pending", Reason="", readiness=false. Elapsed: 48.621025329s
May  9 14:59:30.669: INFO: Pod "pod-subpath-test-dynamicpv-6wlh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 50.730009808s
STEP: Saw pod success
May  9 14:59:30.669: INFO: Pod "pod-subpath-test-dynamicpv-6wlh" satisfied condition "Succeeded or Failed"
May  9 14:59:30.778: INFO: Trying to get logs from node k8s-agentpool1-35373899-vmss000002 pod pod-subpath-test-dynamicpv-6wlh container test-container-volume-dynamicpv-6wlh: <nil>
STEP: delete the pod
May  9 14:59:31.031: INFO: Waiting for pod pod-subpath-test-dynamicpv-6wlh to disappear
May  9 14:59:31.139: INFO: Pod pod-subpath-test-dynamicpv-6wlh no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-6wlh
May  9 14:59:31.139: INFO: Deleting pod "pod-subpath-test-dynamicpv-6wlh" in namespace "provisioning-3744"
... skipping 66 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:50
    should support non-existent path
    test/e2e/storage/testsuites/subpath.go:196
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":27,"completed":7,"skipped":702,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] 
  should access to two volumes with the same volume mode and retain data across pod recreation on different node
  test/e2e/storage/testsuites/multivolume.go:168
... skipping 245 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (ext4)] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:50
    should access to two volumes with the same volume mode and retain data across pod recreation on different node
    test/e2e/storage/testsuites/multivolume.go:168
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node","total":37,"completed":9,"skipped":823,"failed":0}

SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
May  9 15:06:29.792: INFO: >>> kubeConfig: /root/tmp487086944/kubeconfig/kubeconfig.westeurope.json
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  test/e2e/storage/testsuites/topology.go:194
May  9 15:06:30.542: INFO: Driver didn't provide topology keys -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  test/e2e/framework/framework.go:188
May  9 15:06:30.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "topology-651" for this suite.


S [SKIPPING] [0.979 seconds]
External Storage [Driver: test.csi.azure.com]
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (immediate binding)] topology
  test/e2e/storage/framework/testsuite.go:50
    should fail to schedule a pod which has topologies that conflict with AllowedTopologies [Measurement]
    test/e2e/storage/testsuites/topology.go:194

    Driver didn't provide topology keys -- skipping

    test/e2e/storage/testsuites/topology.go:126
------------------------------
... skipping 22 lines ...
May  9 15:03:53.755: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.com9qvr6] to have phase Bound
May  9 15:03:53.863: INFO: PersistentVolumeClaim test.csi.azure.com9qvr6 found but phase is Pending instead of Bound.
May  9 15:03:55.972: INFO: PersistentVolumeClaim test.csi.azure.com9qvr6 found but phase is Pending instead of Bound.
May  9 15:03:58.080: INFO: PersistentVolumeClaim test.csi.azure.com9qvr6 found and phase=Bound (4.32483314s)
STEP: Creating pod exec-volume-test-dynamicpv-4tdb
STEP: Creating a pod to test exec-volume-test
May  9 15:03:58.406: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-4tdb" in namespace "volume-123" to be "Succeeded or Failed"
May  9 15:03:58.513: INFO: Pod "exec-volume-test-dynamicpv-4tdb": Phase="Pending", Reason="", readiness=false. Elapsed: 107.585064ms
May  9 15:04:00.623: INFO: Pod "exec-volume-test-dynamicpv-4tdb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217112418s
May  9 15:04:02.731: INFO: Pod "exec-volume-test-dynamicpv-4tdb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.325589841s
May  9 15:04:04.841: INFO: Pod "exec-volume-test-dynamicpv-4tdb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.435316431s
May  9 15:04:06.961: INFO: Pod "exec-volume-test-dynamicpv-4tdb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.555146057s
May  9 15:04:09.069: INFO: Pod "exec-volume-test-dynamicpv-4tdb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.663455033s
... skipping 56 lines ...
May  9 15:06:09.313: INFO: Pod "exec-volume-test-dynamicpv-4tdb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.906912093s
May  9 15:06:11.425: INFO: Pod "exec-volume-test-dynamicpv-4tdb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m13.019196568s
May  9 15:06:13.536: INFO: Pod "exec-volume-test-dynamicpv-4tdb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m15.130338676s
May  9 15:06:15.645: INFO: Pod "exec-volume-test-dynamicpv-4tdb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m17.239357078s
May  9 15:06:17.754: INFO: Pod "exec-volume-test-dynamicpv-4tdb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2m19.348420754s
STEP: Saw pod success
May  9 15:06:17.754: INFO: Pod "exec-volume-test-dynamicpv-4tdb" satisfied condition "Succeeded or Failed"
May  9 15:06:17.862: INFO: Trying to get logs from node k8s-agentpool1-35373899-vmss000002 pod exec-volume-test-dynamicpv-4tdb container exec-container-dynamicpv-4tdb: <nil>
STEP: delete the pod
May  9 15:06:18.128: INFO: Waiting for pod exec-volume-test-dynamicpv-4tdb to disappear
May  9 15:06:18.235: INFO: Pod exec-volume-test-dynamicpv-4tdb no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-4tdb
May  9 15:06:18.235: INFO: Deleting pod "exec-volume-test-dynamicpv-4tdb" in namespace "volume-123"
... skipping 21 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (ext4)] volumes
  test/e2e/storage/framework/testsuite.go:50
    should allow exec of files on the volume
    test/e2e/storage/testsuites/volumes.go:198
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume","total":27,"completed":8,"skipped":758,"failed":0}

SSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext3)] volumes 
  should allow exec of files on the volume
  test/e2e/storage/testsuites/volumes.go:198
... skipping 17 lines ...
May  9 15:02:15.373: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.com486mx] to have phase Bound
May  9 15:02:15.482: INFO: PersistentVolumeClaim test.csi.azure.com486mx found but phase is Pending instead of Bound.
May  9 15:02:17.591: INFO: PersistentVolumeClaim test.csi.azure.com486mx found but phase is Pending instead of Bound.
May  9 15:02:19.700: INFO: PersistentVolumeClaim test.csi.azure.com486mx found and phase=Bound (4.326557129s)
STEP: Creating pod exec-volume-test-dynamicpv-5r7b
STEP: Creating a pod to test exec-volume-test
May  9 15:02:20.028: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-5r7b" in namespace "volume-3174" to be "Succeeded or Failed"
May  9 15:02:20.136: INFO: Pod "exec-volume-test-dynamicpv-5r7b": Phase="Pending", Reason="", readiness=false. Elapsed: 108.368918ms
May  9 15:02:22.247: INFO: Pod "exec-volume-test-dynamicpv-5r7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2188663s
May  9 15:02:24.357: INFO: Pod "exec-volume-test-dynamicpv-5r7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328896515s
May  9 15:02:26.468: INFO: Pod "exec-volume-test-dynamicpv-5r7b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.439752637s
May  9 15:02:28.577: INFO: Pod "exec-volume-test-dynamicpv-5r7b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.549172059s
May  9 15:02:30.686: INFO: Pod "exec-volume-test-dynamicpv-5r7b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.658253506s
... skipping 114 lines ...
May  9 15:06:33.372: INFO: Pod "exec-volume-test-dynamicpv-5r7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4m13.34354183s
May  9 15:06:35.489: INFO: Pod "exec-volume-test-dynamicpv-5r7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4m15.461417929s
May  9 15:06:37.598: INFO: Pod "exec-volume-test-dynamicpv-5r7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4m17.5701871s
May  9 15:06:39.708: INFO: Pod "exec-volume-test-dynamicpv-5r7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4m19.679664532s
May  9 15:06:41.816: INFO: Pod "exec-volume-test-dynamicpv-5r7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4m21.788343525s
STEP: Saw pod success
May  9 15:06:41.816: INFO: Pod "exec-volume-test-dynamicpv-5r7b" satisfied condition "Succeeded or Failed"
May  9 15:06:41.925: INFO: Trying to get logs from node k8s-agentpool1-35373899-vmss000001 pod exec-volume-test-dynamicpv-5r7b container exec-container-dynamicpv-5r7b: <nil>
STEP: delete the pod
May  9 15:06:42.155: INFO: Waiting for pod exec-volume-test-dynamicpv-5r7b to disappear
May  9 15:06:42.263: INFO: Pod exec-volume-test-dynamicpv-5r7b no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-5r7b
May  9 15:06:42.263: INFO: Deleting pod "exec-volume-test-dynamicpv-5r7b" in namespace "volume-3174"
... skipping 21 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:50
    should allow exec of files on the volume
    test/e2e/storage/testsuites/volumes.go:198
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume","total":32,"completed":8,"skipped":634,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath 
  should support existing single file [LinuxOnly]
  test/e2e/storage/testsuites/subpath.go:221
... skipping 17 lines ...
May  9 15:02:14.813: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comhjk8g] to have phase Bound
May  9 15:02:14.922: INFO: PersistentVolumeClaim test.csi.azure.comhjk8g found but phase is Pending instead of Bound.
May  9 15:02:17.033: INFO: PersistentVolumeClaim test.csi.azure.comhjk8g found but phase is Pending instead of Bound.
May  9 15:02:19.144: INFO: PersistentVolumeClaim test.csi.azure.comhjk8g found and phase=Bound (4.330435566s)
STEP: Creating pod pod-subpath-test-dynamicpv-b5f4
STEP: Creating a pod to test subpath
May  9 15:02:19.479: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-b5f4" in namespace "provisioning-3326" to be "Succeeded or Failed"
May  9 15:02:19.592: INFO: Pod "pod-subpath-test-dynamicpv-b5f4": Phase="Pending", Reason="", readiness=false. Elapsed: 112.853518ms
May  9 15:02:21.701: INFO: Pod "pod-subpath-test-dynamicpv-b5f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222123679s
May  9 15:02:23.811: INFO: Pod "pod-subpath-test-dynamicpv-b5f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331358622s
May  9 15:02:25.921: INFO: Pod "pod-subpath-test-dynamicpv-b5f4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.441635179s
May  9 15:02:28.031: INFO: Pod "pod-subpath-test-dynamicpv-b5f4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.55170233s
May  9 15:02:30.142: INFO: Pod "pod-subpath-test-dynamicpv-b5f4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.662467025s
... skipping 113 lines ...
May  9 15:06:30.797: INFO: Pod "pod-subpath-test-dynamicpv-b5f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m11.317448622s
May  9 15:06:32.907: INFO: Pod "pod-subpath-test-dynamicpv-b5f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m13.427975191s
May  9 15:06:35.019: INFO: Pod "pod-subpath-test-dynamicpv-b5f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m15.539970877s
May  9 15:06:37.129: INFO: Pod "pod-subpath-test-dynamicpv-b5f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m17.64985379s
May  9 15:06:39.247: INFO: Pod "pod-subpath-test-dynamicpv-b5f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4m19.767226198s
STEP: Saw pod success
May  9 15:06:39.247: INFO: Pod "pod-subpath-test-dynamicpv-b5f4" satisfied condition "Succeeded or Failed"
May  9 15:06:39.356: INFO: Trying to get logs from node k8s-agentpool1-35373899-vmss000001 pod pod-subpath-test-dynamicpv-b5f4 container test-container-subpath-dynamicpv-b5f4: <nil>
STEP: delete the pod
May  9 15:06:39.613: INFO: Waiting for pod pod-subpath-test-dynamicpv-b5f4 to disappear
May  9 15:06:39.722: INFO: Pod pod-subpath-test-dynamicpv-b5f4 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-b5f4
May  9 15:06:39.722: INFO: Deleting pod "pod-subpath-test-dynamicpv-b5f4" in namespace "provisioning-3326"
... skipping 29 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:50
    should support existing single file [LinuxOnly]
    test/e2e/storage/testsuites/subpath.go:221
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":31,"completed":7,"skipped":693,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] provisioning 
  should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
  test/e2e/storage/testsuites/provisioning.go:208
... skipping 128 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (block volmode)] provisioning
  test/e2e/storage/framework/testsuite.go:50
    should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
    test/e2e/storage/testsuites/provisioning.go:208
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]","total":30,"completed":8,"skipped":565,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:51
May  9 15:07:56.092: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping
... skipping 248 lines ...
STEP: Deleting pod external-injector in namespace multivolume-3419
May  9 15:00:42.981: INFO: Waiting for pod external-injector to disappear
May  9 15:00:43.089: INFO: Pod external-injector still exists
May  9 15:00:45.090: INFO: Waiting for pod external-injector to disappear
May  9 15:00:45.198: INFO: Pod external-injector no longer exists
STEP: Creating pod1 with a volume on {Name: Selector:map[] Affinity:nil}
May  9 15:05:45.780: FAIL: Unexpected error:
    <*errors.errorString | 0xc0025a77f0>: {
        s: "pod \"pod-287e5575-0e00-491a-ab98-7f8c27d14791\" is not Running: timed out waiting for the condition",
    }
    pod "pod-287e5575-0e00-491a-ab98-7f8c27d14791" is not Running: timed out waiting for the condition
occurred

... skipping 39 lines ...
May  9 15:08:05.742: INFO: At 2022-05-09 15:00:30 +0000 UTC - event for external-injector: {kubelet k8s-agentpool1-35373899-vmss000001} SuccessfulMountVolume: MapVolume.MapPodDevice succeeded for volume "pvc-13baa3f0-0523-4c3a-a7a2-3611212fadf3" volumeMapPath "/var/lib/kubelet/pods/a2a70534-6870-4e67-ba6d-645bd69c2ef8/volumeDevices/kubernetes.io~csi"
May  9 15:08:05.742: INFO: At 2022-05-09 15:00:31 +0000 UTC - event for external-injector: {kubelet k8s-agentpool1-35373899-vmss000001} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" in 296.818709ms
May  9 15:08:05.742: INFO: At 2022-05-09 15:00:31 +0000 UTC - event for external-injector: {kubelet k8s-agentpool1-35373899-vmss000001} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/busybox:1.29-2"
May  9 15:08:05.742: INFO: At 2022-05-09 15:00:31 +0000 UTC - event for external-injector: {kubelet k8s-agentpool1-35373899-vmss000001} Created: Created container external-injector
May  9 15:08:05.742: INFO: At 2022-05-09 15:00:31 +0000 UTC - event for external-injector: {kubelet k8s-agentpool1-35373899-vmss000001} Started: Started container external-injector
May  9 15:08:05.742: INFO: At 2022-05-09 15:00:42 +0000 UTC - event for external-injector: {kubelet k8s-agentpool1-35373899-vmss000001} Killing: Stopping container external-injector
May  9 15:08:05.742: INFO: At 2022-05-09 15:00:44 +0000 UTC - event for external-injector: {kubelet k8s-agentpool1-35373899-vmss000001} FailedKillPod: error killing pod: failed to "KillContainer" for "external-injector" with KillContainerError: "rpc error: code = Unknown desc = Error response from daemon: No such container: c158ce44828912bd95a648b5cd6c0c6ed89646ac984090273eb26804aba2c965"
May  9 15:08:05.742: INFO: At 2022-05-09 15:00:45 +0000 UTC - event for pod-287e5575-0e00-491a-ab98-7f8c27d14791: {default-scheduler } Scheduled: Successfully assigned multivolume-3419/pod-287e5575-0e00-491a-ab98-7f8c27d14791 to k8s-agentpool1-35373899-vmss000002
May  9 15:08:05.742: INFO: At 2022-05-09 15:00:45 +0000 UTC - event for pod-287e5575-0e00-491a-ab98-7f8c27d14791: {attachdetach-controller } FailedAttachVolume: Multi-Attach error for volume "pvc-13baa3f0-0523-4c3a-a7a2-3611212fadf3" Volume is already exclusively attached to one node and can't be attached to another
May  9 15:08:05.742: INFO: At 2022-05-09 15:00:45 +0000 UTC - event for test.csi.azure.comkrtxm-cloned: {test.csi.azure.com_k8s-agentpool1-35373899-vmss000002_9e2d1d56-a6cb-4cab-9bd2-c590c02403d3 } Provisioning: External provisioner is provisioning volume for claim "multivolume-3419/test.csi.azure.comkrtxm-cloned"
May  9 15:08:05.742: INFO: At 2022-05-09 15:00:45 +0000 UTC - event for test.csi.azure.comkrtxm-cloned: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "test.csi.azure.com" or manually created by system administrator
May  9 15:08:05.742: INFO: At 2022-05-09 15:00:47 +0000 UTC - event for test.csi.azure.comkrtxm-cloned: {test.csi.azure.com_k8s-agentpool1-35373899-vmss000002_9e2d1d56-a6cb-4cab-9bd2-c590c02403d3 } ProvisioningSucceeded: Successfully provisioned volume pvc-59c3bd3e-6ab3-41de-bda0-9f6999d19f86
May  9 15:08:05.742: INFO: At 2022-05-09 15:02:00 +0000 UTC - event for pod-287e5575-0e00-491a-ab98-7f8c27d14791: {attachdetach-controller } FailedAttachVolume: AttachVolume.Attach failed for volume "pvc-13baa3f0-0523-4c3a-a7a2-3611212fadf3" : rpc error: code = Unknown desc = Attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-13baa3f0-0523-4c3a-a7a2-3611212fadf3 to instance k8s-agentpool1-35373899-vmss000002 failed with newAvailabilitySetNodesCache: failed to list vms in the resource group kubetest-rxirza6l: Retriable: true, RetryAfter: 180s, HTTPStatusCode: 0, RawError: azure cloud provider throttled for operation VMList with reason "client throttled"
May  9 15:08:05.742: INFO: At 2022-05-09 15:02:05 +0000 UTC - event for pod-287e5575-0e00-491a-ab98-7f8c27d14791: {attachdetach-controller } FailedAttachVolume: AttachVolume.Attach failed for volume "pvc-13baa3f0-0523-4c3a-a7a2-3611212fadf3" : rpc error: code = Unknown desc = Attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-13baa3f0-0523-4c3a-a7a2-3611212fadf3 to instance k8s-agentpool1-35373899-vmss000002 failed with newAvailabilitySetNodesCache: failed to list vms in the resource group kubetest-rxirza6l: Retriable: true, RetryAfter: 176s, HTTPStatusCode: 0, RawError: azure cloud provider throttled for operation VMList with reason "client throttled"
May  9 15:08:05.742: INFO: At 2022-05-09 15:02:18 +0000 UTC - event for pod-287e5575-0e00-491a-ab98-7f8c27d14791: {attachdetach-controller } FailedAttachVolume: AttachVolume.Attach failed for volume "pvc-13baa3f0-0523-4c3a-a7a2-3611212fadf3" : rpc error: code = Unknown desc = Attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-13baa3f0-0523-4c3a-a7a2-3611212fadf3 to instance k8s-agentpool1-35373899-vmss000002 failed with newAvailabilitySetNodesCache: failed to list vms in the resource group kubetest-rxirza6l: Retriable: true, RetryAfter: 168s, HTTPStatusCode: 0, RawError: azure cloud provider throttled for operation VMList with reason "client throttled"
May  9 15:08:05.742: INFO: At 2022-05-09 15:02:35 +0000 UTC - event for pod-287e5575-0e00-491a-ab98-7f8c27d14791: {attachdetach-controller } FailedAttachVolume: AttachVolume.Attach failed for volume "pvc-13baa3f0-0523-4c3a-a7a2-3611212fadf3" : rpc error: code = Unknown desc = Attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-13baa3f0-0523-4c3a-a7a2-3611212fadf3 to instance k8s-agentpool1-35373899-vmss000002 failed with newAvailabilitySetNodesCache: failed to list vms in the resource group kubetest-rxirza6l: Retriable: true, RetryAfter: 152s, HTTPStatusCode: 0, RawError: azure cloud provider throttled for operation VMList with reason "client throttled"
May  9 15:08:05.742: INFO: At 2022-05-09 15:02:48 +0000 UTC - event for pod-287e5575-0e00-491a-ab98-7f8c27d14791: {kubelet k8s-agentpool1-35373899-vmss000002} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume1], unattached volumes=[kube-api-access-29hvd volume1]: timed out waiting for the condition
May  9 15:08:05.742: INFO: At 2022-05-09 15:03:08 +0000 UTC - event for pod-287e5575-0e00-491a-ab98-7f8c27d14791: {attachdetach-controller } FailedAttachVolume: AttachVolume.Attach failed for volume "pvc-13baa3f0-0523-4c3a-a7a2-3611212fadf3" : rpc error: code = Unknown desc = Attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-13baa3f0-0523-4c3a-a7a2-3611212fadf3 to instance k8s-agentpool1-35373899-vmss000002 failed with newAvailabilitySetNodesCache: failed to list vms in the resource group kubetest-rxirza6l: Retriable: true, RetryAfter: 120s, HTTPStatusCode: 0, RawError: azure cloud provider throttled for operation VMList with reason "client throttled"
May  9 15:08:05.742: INFO: At 2022-05-09 15:04:12 +0000 UTC - event for pod-287e5575-0e00-491a-ab98-7f8c27d14791: {attachdetach-controller } FailedAttachVolume: AttachVolume.Attach failed for volume "pvc-13baa3f0-0523-4c3a-a7a2-3611212fadf3" : rpc error: code = Unknown desc = Attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-13baa3f0-0523-4c3a-a7a2-3611212fadf3 to instance k8s-agentpool1-35373899-vmss000002 failed with newAvailabilitySetNodesCache: failed to list vms in the resource group kubetest-rxirza6l: Retriable: true, RetryAfter: 56s, HTTPStatusCode: 0, RawError: azure cloud provider throttled for operation VMList with reason "client throttled"
May  9 15:08:05.849: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
May  9 15:08:05.849: INFO: 
May  9 15:08:06.064: INFO: 
Logging node info for node k8s-agentpool1-35373899-vmss000000
May  9 15:08:06.173: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool1-35373899-vmss000000    e8c6f5b2-e704-4d56-bc90-8d5ad028b3d6 17207 0 2022-05-09 14:21:46 +0000 UTC <nil> <nil> map[agentpool:agentpool1 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D8s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westeurope failure-domain.beta.kubernetes.io/zone:0 kubernetes.azure.com/cluster:kubetest-rxirza6l kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool1-35373899-vmss000000 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: node.kubernetes.io/instance-type:Standard_D8s_v3 storageprofile:managed storagetier:Premium_LRS topology.kubernetes.io/region:westeurope topology.kubernetes.io/zone:0 topology.test.csi.azure.com/zone:] map[csi.volume.kubernetes.io/nodeid:{"test.csi.azure.com":"k8s-agentpool1-35373899-vmss000000"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{Go-http-client Update v1 2022-05-09 14:21:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:agentpool":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.azure.com/cluster":{},"f:kubernetes.azure.com/role":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:storageprofile":{},"f:storagetier":{}}}} } {kubectl-label Update v1 2022-05-09 14:21:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/agent":{}}}} } {cloud-controller-manager Update v1 2022-05-09 14:22:01 +0000 UTC FieldsV1 {"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {cloud-controller-manager Update v1 2022-05-09 14:22:01 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-05-09 14:22:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {cloud-node-manager Update v1 2022-05-09 14:23:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {Go-http-client Update v1 2022-05-09 15:08:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.test.csi.azure.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-35373899-vmss/virtualMachines/0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{31036686336 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{33672699904 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: 
{{27933017657 0} {<nil>} 27933017657 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{32886267904 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-09 14:22:49 +0000 UTC,LastTransitionTime:2022-05-09 14:22:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-09 15:08:05 +0000 UTC,LastTransitionTime:2022-05-09 14:21:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-09 15:08:05 +0000 UTC,LastTransitionTime:2022-05-09 14:21:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-09 15:08:05 +0000 UTC,LastTransitionTime:2022-05-09 14:21:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-09 15:08:05 +0000 UTC,LastTransitionTime:2022-05-09 14:22:01 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.4,},NodeAddress{Type:Hostname,Address:k8s-agentpool1-35373899-vmss000000,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d2137330a2a840b88496511f5742d369,SystemUUID:f069ba38-f1c9-ad4a-9746-e0dbc6f29e39,BootID:b4c6af41-6bd3-4161-9808-f5a33191d988,KernelVersion:5.4.0-1074-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:docker://20.10.11+azure-3,KubeletVersion:v1.23.5,KubeProxyVersion:v1.23.5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:253346057,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi@sha256:423eb6cf602c064c8b2deefead5ceadd6324ed41b3d995dab5d0f6f0f4d4710f mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.10.0],SizeBytes:245959792,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/azurefile-csi@sha256:9e2ecabcf9dd9943e6600eb9fb460f45b4dc61af7cabe95d115082a029db2aaf mcr.microsoft.com/oss/kubernetes-csi/azurefile-csi:v1.9.0],SizeBytes:230470852,},ContainerImage{Names:[k8sprow.azurecr.io/azuredisk-csi@sha256:521010cea5eada09e7e99292b435ae3424cb22fde6383a9905c7345b80a66a37 k8sprow.azurecr.io/azuredisk-csi:v1.18.0-75d73be167fd80191bedf5b1785eae6fb32bab5d],SizeBytes:220737848,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/azure-npm@sha256:106f669f48e5e80c4ec0afb49858ead72cf4b901cd8664e7bf81f8d789e56e12 mcr.microsoft.com/containernetworking/azure-npm:v1.2.2_hotfix],SizeBytes:175230380,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/secrets-store/driver@sha256:c0d040a1c4fbfceb65663e31c09ea40f4f78e356437610cbc3fbb4bb409bd6f1 
mcr.microsoft.com/oss/kubernetes-csi/secrets-store/driver:v0.0.19],SizeBytes:123229697,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/kube-proxy@sha256:dda03e3dfbc9ff8d291006772d223f5b53f6cc7390b12ca4f7cfca3bfff4097c mcr.microsoft.com/oss/kubernetes/kube-proxy:v1.23.5],SizeBytes:112316543,},ContainerImage{Names:[mcr.microsoft.com/oss/azure/secrets-store/provider-azure@sha256:6f67f3d0c7cdde5702f8ce7f101b6519daa0237f0c34fecb7c058b6af8c22ad1 mcr.microsoft.com/oss/azure/secrets-store/provider-azure:0.0.12],SizeBytes:101061355,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/autoscaler/cluster-autoscaler@sha256:6f0c680d375c62e74351f8ff3ed6ddb9b72ca759e0645c329b95f64264654a6d mcr.microsoft.com/oss/kubernetes/autoscaler/cluster-autoscaler:v1.22.1],SizeBytes:99962810,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/kube-addon-manager@sha256:32e2836018c96e73533bd4642fe438e465b81dcbfa8b7b61935a6f4d0246c7ae mcr.microsoft.com/oss/kubernetes/kube-addon-manager:v9.1.3],SizeBytes:86832059,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/kube-addon-manager@sha256:92c2c5aad9012ee32d2a43a74966cc0adc6ccb1705ad15abb10485ecf406d88b mcr.microsoft.com/oss/kubernetes/kube-addon-manager:v9.1.5],SizeBytes:84094027,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/metrics-server@sha256:1ef9d57ce41ffcc328b92494c3bfafe401e0b9a1694a295301a1385337d52815 mcr.microsoft.com/oss/kubernetes/metrics-server:v0.5.2],SizeBytes:64327621,},ContainerImage{Names:[mcr.microsoft.com/oss/nvidia/k8s-device-plugin@sha256:0f5b52bf28239234e831697d96db63ac03cde70fe68058f964504ab7564ee810 mcr.microsoft.com/oss/nvidia/k8s-device-plugin:1.0.0-beta6],SizeBytes:64160241,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner@sha256:e9ddadc44ba87a4a27f67e54760a14f9986885b534b3dff170a14eae1e35d213 
mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.0.0],SizeBytes:56881280,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-resizer@sha256:c5bb71ceaac60b1a4b58739fa07b709f6248c452ff6272a384d2f7648895a750 mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.3.0],SizeBytes:54313772,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter@sha256:61849a026511cf332c87d73d0a7aed803b510c3ede197ec755389686d490de72 mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v4.2.1],SizeBytes:54210936,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-attacher@sha256:6b41e7153ebdfdc1501aa65184624bc15fd33a52d93f88ec3a758d0f8c9b8c10 mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.3.0],SizeBytes:53842561,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/snapshot-controller@sha256:8c3fc3c2667004ad6bbdf723bb64c5da66a5cb8b11d4ee59b67179b686223b13 mcr.microsoft.com/oss/kubernetes-csi/snapshot-controller:v5.0.1],SizeBytes:52732401,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/snapshot-controller@sha256:be5a8dc1f990828f653c77bd0a0f1bbd13197c3019f6e1d99d590389bac36705 mcr.microsoft.com/oss/kubernetes-csi/snapshot-controller:v4.2.1],SizeBytes:51575245,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/azure-cloud-controller-manager@sha256:0ad67f9919522a07318034641ae09bf2079b417e9944c65914410594ce645468 mcr.microsoft.com/oss/kubernetes/azure-cloud-controller-manager:v1.1.4],SizeBytes:51478397,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/azure-cloud-controller-manager@sha256:31ec4f7daccd3e7a8504e12657d7830651ecacbe4a487daca1b1b7695a64b070 mcr.microsoft.com/oss/kubernetes/azure-cloud-controller-manager:v1.23.1],SizeBytes:51249021,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager@sha256:011712ed90fb8efcf27928b0a47ed04b98baebb31cb1b2d8ab676977ec18eedc 
mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager:v1.1.4],SizeBytes:50868093,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager@sha256:0f9a8fbaed65192ed7dd795be4f9c1dc48ebdef0a241fb62d456f4bed40d9875 mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager:v1.23.1],SizeBytes:50679677,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/ip-masq-agent@sha256:1244155f2ed3f33ff154cc343b8ad285f3391d95afd7d4b1c6dcc420bc0ba3cf mcr.microsoft.com/oss/kubernetes/ip-masq-agent:v2.5.0],SizeBytes:50146762,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/azure-cloud-controller-manager@sha256:7c907ff70b90a0bdf8fae63bd744018469dd9839cde1fd0515b93e0bbd14b34e mcr.microsoft.com/oss/kubernetes/azure-cloud-controller-manager:v1.0.8],SizeBytes:48963453,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager@sha256:3987d7a8c6922ce1952ee19c5cb6ea75aac7b7c1b07aa79277ad038c69fb7a31 mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager:v1.0.8],SizeBytes:48349053,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/coredns@sha256:f873bf7f0928461efe10697fa76cf0ad7a1ae3041c5b57b50dd3d0b72d273f8c mcr.microsoft.com/oss/kubernetes/coredns:1.8.6],SizeBytes:46804601,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/azure-cloud-controller-manager@sha256:8073113a20882642a980b338635cdc5945e5673a18aef192090e6fde2b89a75c mcr.microsoft.com/oss/kubernetes/azure-cloud-controller-manager:v0.6.0],SizeBytes:45909032,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager@sha256:6a32329628bdea3c6d75e98aad6155b65d2e2b98ca616eb33f9ac562912804c6 mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager:v0.6.0],SizeBytes:45229096,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/azure-cloud-controller-manager@sha256:ef6c4ba564b4d11d270f7d1563c50fbeb30ccc3b94146e5059228c49f95875f5 
mcr.microsoft.com/oss/kubernetes/azure-cloud-controller-manager:v0.7.11],SizeBytes:44916605,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager@sha256:dbcc384758ba5ca6d249596d471292ed3785e31cdb854d48b84d70794b669b4c mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager:v0.7.11],SizeBytes:43679613,},ContainerImage{Names:[mcr.microsoft.com/oss/etcd-io/etcd@sha256:cf587862e3f1b6fa4d9a2565520a34f164bdf72c50f37af8c3c668160593246e mcr.microsoft.com/oss/etcd-io/etcd:v3.3.25],SizeBytes:41832119,},ContainerImage{Names:[mcr.microsoft.com/k8s/aad-pod-identity/mic@sha256:bd9465be94966b9a917e1e3904fa5e63dd91772ccadf304e18ffd8e4ad8ccedd mcr.microsoft.com/k8s/aad-pod-identity/mic:1.6.1],SizeBytes:41374894,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/autoscaler/cluster-proportional-autoscaler@sha256:c849d75d61943ce7f51b4c049f1a79d19a08253966c8f49c4cfb6414cc33db8b mcr.microsoft.com/oss/kubernetes/autoscaler/cluster-proportional-autoscaler:1.8.5],SizeBytes:40661903,},ContainerImage{Names:[mcr.microsoft.com/k8s/aad-pod-identity/nmi@sha256:02128fefcdb7593ac53fc342e2c53a0fc6fabd813036bf60457bf43cc2940116 mcr.microsoft.com/k8s/aad-pod-identity/nmi:1.6.1],SizeBytes:38007982,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:921f301c44dda06a325164accf22e78ecc570b5c7d9d6ee4c66bd8cbb2b60b9a mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.16],SizeBytes:26970670,},ContainerImage{Names:[mcr.microsoft.com/k8s/kms/keyvault@sha256:1a27e175f8c125209e32d2957b5509fe20757bd8cb309ff9da598799b56326fb mcr.microsoft.com/k8s/kms/keyvault:v0.0.10],SizeBytes:23077387,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar@sha256:348b2d4eebc8da38687755a69b6c21035be232325a6bcde54e5ec4e04689fd93 
mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.5.0],SizeBytes:19581025,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar@sha256:dbec3a8166686b09b242176ab5b99e993da4126438bbce68147c3fd654f35662 mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.4.0],SizeBytes:19547289,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/livenessprobe@sha256:e01f5dae19d7e1be536606fe5deb893417429486b628b816d80ffa0e441eeae8 mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.6.0],SizeBytes:17614587,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/livenessprobe@sha256:c96a6255c42766f6b8bb1a7cda02b0060ab1b20b2e2dafcc64ec09e7646745a6 mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.5.0],SizeBytes:17573341,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16028126,},ContainerImage{Names:[mcr.microsoft.com/oss/busybox/busybox@sha256:582a641242b49809af3a1a522f9aae8c3f047d1c6ca1dd9d8cdabd349e45b1a9 mcr.microsoft.com/oss/busybox/busybox:1.33.1],SizeBytes:1235829,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/pause@sha256:e3b8c20681593c21b344ad801fbb8abaf564427ee3a57a9fcfa3b455f917ce46 
mcr.microsoft.com/oss/kubernetes/pause:3.4.1],SizeBytes:682696,},},VolumesInUse:[kubernetes.io/csi/test.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-d0ba5542-6709-49e3-b39c-7f0ae172d253],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May  9 15:08:06.174: INFO: 
... skipping 118 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:50
    should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS] [Measurement]
    test/e2e/storage/testsuites/multivolume.go:378

    May  9 15:05:45.780: Unexpected error:
        <*errors.errorString | 0xc0025a77f0>: {
            s: "pod \"pod-287e5575-0e00-491a-ab98-7f8c27d14791\" is not Running: timed out waiting for the condition",
        }
        pod "pod-287e5575-0e00-491a-ab98-7f8c27d14791" is not Running: timed out waiting for the condition
    occurred

    test/e2e/storage/testsuites/multivolume.go:696
------------------------------
{"msg":"FAILED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]","total":43,"completed":9,"skipped":673,"failed":1,"failures":["External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]"]}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
May  9 15:08:10.022: INFO: >>> kubeConfig: /root/tmp487086944/kubeconfig/kubeconfig.westeurope.json
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  test/e2e/storage/testsuites/topology.go:194
May  9 15:08:10.773: INFO: Driver didn't provide topology keys -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  test/e2e/framework/framework.go:188
May  9 15:08:10.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "topology-4223" for this suite.


S [SKIPPING] [1.077 seconds]
External Storage [Driver: test.csi.azure.com]
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (delayed binding)] topology
  test/e2e/storage/framework/testsuite.go:50
    should fail to schedule a pod which has topologies that conflict with AllowedTopologies [Measurement]
    test/e2e/storage/testsuites/topology.go:194

    Driver didn't provide topology keys -- skipping

    test/e2e/storage/testsuites/topology.go:126
------------------------------
... skipping 18 lines ...

    test/e2e/storage/framework/testsuite.go:127
------------------------------
SSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath 
  should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
  test/e2e/storage/testsuites/subpath.go:269

[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
May  9 15:07:24.095: INFO: >>> kubeConfig: /root/tmp487086944/kubeconfig/kubeconfig.westeurope.json
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
  test/e2e/storage/testsuites/subpath.go:269
May  9 15:07:24.856: INFO: Creating resource for dynamic PV
May  9 15:07:24.856: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(test.csi.azure.com) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-9997-e2e-scmh4dj
STEP: creating a claim
May  9 15:07:24.965: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
May  9 15:07:25.077: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comxh2h5] to have phase Bound
May  9 15:07:25.185: INFO: PersistentVolumeClaim test.csi.azure.comxh2h5 found but phase is Pending instead of Bound.
May  9 15:07:27.295: INFO: PersistentVolumeClaim test.csi.azure.comxh2h5 found but phase is Pending instead of Bound.
May  9 15:07:29.404: INFO: PersistentVolumeClaim test.csi.azure.comxh2h5 found and phase=Bound (4.326959499s)
STEP: Creating pod pod-subpath-test-dynamicpv-vdb9
STEP: Checking for subpath error in container status
May  9 15:08:01.952: INFO: Deleting pod "pod-subpath-test-dynamicpv-vdb9" in namespace "provisioning-9997"
May  9 15:08:02.062: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-vdb9" to be fully deleted
STEP: Deleting pod
May  9 15:08:04.283: INFO: Deleting pod "pod-subpath-test-dynamicpv-vdb9" in namespace "provisioning-9997"
STEP: Deleting pvc
May  9 15:08:04.391: INFO: Deleting PersistentVolumeClaim "test.csi.azure.comxh2h5"
... skipping 22 lines ...

• [SLOW TEST:112.607 seconds]
External Storage [Driver: test.csi.azure.com]
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:50
    should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
    test/e2e/storage/testsuites/subpath.go:269
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]","total":32,"completed":9,"skipped":712,"failed":0}

SSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath 
  should support restarting containers using file as subpath [Slow][LinuxOnly]
  test/e2e/storage/testsuites/subpath.go:333
... skipping 65 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:50
    should support restarting containers using file as subpath [Slow][LinuxOnly]
    test/e2e/storage/testsuites/subpath.go:333
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]","total":37,"completed":10,"skipped":861,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  test/e2e/storage/framework/testsuite.go:51
May  9 15:09:24.076: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 85 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:50
    should create read/write inline ephemeral volume
    test/e2e/storage/testsuites/ephemeral.go:196
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume","total":27,"completed":9,"skipped":776,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  
  should check snapshot fields, check restore correctly works, check deletion (ephemeral)
  test/e2e/storage/testsuites/snapshottable.go:177
... skipping 10 lines ...
[It] should check snapshot fields, check restore correctly works, check deletion (ephemeral)
  test/e2e/storage/testsuites/snapshottable.go:177
May  9 15:07:57.845: INFO: Creating resource for dynamic PV
May  9 15:07:57.845: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(test.csi.azure.com) supported size:{ 1Mi} 
STEP: creating a StorageClass snapshotting-3506-e2e-sc88h4g
STEP: [init] starting a pod to use the claim
May  9 15:07:58.070: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-g2q8z" in namespace "snapshotting-3506" to be "Succeeded or Failed"
May  9 15:07:58.177: INFO: Pod "pvc-snapshottable-tester-g2q8z": Phase="Pending", Reason="", readiness=false. Elapsed: 107.378697ms
May  9 15:08:00.286: INFO: Pod "pvc-snapshottable-tester-g2q8z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215704559s
May  9 15:08:02.393: INFO: Pod "pvc-snapshottable-tester-g2q8z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323613526s
May  9 15:08:04.501: INFO: Pod "pvc-snapshottable-tester-g2q8z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.431054846s
May  9 15:08:06.611: INFO: Pod "pvc-snapshottable-tester-g2q8z": Phase="Pending", Reason="", readiness=false. Elapsed: 8.541210126s
May  9 15:08:08.720: INFO: Pod "pvc-snapshottable-tester-g2q8z": Phase="Pending", Reason="", readiness=false. Elapsed: 10.650595535s
... skipping 8 lines ...
May  9 15:08:27.722: INFO: Pod "pvc-snapshottable-tester-g2q8z": Phase="Pending", Reason="", readiness=false. Elapsed: 29.651806569s
May  9 15:08:29.832: INFO: Pod "pvc-snapshottable-tester-g2q8z": Phase="Pending", Reason="", readiness=false. Elapsed: 31.761896529s
May  9 15:08:31.940: INFO: Pod "pvc-snapshottable-tester-g2q8z": Phase="Pending", Reason="", readiness=false. Elapsed: 33.870363733s
May  9 15:08:34.049: INFO: Pod "pvc-snapshottable-tester-g2q8z": Phase="Pending", Reason="", readiness=false. Elapsed: 35.97888047s
May  9 15:08:36.158: INFO: Pod "pvc-snapshottable-tester-g2q8z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.088423788s
STEP: Saw pod success
May  9 15:08:36.158: INFO: Pod "pvc-snapshottable-tester-g2q8z" satisfied condition "Succeeded or Failed"
STEP: [init] checking the claim
STEP: creating a SnapshotClass
STEP: creating a dynamic VolumeSnapshot
May  9 15:08:36.597: INFO: Waiting up to 5m0s for VolumeSnapshot snapshot-pz88v to become ready
May  9 15:08:36.705: INFO: VolumeSnapshot snapshot-pz88v found but is not ready.
May  9 15:08:38.814: INFO: VolumeSnapshot snapshot-pz88v found but is not ready.
... skipping 49 lines ...
    test/e2e/storage/testsuites/snapshottable.go:113
      
      test/e2e/storage/testsuites/snapshottable.go:176
        should check snapshot fields, check restore correctly works, check deletion (ephemeral)
        test/e2e/storage/testsuites/snapshottable.go:177
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  should check snapshot fields, check restore correctly works, check deletion (ephemeral)","total":30,"completed":9,"skipped":730,"failed":0}

SSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath 
  should support file as subpath [LinuxOnly]
  test/e2e/storage/testsuites/subpath.go:232

{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node","total":37,"completed":8,"skipped":367,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
May  9 15:07:58.898: INFO: >>> kubeConfig: /root/tmp487086944/kubeconfig/kubeconfig.westeurope.json
... skipping 10 lines ...
May  9 15:07:59.876: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comvk78h] to have phase Bound
May  9 15:07:59.987: INFO: PersistentVolumeClaim test.csi.azure.comvk78h found but phase is Pending instead of Bound.
May  9 15:08:02.098: INFO: PersistentVolumeClaim test.csi.azure.comvk78h found but phase is Pending instead of Bound.
May  9 15:08:04.207: INFO: PersistentVolumeClaim test.csi.azure.comvk78h found and phase=Bound (4.330945523s)
STEP: Creating pod pod-subpath-test-dynamicpv-jlv4
STEP: Creating a pod to test atomic-volume-subpath
May  9 15:08:04.532: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-jlv4" in namespace "provisioning-9527" to be "Succeeded or Failed"
May  9 15:08:04.641: INFO: Pod "pod-subpath-test-dynamicpv-jlv4": Phase="Pending", Reason="", readiness=false. Elapsed: 109.074949ms
May  9 15:08:06.752: INFO: Pod "pod-subpath-test-dynamicpv-jlv4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219836239s
May  9 15:08:08.861: INFO: Pod "pod-subpath-test-dynamicpv-jlv4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328808249s
May  9 15:08:10.971: INFO: Pod "pod-subpath-test-dynamicpv-jlv4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438868165s
May  9 15:08:13.081: INFO: Pod "pod-subpath-test-dynamicpv-jlv4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.54891503s
May  9 15:08:15.192: INFO: Pod "pod-subpath-test-dynamicpv-jlv4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.660290212s
... skipping 15 lines ...
May  9 15:08:48.955: INFO: Pod "pod-subpath-test-dynamicpv-jlv4": Phase="Running", Reason="", readiness=true. Elapsed: 44.423603821s
May  9 15:08:51.066: INFO: Pod "pod-subpath-test-dynamicpv-jlv4": Phase="Running", Reason="", readiness=true. Elapsed: 46.533991207s
May  9 15:08:53.175: INFO: Pod "pod-subpath-test-dynamicpv-jlv4": Phase="Running", Reason="", readiness=true. Elapsed: 48.643772554s
May  9 15:08:55.292: INFO: Pod "pod-subpath-test-dynamicpv-jlv4": Phase="Running", Reason="", readiness=true. Elapsed: 50.76003804s
May  9 15:08:57.402: INFO: Pod "pod-subpath-test-dynamicpv-jlv4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 52.870093691s
STEP: Saw pod success
May  9 15:08:57.402: INFO: Pod "pod-subpath-test-dynamicpv-jlv4" satisfied condition "Succeeded or Failed"
May  9 15:08:57.510: INFO: Trying to get logs from node k8s-agentpool1-35373899-vmss000001 pod pod-subpath-test-dynamicpv-jlv4 container test-container-subpath-dynamicpv-jlv4: <nil>
STEP: delete the pod
May  9 15:08:57.740: INFO: Waiting for pod pod-subpath-test-dynamicpv-jlv4 to disappear
May  9 15:08:57.848: INFO: Pod pod-subpath-test-dynamicpv-jlv4 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-jlv4
May  9 15:08:57.848: INFO: Deleting pod "pod-subpath-test-dynamicpv-jlv4" in namespace "provisioning-9527"
... skipping 41 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:50
    should support file as subpath [LinuxOnly]
    test/e2e/storage/testsuites/subpath.go:232
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":37,"completed":9,"skipped":367,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:51
May  9 15:11:11.729: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping
... skipping 66 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: test.csi.azure.com]
test/e2e/storage/external/external.go:174
  [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:50
    should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach]
    test/e2e/storage/testsuites/subpath.go:269

    Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping

    test/e2e/storage/external/external.go:262
------------------------------
... skipping 102 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:50
    (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
    test/e2e/storage/testsuites/fsgroupchangepolicy.go:216
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents","total":43,"completed":10,"skipped":766,"failed":1,"failures":["External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]"]}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning
  test/e2e/storage/framework/testsuite.go:51
May  9 15:11:45.470: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 152 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (ext4)] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:50
    should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
    test/e2e/storage/testsuites/multivolume.go:323
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]","total":37,"completed":11,"skipped":999,"failed":0}

SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource]
  test/e2e/storage/framework/testsuite.go:51
May  9 15:12:41.839: INFO: Driver test.csi.azure.com doesn't specify snapshot stress test options -- skipping
... skipping 151 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:50
    should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
    test/e2e/storage/testsuites/multivolume.go:323
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]","total":31,"completed":8,"skipped":723,"failed":0}

SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
May  9 15:13:02.483: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping
... skipping 37 lines ...
  test/e2e/framework/framework.go:188
May  9 15:13:03.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volumelimits-1634" for this suite.

•
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits","total":31,"completed":9,"skipped":905,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes
  test/e2e/storage/framework/testsuite.go:51
May  9 15:13:04.077: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping
... skipping 120 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:50
    should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
    test/e2e/storage/testsuites/multivolume.go:378
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]","total":27,"completed":10,"skipped":802,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] volumes 
  should store data
  test/e2e/storage/testsuites/volumes.go:161
... skipping 116 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:50
    should store data
    test/e2e/storage/testsuites/volumes.go:161
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":43,"completed":11,"skipped":791,"failed":1,"failures":["External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]"]}

SSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] volumes 
  should store data
  test/e2e/storage/testsuites/volumes.go:161
... skipping 151 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (xfs)][Slow] volumes
  test/e2e/storage/framework/testsuite.go:50
    should store data
    test/e2e/storage/testsuites/volumes.go:161
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data","total":37,"completed":10,"skipped":460,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
May  9 15:17:25.263: INFO: Driver "test.csi.azure.com" does not support volume expansion - skipping
... skipping 24 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: test.csi.azure.com]
test/e2e/storage/external/external.go:174
  [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:50
    should fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach]
    test/e2e/storage/testsuites/subpath.go:242

    Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping

    test/e2e/storage/external/external.go:262
------------------------------
... skipping 214 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/storage/framework/testsuite.go:50
    should store data
    test/e2e/storage/testsuites/volumes.go:161
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext3)] volumes should store data","total":31,"completed":10,"skipped":912,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
May  9 15:18:03.430: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping
... skipping 73 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:50
    should create read/write inline ephemeral volume
    test/e2e/storage/testsuites/ephemeral.go:196
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume","total":43,"completed":12,"skipped":794,"failed":1,"failures":["External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]"]}

SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] 
  should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
  test/e2e/storage/testsuites/multivolume.go:323
... skipping 102 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:50
    should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
    test/e2e/storage/testsuites/multivolume.go:323
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]","total":30,"completed":10,"skipped":741,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  test/e2e/storage/framework/testsuite.go:51
May  9 15:19:03.154: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 80 lines ...
May  9 15:14:44.853: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comx4b29] to have phase Bound
May  9 15:14:44.965: INFO: PersistentVolumeClaim test.csi.azure.comx4b29 found but phase is Pending instead of Bound.
May  9 15:14:47.073: INFO: PersistentVolumeClaim test.csi.azure.comx4b29 found but phase is Pending instead of Bound.
May  9 15:14:49.180: INFO: PersistentVolumeClaim test.csi.azure.comx4b29 found and phase=Bound (4.3269111s)
STEP: [init] starting a pod to use the claim
STEP: [init] check pod success
May  9 15:14:49.612: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-8cpvj" in namespace "snapshotting-902" to be "Succeeded or Failed"
May  9 15:14:49.719: INFO: Pod "pvc-snapshottable-tester-8cpvj": Phase="Pending", Reason="", readiness=false. Elapsed: 107.269516ms
May  9 15:14:51.827: INFO: Pod "pvc-snapshottable-tester-8cpvj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215260697s
May  9 15:14:53.942: INFO: Pod "pvc-snapshottable-tester-8cpvj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329881428s
May  9 15:14:56.051: INFO: Pod "pvc-snapshottable-tester-8cpvj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438736423s
May  9 15:14:58.161: INFO: Pod "pvc-snapshottable-tester-8cpvj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548843257s
May  9 15:15:00.271: INFO: Pod "pvc-snapshottable-tester-8cpvj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.658509578s
... skipping 9 lines ...
May  9 15:15:21.359: INFO: Pod "pvc-snapshottable-tester-8cpvj": Phase="Pending", Reason="", readiness=false. Elapsed: 31.746786969s
May  9 15:15:23.467: INFO: Pod "pvc-snapshottable-tester-8cpvj": Phase="Pending", Reason="", readiness=false. Elapsed: 33.85508293s
May  9 15:15:25.576: INFO: Pod "pvc-snapshottable-tester-8cpvj": Phase="Pending", Reason="", readiness=false. Elapsed: 35.963541921s
May  9 15:15:27.684: INFO: Pod "pvc-snapshottable-tester-8cpvj": Phase="Pending", Reason="", readiness=false. Elapsed: 38.072254382s
May  9 15:15:29.792: INFO: Pod "pvc-snapshottable-tester-8cpvj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.17975741s
STEP: Saw pod success
May  9 15:15:29.792: INFO: Pod "pvc-snapshottable-tester-8cpvj" satisfied condition "Succeeded or Failed"
STEP: [init] checking the claim
May  9 15:15:29.899: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comx4b29] to have phase Bound
May  9 15:15:30.006: INFO: PersistentVolumeClaim test.csi.azure.comx4b29 found and phase=Bound (107.023275ms)
STEP: [init] checking the PV
STEP: [init] deleting the pod
May  9 15:15:30.360: INFO: Pod pvc-snapshottable-tester-8cpvj has the following logs: 
... skipping 37 lines ...
May  9 15:15:47.625: INFO: WaitUntil finished successfully after 107.998982ms
STEP: getting the snapshot and snapshot content
STEP: checking the snapshot
STEP: checking the SnapshotContent
STEP: Modifying source data test
STEP: modifying the data in the source PVC
May  9 15:15:48.171: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-bv89x" in namespace "snapshotting-902" to be "Succeeded or Failed"
May  9 15:15:48.279: INFO: Pod "pvc-snapshottable-data-tester-bv89x": Phase="Pending", Reason="", readiness=false. Elapsed: 107.947396ms
May  9 15:15:50.389: INFO: Pod "pvc-snapshottable-data-tester-bv89x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21767141s
May  9 15:15:52.499: INFO: Pod "pvc-snapshottable-data-tester-bv89x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32738937s
May  9 15:15:54.608: INFO: Pod "pvc-snapshottable-data-tester-bv89x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.436719493s
May  9 15:15:56.717: INFO: Pod "pvc-snapshottable-data-tester-bv89x": Phase="Pending", Reason="", readiness=false. Elapsed: 8.545624774s
May  9 15:15:58.826: INFO: Pod "pvc-snapshottable-data-tester-bv89x": Phase="Pending", Reason="", readiness=false. Elapsed: 10.654782507s
... skipping 61 lines ...
May  9 15:18:09.581: INFO: Pod "pvc-snapshottable-data-tester-bv89x": Phase="Pending", Reason="", readiness=false. Elapsed: 2m21.409887994s
May  9 15:18:11.689: INFO: Pod "pvc-snapshottable-data-tester-bv89x": Phase="Pending", Reason="", readiness=false. Elapsed: 2m23.517954882s
May  9 15:18:13.799: INFO: Pod "pvc-snapshottable-data-tester-bv89x": Phase="Pending", Reason="", readiness=false. Elapsed: 2m25.627280868s
May  9 15:18:15.907: INFO: Pod "pvc-snapshottable-data-tester-bv89x": Phase="Pending", Reason="", readiness=false. Elapsed: 2m27.73568651s
May  9 15:18:18.019: INFO: Pod "pvc-snapshottable-data-tester-bv89x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2m29.847367298s
STEP: Saw pod success
May  9 15:18:18.019: INFO: Pod "pvc-snapshottable-data-tester-bv89x" satisfied condition "Succeeded or Failed"
May  9 15:18:18.263: INFO: Pod pvc-snapshottable-data-tester-bv89x has the following logs: 
May  9 15:18:18.263: INFO: Deleting pod "pvc-snapshottable-data-tester-bv89x" in namespace "snapshotting-902"
May  9 15:18:18.379: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-bv89x" to be fully deleted
STEP: creating a pvc from the snapshot
STEP: starting a pod to use the snapshot
May  9 15:18:52.932: INFO: Running '/usr/local/bin/kubectl --server=https://kubetest-rxirza6l.westeurope.cloudapp.azure.com --kubeconfig=/root/tmp487086944/kubeconfig/kubeconfig.westeurope.json --namespace=snapshotting-902 exec restored-pvc-tester-7mbhv --namespace=snapshotting-902 -- cat /mnt/test/data'
... skipping 33 lines ...
May  9 15:19:19.214: INFO: volumesnapshotcontents pre-provisioned-snapcontent-3a4def29-ec48-4321-b7cc-29f6f03ac351 has been found and is not deleted
May  9 15:19:20.323: INFO: volumesnapshotcontents pre-provisioned-snapcontent-3a4def29-ec48-4321-b7cc-29f6f03ac351 has been found and is not deleted
May  9 15:19:21.432: INFO: volumesnapshotcontents pre-provisioned-snapcontent-3a4def29-ec48-4321-b7cc-29f6f03ac351 has been found and is not deleted
May  9 15:19:22.542: INFO: volumesnapshotcontents pre-provisioned-snapcontent-3a4def29-ec48-4321-b7cc-29f6f03ac351 has been found and is not deleted
May  9 15:19:23.651: INFO: volumesnapshotcontents pre-provisioned-snapcontent-3a4def29-ec48-4321-b7cc-29f6f03ac351 has been found and is not deleted
May  9 15:19:24.760: INFO: volumesnapshotcontents pre-provisioned-snapcontent-3a4def29-ec48-4321-b7cc-29f6f03ac351 has been found and is not deleted
May  9 15:19:25.760: INFO: WaitUntil failed after reaching the timeout 30s
[AfterEach] volume snapshot controller
  test/e2e/storage/testsuites/snapshottable.go:172
May  9 15:19:25.868: INFO: Error getting logs for pod restored-pvc-tester-7mbhv: the server could not find the requested resource (get pods restored-pvc-tester-7mbhv)
May  9 15:19:25.868: INFO: Deleting pod "restored-pvc-tester-7mbhv" in namespace "snapshotting-902"
May  9 15:19:25.976: INFO: deleting claim "snapshotting-902"/"pvc-xc7tn"
May  9 15:19:26.084: INFO: deleting snapshot "snapshotting-902"/"pre-provisioned-snapshot-3a4def29-ec48-4321-b7cc-29f6f03ac351"
May  9 15:19:26.192: INFO: deleting snapshot content "pre-provisioned-snapcontent-3a4def29-ec48-4321-b7cc-29f6f03ac351"
May  9 15:19:26.525: INFO: Waiting up to 5m0s for volumesnapshotcontents pre-provisioned-snapcontent-3a4def29-ec48-4321-b7cc-29f6f03ac351 to be deleted
May  9 15:19:26.633: INFO: volumesnapshotcontents pre-provisioned-snapcontent-3a4def29-ec48-4321-b7cc-29f6f03ac351 has been found and is not deleted
... skipping 27 lines ...
    test/e2e/storage/testsuites/snapshottable.go:113
      
      test/e2e/storage/testsuites/snapshottable.go:176
        should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
        test/e2e/storage/testsuites/snapshottable.go:278
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller  should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)","total":27,"completed":11,"skipped":867,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
... skipping 136 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:50
    should create read-only inline ephemeral volume
    test/e2e/storage/testsuites/ephemeral.go:175
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume","total":37,"completed":11,"skipped":608,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  test/e2e/storage/framework/testsuite.go:51
May  9 15:20:04.143: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping
... skipping 156 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits
  test/e2e/storage/framework/testsuite.go:50
    should support volume limits [Serial]
    test/e2e/storage/testsuites/volumelimits.go:127
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]","total":32,"completed":10,"skipped":719,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  test/e2e/storage/framework/testsuite.go:51
May  9 15:20:21.797: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 72 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:50
    should be able to unmount after the subpath directory is deleted [LinuxOnly]
    test/e2e/storage/testsuites/subpath.go:447
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":30,"completed":11,"skipped":912,"failed":0}

SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:51
May  9 15:20:38.523: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping
... skipping 129 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:50
    should create read/write inline ephemeral volume
    test/e2e/storage/testsuites/ephemeral.go:196
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume","total":43,"completed":13,"skipped":821,"failed":1,"failures":["External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]"]}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:51
May  9 15:21:05.194: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping
... skipping 134 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  test/e2e/storage/framework/testsuite.go:50
    (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
    test/e2e/storage/testsuites/fsgroupchangepolicy.go:216
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents","total":31,"completed":11,"skipped":959,"failed":0}

SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
May  9 15:21:18.174: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping
... skipping 48 lines ...
May  9 15:20:22.829: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comnqjc8] to have phase Bound
May  9 15:20:22.938: INFO: PersistentVolumeClaim test.csi.azure.comnqjc8 found but phase is Pending instead of Bound.
May  9 15:20:25.048: INFO: PersistentVolumeClaim test.csi.azure.comnqjc8 found but phase is Pending instead of Bound.
May  9 15:20:27.158: INFO: PersistentVolumeClaim test.csi.azure.comnqjc8 found and phase=Bound (4.328740143s)
STEP: Creating pod pod-subpath-test-dynamicpv-2hcs
STEP: Creating a pod to test subpath
May  9 15:20:27.485: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-2hcs" in namespace "provisioning-4011" to be "Succeeded or Failed"
May  9 15:20:27.599: INFO: Pod "pod-subpath-test-dynamicpv-2hcs": Phase="Pending", Reason="", readiness=false. Elapsed: 113.450486ms
May  9 15:20:29.710: INFO: Pod "pod-subpath-test-dynamicpv-2hcs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225062231s
May  9 15:20:31.821: INFO: Pod "pod-subpath-test-dynamicpv-2hcs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.335891925s
May  9 15:20:33.930: INFO: Pod "pod-subpath-test-dynamicpv-2hcs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.445225847s
May  9 15:20:36.040: INFO: Pod "pod-subpath-test-dynamicpv-2hcs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.554935898s
May  9 15:20:38.150: INFO: Pod "pod-subpath-test-dynamicpv-2hcs": Phase="Pending", Reason="", readiness=false. Elapsed: 10.664774086s
... skipping 8 lines ...
May  9 15:20:57.158: INFO: Pod "pod-subpath-test-dynamicpv-2hcs": Phase="Pending", Reason="", readiness=false. Elapsed: 29.672686848s
May  9 15:20:59.268: INFO: Pod "pod-subpath-test-dynamicpv-2hcs": Phase="Pending", Reason="", readiness=false. Elapsed: 31.782503316s
May  9 15:21:01.378: INFO: Pod "pod-subpath-test-dynamicpv-2hcs": Phase="Pending", Reason="", readiness=false. Elapsed: 33.893170834s
May  9 15:21:03.500: INFO: Pod "pod-subpath-test-dynamicpv-2hcs": Phase="Pending", Reason="", readiness=false. Elapsed: 36.014347021s
May  9 15:21:05.609: INFO: Pod "pod-subpath-test-dynamicpv-2hcs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.124061727s
STEP: Saw pod success
May  9 15:21:05.609: INFO: Pod "pod-subpath-test-dynamicpv-2hcs" satisfied condition "Succeeded or Failed"
May  9 15:21:05.718: INFO: Trying to get logs from node k8s-agentpool1-35373899-vmss000002 pod pod-subpath-test-dynamicpv-2hcs container test-container-subpath-dynamicpv-2hcs: <nil>
STEP: delete the pod
May  9 15:21:06.003: INFO: Waiting for pod pod-subpath-test-dynamicpv-2hcs to disappear
May  9 15:21:06.112: INFO: Pod pod-subpath-test-dynamicpv-2hcs no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-2hcs
May  9 15:21:06.112: INFO: Deleting pod "pod-subpath-test-dynamicpv-2hcs" in namespace "provisioning-4011"
... skipping 29 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:50
    should support readOnly directory specified in the volumeMount
    test/e2e/storage/testsuites/subpath.go:367
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":32,"completed":11,"skipped":822,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral 
  should support two pods which have the same volume definition
  test/e2e/storage/testsuites/ephemeral.go:216
... skipping 61 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:50
    should support two pods which have the same volume definition
    test/e2e/storage/testsuites/ephemeral.go:216
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition","total":37,"completed":12,"skipped":618,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] 
  should access to two volumes with the same volume mode and retain data across pod recreation on the same node
  test/e2e/storage/testsuites/multivolume.go:138
... skipping 194 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:50
    should access to two volumes with the same volume mode and retain data across pod recreation on the same node
    test/e2e/storage/testsuites/multivolume.go:138
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node","total":43,"completed":14,"skipped":889,"failed":1,"failures":["External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]"]}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
May  9 15:24:06.596: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping
... skipping 3 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: test.csi.azure.com]
test/e2e/storage/external/external.go:174
  [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:50
    should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach]
    test/e2e/storage/testsuites/subpath.go:280

    Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping

    test/e2e/storage/external/external.go:262
------------------------------
... skipping 117 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:50
    should concurrently access the single read-only volume from pods on the same node
    test/e2e/storage/testsuites/multivolume.go:423
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node","total":32,"completed":12,"skipped":871,"failed":0}

SSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] 
  should concurrently access the single read-only volume from pods on the same node
  test/e2e/storage/testsuites/multivolume.go:423
... skipping 82 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (ext4)] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:50
    should concurrently access the single read-only volume from pods on the same node
    test/e2e/storage/testsuites/multivolume.go:423
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node","total":37,"completed":13,"skipped":643,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/framework/testsuite.go:51
May  9 15:25:42.083: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping
... skipping 104 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:50
    should support multiple inline ephemeral volumes
    test/e2e/storage/testsuites/ephemeral.go:254
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":32,"completed":13,"skipped":893,"failed":0}

SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
May  9 15:26:58.707: INFO: Driver "test.csi.azure.com" does not support volume type "CSIInlineVolume" - skipping
... skipping 137 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/storage/framework/testsuite.go:50
    should provision storage with pvc data source
    test/e2e/storage/testsuites/provisioning.go:421
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source","total":37,"completed":14,"skipped":804,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:51
May  9 15:28:20.218: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping
... skipping 3 lines ...

S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: test.csi.azure.com]
test/e2e/storage/external/external.go:174
  [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:50
    should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach]
    test/e2e/storage/testsuites/subpath.go:269

    Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping

    test/e2e/storage/external/external.go:262
------------------------------
... skipping 75 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:50
    should support two pods which have the same volume definition
    test/e2e/storage/testsuites/ephemeral.go:216
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition","total":32,"completed":14,"skipped":914,"failed":0}

SSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] 
  should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
  test/e2e/storage/testsuites/multivolume.go:378
... skipping 87 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (ext4)] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:50
    should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
    test/e2e/storage/testsuites/multivolume.go:378
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]","total":37,"completed":15,"skipped":817,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
  test/e2e/storage/framework/testsuite.go:51
May  9 15:30:41.889: INFO: Driver "test.csi.azure.com" does not support volume type "CSIInlineVolume" - skipping
... skipping 246 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:50
    should access to two volumes with different volume mode and retain data across pod recreation on the same node
    test/e2e/storage/testsuites/multivolume.go:209
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node","total":32,"completed":15,"skipped":936,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes
  test/e2e/storage/framework/testsuite.go:51
May  9 15:34:15.377: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping
... skipping 38 lines ...
May  9 15:34:16.371: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.com7qpx8] to have phase Bound
May  9 15:34:16.480: INFO: PersistentVolumeClaim test.csi.azure.com7qpx8 found but phase is Pending instead of Bound.
May  9 15:34:18.590: INFO: PersistentVolumeClaim test.csi.azure.com7qpx8 found but phase is Pending instead of Bound.
May  9 15:34:20.699: INFO: PersistentVolumeClaim test.csi.azure.com7qpx8 found and phase=Bound (4.327450438s)
STEP: Creating pod pod-subpath-test-dynamicpv-7w5b
STEP: Creating a pod to test subpath
May  9 15:34:21.028: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-7w5b" in namespace "provisioning-6205" to be "Succeeded or Failed"
May  9 15:34:21.136: INFO: Pod "pod-subpath-test-dynamicpv-7w5b": Phase="Pending", Reason="", readiness=false. Elapsed: 108.154637ms
May  9 15:34:23.246: INFO: Pod "pod-subpath-test-dynamicpv-7w5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218380589s
May  9 15:34:25.357: INFO: Pod "pod-subpath-test-dynamicpv-7w5b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329265315s
May  9 15:34:27.466: INFO: Pod "pod-subpath-test-dynamicpv-7w5b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438595419s
May  9 15:34:29.576: INFO: Pod "pod-subpath-test-dynamicpv-7w5b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548580537s
May  9 15:34:31.688: INFO: Pod "pod-subpath-test-dynamicpv-7w5b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.660061368s
May  9 15:34:33.798: INFO: Pod "pod-subpath-test-dynamicpv-7w5b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.770641426s
May  9 15:34:35.908: INFO: Pod "pod-subpath-test-dynamicpv-7w5b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.880283332s
May  9 15:34:38.019: INFO: Pod "pod-subpath-test-dynamicpv-7w5b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.990856435s
May  9 15:34:40.129: INFO: Pod "pod-subpath-test-dynamicpv-7w5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.101064108s
STEP: Saw pod success
May  9 15:34:40.129: INFO: Pod "pod-subpath-test-dynamicpv-7w5b" satisfied condition "Succeeded or Failed"
May  9 15:34:40.238: INFO: Trying to get logs from node k8s-agentpool1-35373899-vmss000001 pod pod-subpath-test-dynamicpv-7w5b container test-container-subpath-dynamicpv-7w5b: <nil>
STEP: delete the pod
May  9 15:34:40.497: INFO: Waiting for pod pod-subpath-test-dynamicpv-7w5b to disappear
May  9 15:34:40.605: INFO: Pod pod-subpath-test-dynamicpv-7w5b no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-7w5b
May  9 15:34:40.605: INFO: Deleting pod "pod-subpath-test-dynamicpv-7w5b" in namespace "provisioning-6205"
... skipping 23 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:50
    should support readOnly file specified in the volumeMount [LinuxOnly]
    test/e2e/storage/testsuites/subpath.go:382
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":32,"completed":16,"skipped":955,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath 
  should support existing directory
  test/e2e/storage/testsuites/subpath.go:207
... skipping 17 lines ...
May  9 15:35:23.504: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comfctdq] to have phase Bound
May  9 15:35:23.613: INFO: PersistentVolumeClaim test.csi.azure.comfctdq found but phase is Pending instead of Bound.
May  9 15:35:25.722: INFO: PersistentVolumeClaim test.csi.azure.comfctdq found but phase is Pending instead of Bound.
May  9 15:35:27.832: INFO: PersistentVolumeClaim test.csi.azure.comfctdq found and phase=Bound (4.32834395s)
STEP: Creating pod pod-subpath-test-dynamicpv-9crd
STEP: Creating a pod to test subpath
May  9 15:35:28.160: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-9crd" in namespace "provisioning-4913" to be "Succeeded or Failed"
May  9 15:35:28.269: INFO: Pod "pod-subpath-test-dynamicpv-9crd": Phase="Pending", Reason="", readiness=false. Elapsed: 108.456505ms
May  9 15:35:30.379: INFO: Pod "pod-subpath-test-dynamicpv-9crd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218561874s
May  9 15:35:32.489: INFO: Pod "pod-subpath-test-dynamicpv-9crd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328866459s
May  9 15:35:34.599: INFO: Pod "pod-subpath-test-dynamicpv-9crd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.439122579s
May  9 15:35:36.709: INFO: Pod "pod-subpath-test-dynamicpv-9crd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.549060195s
May  9 15:35:38.820: INFO: Pod "pod-subpath-test-dynamicpv-9crd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.659619373s
May  9 15:35:40.931: INFO: Pod "pod-subpath-test-dynamicpv-9crd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.771365803s
May  9 15:35:43.041: INFO: Pod "pod-subpath-test-dynamicpv-9crd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.88136363s
May  9 15:35:45.153: INFO: Pod "pod-subpath-test-dynamicpv-9crd": Phase="Pending", Reason="", readiness=false. Elapsed: 16.992491124s
May  9 15:35:47.262: INFO: Pod "pod-subpath-test-dynamicpv-9crd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.102369333s
STEP: Saw pod success
May  9 15:35:47.263: INFO: Pod "pod-subpath-test-dynamicpv-9crd" satisfied condition "Succeeded or Failed"
May  9 15:35:47.372: INFO: Trying to get logs from node k8s-agentpool1-35373899-vmss000001 pod pod-subpath-test-dynamicpv-9crd container test-container-volume-dynamicpv-9crd: <nil>
STEP: delete the pod
May  9 15:35:47.602: INFO: Waiting for pod pod-subpath-test-dynamicpv-9crd to disappear
May  9 15:35:47.711: INFO: Pod pod-subpath-test-dynamicpv-9crd no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-9crd
May  9 15:35:47.711: INFO: Deleting pod "pod-subpath-test-dynamicpv-9crd" in namespace "provisioning-4913"
... skipping 23 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/framework/testsuite.go:50
    should support existing directory
    test/e2e/storage/testsuites/subpath.go:207
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":32,"completed":17,"skipped":991,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
May  9 15:36:29.604: INFO: Running AfterSuite actions on all nodes
May  9 15:36:29.604: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2
May  9 15:36:29.604: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2
... skipping 15 lines ...
May  9 15:36:29.668: INFO: Running AfterSuite actions on node 1



Summarizing 1 Failure:

[Fail] External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] [Measurement] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS] 
test/e2e/storage/testsuites/multivolume.go:696

Ran 91 of 7227 Specs in 3667.859 seconds
FAIL! -- 90 Passed | 1 Failed | 0 Pending | 7136 Skipped 

Ginkgo ran 1 suite in 1h1m12.475138775s
Test Suite Failed
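The two summary lines above ("Ran 91 of 7227 Specs ..." and "FAIL! -- 90 Passed | 1 Failed | ...") follow Ginkgo's fixed end-of-run format. A small sketch, assuming that format holds, for pulling the counts out of a build log for dashboarding or flake tracking:

```python
import re

def parse_ginkgo_summary(summary_line, result_line):
    """Extract spec counts from Ginkgo's end-of-run summary lines."""
    ran, total = map(int, re.match(r"Ran (\d+) of (\d+) Specs",
                                   summary_line).groups())
    # result_line looks like: "FAIL! -- 90 Passed | 1 Failed | 0 Pending | 7136 Skipped"
    counts = {name.lower(): int(n)
              for n, name in re.findall(r"(\d+) (Passed|Failed|Pending|Skipped)",
                                        result_line)}
    counts.update(ran=ran, total=total)
    return counts

print(parse_ginkgo_summary(
    "Ran 91 of 7227 Specs in 3667.859 seconds",
    "FAIL! -- 90 Passed | 1 Failed | 0 Pending | 7136 Skipped"))
```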
+ print_logs
+ sed -i s/disk.csi.azure.com/test.csi.azure.com/g deploy/example/storageclass-azuredisk-csi.yaml
+ '[' '!' -z ']'
+ bash ./hack/verify-examples.sh linux azurepubliccloud ephemeral test
begin to create deployment examples ...
storageclass.storage.k8s.io/managed-csi created
... skipping 81 lines ...
Platform: linux/amd64
Topology Key: topology.test.csi.azure.com/zone

Streaming logs below:
I0509 14:35:13.672865       1 azuredisk.go:168] driver userAgent: test.csi.azure.com/v1.18.0-75d73be167fd80191bedf5b1785eae6fb32bab5d gc/go1.18.1 (amd64-linux) e2e-test
I0509 14:35:13.673235       1 azure_disk_utils.go:159] reading cloud config from secret kube-system/azure-cloud-provider
W0509 14:35:13.697234       1 azure_disk_utils.go:166] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found
I0509 14:35:13.697253       1 azure_disk_utils.go:171] could not read cloud config from secret kube-system/azure-cloud-provider
I0509 14:35:13.697260       1 azure_disk_utils.go:181] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json
I0509 14:35:13.697296       1 azure_disk_utils.go:189] read cloud config from file: /etc/kubernetes/azure.json successfully
I0509 14:35:13.699218       1 azure_auth.go:245] Using AzurePublicCloud environment
I0509 14:35:13.699247       1 azure_auth.go:96] azure: using managed identity extension to retrieve access token
I0509 14:35:13.699253       1 azure_auth.go:102] azure: using User Assigned MSI ID to retrieve access token
I0509 14:35:13.699282       1 azure_auth.go:113] azure: User Assigned MSI ID is client ID. Resource ID parsing error: %+vparsing failed for c191756c-7302-4f68-9385-ab9a686214e3. Invalid resource Id format
I0509 14:35:13.699316       1 azure.go:763] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000
I0509 14:35:13.699372       1 azure_interfaceclient.go:70] Azure InterfacesClient (read ops) using rate limit config: QPS=6, bucket=20
I0509 14:35:13.699388       1 azure_interfaceclient.go:73] Azure InterfacesClient (write ops) using rate limit config: QPS=100, bucket=1000
I0509 14:35:13.699402       1 azure_vmsizeclient.go:68] Azure VirtualMachineSizesClient (read ops) using rate limit config: QPS=6, bucket=20
I0509 14:35:13.699408       1 azure_vmsizeclient.go:71] Azure VirtualMachineSizesClient (write ops) using rate limit config: QPS=100, bucket=1000
I0509 14:35:13.699428       1 azure_storageaccountclient.go:69] Azure StorageAccountClient (read ops) using rate limit config: QPS=6, bucket=20
... skipping 167 lines ...
I0509 14:35:29.880341       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=2.405884001 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-560261ac-f81f-438d-8233-932c6a6e085f" result_code="succeeded"
I0509 14:35:29.880410       1 utils.go:84] GRPC response: {"volume":{"accessible_topology":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"capacity_bytes":5368709120,"content_source":{"Type":null},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-560261ac-f81f-438d-8233-932c6a6e085f","csi.storage.k8s.io/pvc/name":"inline-volume-tester-mxprl-my-volume-0","csi.storage.k8s.io/pvc/namespace":"ephemeral-3816","requestedsizegib":"5"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-560261ac-f81f-438d-8233-932c6a6e085f"}}
I0509 14:35:29.883053       1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume
I0509 14:35:29.883081       1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":5368709120},"name":"pvc-3d76a688-4e6a-42a3-8ad3-8be46873ccf1","parameters":{"csi.storage.k8s.io/pv/name":"pvc-3d76a688-4e6a-42a3-8ad3-8be46873ccf1","csi.storage.k8s.io/pvc/name":"test.csi.azure.comj5wnr","csi.storage.k8s.io/pvc/namespace":"multivolume-8263"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]}
I0509 14:35:29.883246       1 controllerserver.go:174] begin to create azure disk(pvc-3d76a688-4e6a-42a3-8ad3-8be46873ccf1) account type(StandardSSD_LRS) rg(kubetest-rxirza6l) location(westeurope) size(5) diskZone() maxShares(0)
I0509 14:35:29.883270       1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-3d76a688-4e6a-42a3-8ad3-8be46873ccf1 StorageAccountType:StandardSSD_LRS Size:5
I0509 14:35:30.415686       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-d40b22d1-a9b1-4666-9984-c2fca9cb47a3:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-d40b22d1-a9b1-4666-9984-c2fca9cb47a3  false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 14:35:30.415746       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-ac62c6ae-6f08-4956-91c4-e1bdee1dbba5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ac62c6ae-6f08-4956-91c4-e1bdee1dbba5  false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 14:35:31.448786       1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I0509 14:35:31.448814       1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-35373899-vmss000002","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-560261ac-f81f-438d-8233-932c6a6e085f","csi.storage.k8s.io/pvc/name":"inline-volume-tester-mxprl-my-volume-0","csi.storage.k8s.io/pvc/namespace":"ephemeral-3816","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652106914007-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-560261ac-f81f-438d-8233-932c6a6e085f"}
I0509 14:35:31.486360       1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-560261ac-f81f-438d-8233-932c6a6e085f to node k8s-agentpool1-35373899-vmss000002.
I0509 14:35:31.486408       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000002 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 14:35:31.527388       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:35:31.527459       1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-560261ac-f81f-438d-8233-932c6a6e085f to node k8s-agentpool1-35373899-vmss000002
I0509 14:35:31.527623       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000002 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 14:35:31.575828       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:35:31.575912       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-560261ac-f81f-438d-8233-932c6a6e085f lun 0 to node k8s-agentpool1-35373899-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-560261ac-f81f-438d-8233-932c6a6e085f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-560261ac-f81f-438d-8233-932c6a6e085f  false 0})]
I0509 14:35:31.575934       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000002 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 14:35:31.639164       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:35:31.639227       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-560261ac-f81f-438d-8233-932c6a6e085f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-560261ac-f81f-438d-8233-932c6a6e085f  false 0})])
I0509 14:35:31.839767       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-560261ac-f81f-438d-8233-932c6a6e085f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-560261ac-f81f-438d-8233-932c6a6e085f  false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 14:35:31.859524       1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume
I0509 14:35:31.859551       1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":5368709120},"name":"pvc-af677367-dee2-4788-aa04-d0efc44e3fc0","parameters":{"csi.storage.k8s.io/pv/name":"pvc-af677367-dee2-4788-aa04-d0efc44e3fc0","csi.storage.k8s.io/pvc/name":"test.csi.azure.com4l8rc","csi.storage.k8s.io/pvc/namespace":"multivolume-9758"},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":7}}]}
I0509 14:35:31.859727       1 controllerserver.go:174] begin to create azure disk(pvc-af677367-dee2-4788-aa04-d0efc44e3fc0) account type(StandardSSD_LRS) rg(kubetest-rxirza6l) location(westeurope) size(5) diskZone() maxShares(0)
I0509 14:35:31.859747       1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-af677367-dee2-4788-aa04-d0efc44e3fc0 StorageAccountType:StandardSSD_LRS Size:5
I0509 14:35:32.078290       1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume
I0509 14:35:32.078315       1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":5368709120},"name":"pvc-9723eee0-ce84-4333-9fc2-55dcddeb34ed","parameters":{"csi.storage.k8s.io/pv/name":"pvc-9723eee0-ce84-4333-9fc2-55dcddeb34ed","csi.storage.k8s.io/pvc/name":"test.csi.azure.com9ttxt","csi.storage.k8s.io/pvc/namespace":"multivolume-6413"},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":7}}]}
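Each CreateVolume request above carries `capacity_range.required_bytes: 5368709120`, and the driver reports `requestedsizegib: "5"` and `size(5)` in its responses: the byte count is converted to whole GiB, rounding up as CSI's `required_bytes` minimum demands. A sketch of that conversion (the actual driver does this in Go; this just illustrates the arithmetic):

```python
GiB = 1024 ** 3  # 1073741824 bytes

def round_up_gib(required_bytes):
    """CSI capacity_range.required_bytes -> whole GiB, rounded up,
    matching the requestedsizegib values seen in the requests above."""
    return -(-required_bytes // GiB)  # ceiling division

print(round_up_gib(5368709120))  # the 5 GiB requests in this log
```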
... skipping 65 lines ...
I0509 14:35:45.851775       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-97a4d803-28bf-4b99-9837-21d612f2cab5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-97a4d803-28bf-4b99-9837-21d612f2cab5  false 1})])
I0509 14:35:45.943862       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:35:45.943948       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-9723eee0-ce84-4333-9fc2-55dcddeb34ed lun 1 to node k8s-agentpool1-35373899-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-9723eee0-ce84-4333-9fc2-55dcddeb34ed:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-9723eee0-ce84-4333-9fc2-55dcddeb34ed  false 1}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-c07ea111-2877-4cc0-bd44-6a176968190b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c07ea111-2877-4cc0-bd44-6a176968190b  false 2})]
I0509 14:35:45.944020       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000000 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 14:35:45.999128       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:35:45.999188       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-9723eee0-ce84-4333-9fc2-55dcddeb34ed:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-9723eee0-ce84-4333-9fc2-55dcddeb34ed  false 1}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-c07ea111-2877-4cc0-bd44-6a176968190b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c07ea111-2877-4cc0-bd44-6a176968190b  false 2})])
I0509 14:35:46.100070       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-97a4d803-28bf-4b99-9837-21d612f2cab5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-97a4d803-28bf-4b99-9837-21d612f2cab5  false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 14:35:46.387889       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-9723eee0-ce84-4333-9fc2-55dcddeb34ed:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-9723eee0-ce84-4333-9fc2-55dcddeb34ed  false 1}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-c07ea111-2877-4cc0-bd44-6a176968190b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c07ea111-2877-4cc0-bd44-6a176968190b  false 2})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 14:35:47.015797       1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-560261ac-f81f-438d-8233-932c6a6e085f attached to node k8s-agentpool1-35373899-vmss000002.
I0509 14:35:47.015840       1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-560261ac-f81f-438d-8233-932c6a6e085f to node k8s-agentpool1-35373899-vmss000002 successfully
I0509 14:35:47.015886       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.529498457 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-560261ac-f81f-438d-8233-932c6a6e085f" node="k8s-agentpool1-35373899-vmss000002" result_code="succeeded"
I0509 14:35:47.015888       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000002 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 14:35:47.015903       1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
I0509 14:35:47.069677       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:35:47.069779       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-3d76a688-4e6a-42a3-8ad3-8be46873ccf1 lun 3 to node k8s-agentpool1-35373899-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-0946260d-ca42-48d6-ab60-86dd305a00b5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0946260d-ca42-48d6-ab60-86dd305a00b5  false 4}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-3d76a688-4e6a-42a3-8ad3-8be46873ccf1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-3d76a688-4e6a-42a3-8ad3-8be46873ccf1  false 3}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-af677367-dee2-4788-aa04-d0efc44e3fc0:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-af677367-dee2-4788-aa04-d0efc44e3fc0  false 2}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-dcafe8c5-092f-4b01-a7c3-59e65d2d3961:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-dcafe8c5-092f-4b01-a7c3-59e65d2d3961  false 1})]
I0509 14:35:47.069843       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000002 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 14:35:47.112311       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:35:47.112421       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-0946260d-ca42-48d6-ab60-86dd305a00b5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0946260d-ca42-48d6-ab60-86dd305a00b5  false 4}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-3d76a688-4e6a-42a3-8ad3-8be46873ccf1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-3d76a688-4e6a-42a3-8ad3-8be46873ccf1  false 3}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-af677367-dee2-4788-aa04-d0efc44e3fc0:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-af677367-dee2-4788-aa04-d0efc44e3fc0  false 2}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-dcafe8c5-092f-4b01-a7c3-59e65d2d3961:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-dcafe8c5-092f-4b01-a7c3-59e65d2d3961  false 1})])
I0509 14:35:47.398233       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-0946260d-ca42-48d6-ab60-86dd305a00b5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0946260d-ca42-48d6-ab60-86dd305a00b5  false 4}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-3d76a688-4e6a-42a3-8ad3-8be46873ccf1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-3d76a688-4e6a-42a3-8ad3-8be46873ccf1  false 3}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-af677367-dee2-4788-aa04-d0efc44e3fc0:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-af677367-dee2-4788-aa04-d0efc44e3fc0  false 2}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-dcafe8c5-092f-4b01-a7c3-59e65d2d3961:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-dcafe8c5-092f-4b01-a7c3-59e65d2d3961  false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 14:35:53.923775       1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume
I0509 14:35:53.923813       1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":5368709120},"name":"pvc-71999fba-0f09-4a8b-9f00-c7c1460f1729","parameters":{"csi.storage.k8s.io/pv/name":"pvc-71999fba-0f09-4a8b-9f00-c7c1460f1729","csi.storage.k8s.io/pvc/name":"inline-volume-tester2-npbgv-my-volume-0","csi.storage.k8s.io/pvc/namespace":"ephemeral-3816"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]}
I0509 14:35:53.924113       1 controllerserver.go:174] begin to create azure disk(pvc-71999fba-0f09-4a8b-9f00-c7c1460f1729) account type(StandardSSD_LRS) rg(kubetest-rxirza6l) location(westeurope) size(5) diskZone() maxShares(0)
I0509 14:35:53.924144       1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-71999fba-0f09-4a8b-9f00-c7c1460f1729 StorageAccountType:StandardSSD_LRS Size:5
I0509 14:35:56.274816       1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-71999fba-0f09-4a8b-9f00-c7c1460f1729 StorageAccountType:StandardSSD_LRS Size:5
I0509 14:35:56.274887       1 controllerserver.go:258] create azure disk(pvc-71999fba-0f09-4a8b-9f00-c7c1460f1729) account type(StandardSSD_LRS) rg(kubetest-rxirza6l) location(westeurope) size(5) tags(map[kubernetes.io-created-for-pv-name:pvc-71999fba-0f09-4a8b-9f00-c7c1460f1729 kubernetes.io-created-for-pvc-name:inline-volume-tester2-npbgv-my-volume-0 kubernetes.io-created-for-pvc-namespace:ephemeral-3816]) successfully
... skipping 56 lines ...
I0509 14:36:01.216353       1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}}
I0509 14:36:01.285053       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:36:01.285112       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-71999fba-0f09-4a8b-9f00-c7c1460f1729 lun 2 to node k8s-agentpool1-35373899-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-71999fba-0f09-4a8b-9f00-c7c1460f1729:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-71999fba-0f09-4a8b-9f00-c7c1460f1729  false 2})]
I0509 14:36:01.285140       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 14:36:01.336115       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:36:01.336169       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-71999fba-0f09-4a8b-9f00-c7c1460f1729:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-71999fba-0f09-4a8b-9f00-c7c1460f1729  false 2})])
I0509 14:36:01.496283       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-71999fba-0f09-4a8b-9f00-c7c1460f1729:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-71999fba-0f09-4a8b-9f00-c7c1460f1729  false 2})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 14:36:01.509861       1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-9723eee0-ce84-4333-9fc2-55dcddeb34ed attached to node k8s-agentpool1-35373899-vmss000000.
I0509 14:36:01.510005       1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-9723eee0-ce84-4333-9fc2-55dcddeb34ed to node k8s-agentpool1-35373899-vmss000000 successfully
I0509 14:36:01.510043       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=24.670696376 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-9723eee0-ce84-4333-9fc2-55dcddeb34ed" node="k8s-agentpool1-35373899-vmss000000" result_code="succeeded"
I0509 14:36:01.510058       1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}}
I0509 14:36:01.509969       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000000 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 14:36:01.562452       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
... skipping 194 lines ...
I0509 14:37:11.300474       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:37:11.300541       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-9723eee0-ce84-4333-9fc2-55dcddeb34ed:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-9723eee0-ce84-4333-9fc2-55dcddeb34ed  false 1})])
I0509 14:37:11.320696       1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-3d76a688-4e6a-42a3-8ad3-8be46873ccf1 to node k8s-agentpool1-35373899-vmss000001.
I0509 14:37:11.320741       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 14:37:11.372992       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:37:11.373070       1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-3d76a688-4e6a-42a3-8ad3-8be46873ccf1 to node k8s-agentpool1-35373899-vmss000001
I0509 14:37:11.519134       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-9723eee0-ce84-4333-9fc2-55dcddeb34ed:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-9723eee0-ce84-4333-9fc2-55dcddeb34ed  false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 14:37:11.974352       1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I0509 14:37:11.974378       1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-35373899-vmss000000","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-dcafe8c5-092f-4b01-a7c3-59e65d2d3961","csi.storage.k8s.io/pvc/name":"test.csi.azure.compxf7l","csi.storage.k8s.io/pvc/namespace":"multivolume-9758","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652106914007-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-dcafe8c5-092f-4b01-a7c3-59e65d2d3961"}
I0509 14:37:12.035473       1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-dcafe8c5-092f-4b01-a7c3-59e65d2d3961 to node k8s-agentpool1-35373899-vmss000000.
I0509 14:37:12.035530       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000000 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 14:37:12.065791       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:37:12.065874       1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-dcafe8c5-092f-4b01-a7c3-59e65d2d3961 to node k8s-agentpool1-35373899-vmss000000
... skipping 19 lines ...
I0509 14:37:12.686862       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-0946260d-ca42-48d6-ab60-86dd305a00b5 lun 0 to node k8s-agentpool1-35373899-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-0946260d-ca42-48d6-ab60-86dd305a00b5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0946260d-ca42-48d6-ab60-86dd305a00b5  false 0}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-32457397-56d7-4c19-8c61-a2e28b0c1738:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-32457397-56d7-4c19-8c61-a2e28b0c1738  false 1}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-3d76a688-4e6a-42a3-8ad3-8be46873ccf1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-3d76a688-4e6a-42a3-8ad3-8be46873ccf1  false 3})]
I0509 14:37:12.686896       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 14:37:12.717635       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:37:12.717714       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-0946260d-ca42-48d6-ab60-86dd305a00b5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0946260d-ca42-48d6-ab60-86dd305a00b5  false 0}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-32457397-56d7-4c19-8c61-a2e28b0c1738:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-32457397-56d7-4c19-8c61-a2e28b0c1738  false 1}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-3d76a688-4e6a-42a3-8ad3-8be46873ccf1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-3d76a688-4e6a-42a3-8ad3-8be46873ccf1  false 3})])
I0509 14:37:12.717826       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 14:37:12.758474       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:37:12.944895       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-0946260d-ca42-48d6-ab60-86dd305a00b5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0946260d-ca42-48d6-ab60-86dd305a00b5  false 0}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-32457397-56d7-4c19-8c61-a2e28b0c1738:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-32457397-56d7-4c19-8c61-a2e28b0c1738  false 1}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-3d76a688-4e6a-42a3-8ad3-8be46873ccf1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-3d76a688-4e6a-42a3-8ad3-8be46873ccf1  false 3})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 14:37:15.718647       1 azure_controller_vmss.go:210] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000000) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-ac62c6ae-6f08-4956-91c4-e1bdee1dbba5:pvc-ac62c6ae-6f08-4956-91c4-e1bdee1dbba5 /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-c07ea111-2877-4cc0-bd44-6a176968190b:pvc-c07ea111-2877-4cc0-bd44-6a176968190b]) returned with <nil>
I0509 14:37:15.718719       1 azure_controller_common.go:365] azureDisk - detach disk(pvc-c07ea111-2877-4cc0-bd44-6a176968190b, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-c07ea111-2877-4cc0-bd44-6a176968190b) succeeded
I0509 14:37:15.718767       1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-c07ea111-2877-4cc0-bd44-6a176968190b from node k8s-agentpool1-35373899-vmss000000 successfully
I0509 14:37:15.718825       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=50.952692686 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-c07ea111-2877-4cc0-bd44-6a176968190b" node="k8s-agentpool1-35373899-vmss000000" result_code="succeeded"
I0509 14:37:15.718848       1 utils.go:84] GRPC response: {}
I0509 14:37:15.718964       1 azure_controller_common.go:341] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-ac62c6ae-6f08-4956-91c4-e1bdee1dbba5 from node k8s-agentpool1-35373899-vmss000000, diskMap: map[]
... skipping 14 lines ...
I0509 14:37:15.930330       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:37:15.930360       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000000 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 14:37:15.930486       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-af677367-dee2-4788-aa04-d0efc44e3fc0:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-af677367-dee2-4788-aa04-d0efc44e3fc0  false 1}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-dcafe8c5-092f-4b01-a7c3-59e65d2d3961:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-dcafe8c5-092f-4b01-a7c3-59e65d2d3961  false 0})])
I0509 14:37:15.995101       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:37:15.995174       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000000 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 14:37:16.050587       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:37:16.170333       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-af677367-dee2-4788-aa04-d0efc44e3fc0:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-af677367-dee2-4788-aa04-d0efc44e3fc0  false 1}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-dcafe8c5-092f-4b01-a7c3-59e65d2d3961:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-dcafe8c5-092f-4b01-a7c3-59e65d2d3961  false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 14:37:18.611249       1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I0509 14:37:18.611286       1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-35373899-vmss000002","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-c07ea111-2877-4cc0-bd44-6a176968190b","csi.storage.k8s.io/pvc/name":"test.csi.azure.comnhszn","csi.storage.k8s.io/pvc/namespace":"multivolume-6413","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652106914007-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-c07ea111-2877-4cc0-bd44-6a176968190b"}
I0509 14:37:18.635486       1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-c07ea111-2877-4cc0-bd44-6a176968190b to node k8s-agentpool1-35373899-vmss000002.
I0509 14:37:18.635514       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000002 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 14:37:18.681432       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:37:18.681483       1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-c07ea111-2877-4cc0-bd44-6a176968190b to node k8s-agentpool1-35373899-vmss000002
... skipping 96 lines ...
I0509 14:37:47.103794       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:37:47.103838       1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-35373899-vmss000002, refreshing the cache(vmss: k8s-agentpool1-35373899-vmss, rg: kubetest-rxirza6l)
I0509 14:37:47.170499       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-c07ea111-2877-4cc0-bd44-6a176968190b lun 2 to node k8s-agentpool1-35373899-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-c07ea111-2877-4cc0-bd44-6a176968190b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c07ea111-2877-4cc0-bd44-6a176968190b  false 2})]
I0509 14:37:47.170545       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000002 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 14:37:47.228249       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:37:47.228311       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-c07ea111-2877-4cc0-bd44-6a176968190b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c07ea111-2877-4cc0-bd44-6a176968190b  false 2})])
I0509 14:37:47.235903       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-0a6dc187-46ac-4491-bd9f-332a16673a24:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0a6dc187-46ac-4491-bd9f-332a16673a24  false 2})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 14:37:47.431444       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-c07ea111-2877-4cc0-bd44-6a176968190b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c07ea111-2877-4cc0-bd44-6a176968190b  false 2})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 14:37:49.188648       1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-87b8f8bd-af69-456c-9919-59cfe5379b24 StorageAccountType:StandardSSD_LRS Size:5
I0509 14:37:49.188717       1 controllerserver.go:258] create azure disk(pvc-87b8f8bd-af69-456c-9919-59cfe5379b24) account type(StandardSSD_LRS) rg(kubetest-rxirza6l) location(westeurope) size(5) tags(map[kubernetes.io-created-for-pv-name:pvc-87b8f8bd-af69-456c-9919-59cfe5379b24 kubernetes.io-created-for-pvc-name:test.csi.azure.com4qphn kubernetes.io-created-for-pvc-namespace:multivolume-130]) successfully
I0509 14:37:49.188775       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=2.425953581 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-87b8f8bd-af69-456c-9919-59cfe5379b24" result_code="succeeded"
I0509 14:37:49.188795       1 utils.go:84] GRPC response: {"volume":{"accessible_topology":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"capacity_bytes":5368709120,"content_source":{"Type":null},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-87b8f8bd-af69-456c-9919-59cfe5379b24","csi.storage.k8s.io/pvc/name":"test.csi.azure.com4qphn","csi.storage.k8s.io/pvc/namespace":"multivolume-130","requestedsizegib":"5"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-87b8f8bd-af69-456c-9919-59cfe5379b24"}}
I0509 14:37:51.481237       1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I0509 14:37:51.481270       1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-35373899-vmss000000","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-542fc3b7-24d1-4a89-aa88-eb7cb9b1a187","csi.storage.k8s.io/pvc/name":"test.csi.azure.comqg4bh","csi.storage.k8s.io/pvc/namespace":"multivolume-130","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652106914007-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-542fc3b7-24d1-4a89-aa88-eb7cb9b1a187"}
... skipping 28 lines ...
I0509 14:37:57.588552       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000002 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 14:37:57.619809       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:37:57.619863       1 azure_controller_common.go:453] azureDisk - find disk: lun 2 name pvc-c07ea111-2877-4cc0-bd44-6a176968190b uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-c07ea111-2877-4cc0-bd44-6a176968190b
I0509 14:37:57.619899       1 controllerserver.go:375] Attach operation is successful. volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-c07ea111-2877-4cc0-bd44-6a176968190b is already attached to node k8s-agentpool1-35373899-vmss000002 at lun 2.
I0509 14:37:57.619942       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=0.031398524 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-c07ea111-2877-4cc0-bd44-6a176968190b" node="k8s-agentpool1-35373899-vmss000002" result_code="succeeded"
I0509 14:37:57.619959       1 utils.go:84] GRPC response: {"publish_context":{"LUN":"2"}}
I0509 14:37:57.682318       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-542fc3b7-24d1-4a89-aa88-eb7cb9b1a187:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-542fc3b7-24d1-4a89-aa88-eb7cb9b1a187  false 4}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-87b8f8bd-af69-456c-9919-59cfe5379b24:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-87b8f8bd-af69-456c-9919-59cfe5379b24  false 3})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 14:37:58.578644       1 azure_controller_vmss.go:210] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-71999fba-0f09-4a8b-9f00-c7c1460f1729:pvc-71999fba-0f09-4a8b-9f00-c7c1460f1729 /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-97a4d803-28bf-4b99-9837-21d612f2cab5:pvc-97a4d803-28bf-4b99-9837-21d612f2cab5]) returned with <nil>
I0509 14:37:58.578714       1 azure_controller_common.go:365] azureDisk - detach disk(pvc-71999fba-0f09-4a8b-9f00-c7c1460f1729, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-71999fba-0f09-4a8b-9f00-c7c1460f1729) succeeded
I0509 14:37:58.578733       1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-71999fba-0f09-4a8b-9f00-c7c1460f1729 from node k8s-agentpool1-35373899-vmss000001 successfully
I0509 14:37:58.578778       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=60.792599582 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-71999fba-0f09-4a8b-9f00-c7c1460f1729" node="k8s-agentpool1-35373899-vmss000001" result_code="succeeded"
I0509 14:37:58.578789       1 utils.go:84] GRPC response: {}
I0509 14:37:58.578851       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
... skipping 225 lines ...
I0509 14:39:07.199400       1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-71999fba-0f09-4a8b-9f00-c7c1460f1729) returned with <nil>
I0509 14:39:07.199439       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=5.232831194 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-71999fba-0f09-4a8b-9f00-c7c1460f1729" result_code="succeeded"
I0509 14:39:07.199461       1 utils.go:84] GRPC response: {}
I0509 14:39:07.203589       1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume
I0509 14:39:07.203612       1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-9723eee0-ce84-4333-9fc2-55dcddeb34ed"}
I0509 14:39:07.203738       1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-9723eee0-ce84-4333-9fc2-55dcddeb34ed)
I0509 14:39:07.203752       1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-9723eee0-ce84-4333-9fc2-55dcddeb34ed) returned with failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-9723eee0-ce84-4333-9fc2-55dcddeb34ed) since it's in attaching or detaching state
I0509 14:39:07.203863       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=2.6301e-05 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-9723eee0-ce84-4333-9fc2-55dcddeb34ed" result_code="failed"
E0509 14:39:07.203894       1 utils.go:82] GRPC error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-9723eee0-ce84-4333-9fc2-55dcddeb34ed) since it's in attaching or detaching state
I0509 14:39:07.379548       1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume
I0509 14:39:07.379592       1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-32457397-56d7-4c19-8c61-a2e28b0c1738"}
I0509 14:39:07.379690       1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-32457397-56d7-4c19-8c61-a2e28b0c1738)
I0509 14:39:07.379706       1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-32457397-56d7-4c19-8c61-a2e28b0c1738) returned with failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-32457397-56d7-4c19-8c61-a2e28b0c1738) since it's in attaching or detaching state
I0509 14:39:07.379768       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=3.1101e-05 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-32457397-56d7-4c19-8c61-a2e28b0c1738" result_code="failed"
E0509 14:39:07.379782       1 utils.go:82] GRPC error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-32457397-56d7-4c19-8c61-a2e28b0c1738) since it's in attaching or detaching state
I0509 14:39:09.534218       1 azure_controller_vmss.go:210] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-9723eee0-ce84-4333-9fc2-55dcddeb34ed:pvc-9723eee0-ce84-4333-9fc2-55dcddeb34ed]) returned with <nil>
I0509 14:39:09.534283       1 azure_controller_common.go:365] azureDisk - detach disk(pvc-9723eee0-ce84-4333-9fc2-55dcddeb34ed, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-9723eee0-ce84-4333-9fc2-55dcddeb34ed) succeeded
I0509 14:39:09.534295       1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-9723eee0-ce84-4333-9fc2-55dcddeb34ed from node k8s-agentpool1-35373899-vmss000002 successfully
I0509 14:39:09.534340       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=5.525889325 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-9723eee0-ce84-4333-9fc2-55dcddeb34ed" node="k8s-agentpool1-35373899-vmss000002" result_code="succeeded"
I0509 14:39:09.534351       1 utils.go:84] GRPC response: {}
I0509 14:39:12.351056       1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume
... skipping 38 lines ...
I0509 14:39:27.268252       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000002 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 14:39:27.312991       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:39:27.313032       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-4c610b3c-c967-4993-91e8-7eab4f1bdd9a lun 1 to node k8s-agentpool1-35373899-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-4c610b3c-c967-4993-91e8-7eab4f1bdd9a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4c610b3c-c967-4993-91e8-7eab4f1bdd9a  false 1})]
I0509 14:39:27.313077       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000002 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 14:39:27.367380       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:39:27.367429       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-4c610b3c-c967-4993-91e8-7eab4f1bdd9a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4c610b3c-c967-4993-91e8-7eab4f1bdd9a  false 1})])
I0509 14:39:27.569610       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-4c610b3c-c967-4993-91e8-7eab4f1bdd9a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4c610b3c-c967-4993-91e8-7eab4f1bdd9a  false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 14:39:29.523385       1 azure_managedDiskController.go:303] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-0946260d-ca42-48d6-ab60-86dd305a00b5
I0509 14:39:29.523419       1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-0946260d-ca42-48d6-ab60-86dd305a00b5) returned with <nil>
I0509 14:39:29.523448       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=5.264219759 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-0946260d-ca42-48d6-ab60-86dd305a00b5" result_code="succeeded"
I0509 14:39:29.523461       1 utils.go:84] GRPC response: {}
I0509 14:39:33.122635       1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume
I0509 14:39:33.122660       1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":5368709120},"name":"pvc-6caa50a0-91df-4ad2-a044-84004caea171","parameters":{"csi.storage.k8s.io/pv/name":"pvc-6caa50a0-91df-4ad2-a044-84004caea171","csi.storage.k8s.io/pvc/name":"test.csi.azure.comnzrzh","csi.storage.k8s.io/pvc/namespace":"provisioning-6541"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]}
... skipping 20 lines ...
I0509 14:39:38.404405       1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-3d76a688-4e6a-42a3-8ad3-8be46873ccf1) returned with <nil>
I0509 14:39:38.404449       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=5.273694477 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-3d76a688-4e6a-42a3-8ad3-8be46873ccf1" result_code="succeeded"
I0509 14:39:38.404470       1 utils.go:84] GRPC response: {}
I0509 14:39:39.381675       1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume
I0509 14:39:39.381702       1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-32457397-56d7-4c19-8c61-a2e28b0c1738"}
I0509 14:39:39.381804       1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-32457397-56d7-4c19-8c61-a2e28b0c1738)
I0509 14:39:39.381821       1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-32457397-56d7-4c19-8c61-a2e28b0c1738) returned with failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-32457397-56d7-4c19-8c61-a2e28b0c1738) since it's in attaching or detaching state
I0509 14:39:39.381873       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=3.55e-05 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-32457397-56d7-4c19-8c61-a2e28b0c1738" result_code="failed"
E0509 14:39:39.381899       1 utils.go:82] GRPC error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-32457397-56d7-4c19-8c61-a2e28b0c1738) since it's in attaching or detaching state
I0509 14:39:39.850206       1 azure_controller_vmss.go:210] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000000) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-542fc3b7-24d1-4a89-aa88-eb7cb9b1a187:pvc-542fc3b7-24d1-4a89-aa88-eb7cb9b1a187 /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-87b8f8bd-af69-456c-9919-59cfe5379b24:pvc-87b8f8bd-af69-456c-9919-59cfe5379b24]) returned with <nil>
I0509 14:39:39.850264       1 azure_controller_common.go:365] azureDisk - detach disk(pvc-0a6dc187-46ac-4491-bd9f-332a16673a24, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-0a6dc187-46ac-4491-bd9f-332a16673a24) succeeded
I0509 14:39:39.850288       1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-0a6dc187-46ac-4491-bd9f-332a16673a24 from node k8s-agentpool1-35373899-vmss000000 successfully
I0509 14:39:39.850318       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=81.639929856 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-0a6dc187-46ac-4491-bd9f-332a16673a24" node="k8s-agentpool1-35373899-vmss000000" result_code="succeeded"
I0509 14:39:39.850333       1 utils.go:84] GRPC response: {}
I0509 14:39:39.850847       1 azure_controller_common.go:341] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-542fc3b7-24d1-4a89-aa88-eb7cb9b1a187 from node k8s-agentpool1-35373899-vmss000000, diskMap: map[]
... skipping 40 lines ...
I0509 14:39:41.824207       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:39:41.824245       1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-35373899-vmss000001, refreshing the cache(vmss: k8s-agentpool1-35373899-vmss, rg: kubetest-rxirza6l)
I0509 14:39:41.900620       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-6caa50a0-91df-4ad2-a044-84004caea171 lun 0 to node k8s-agentpool1-35373899-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-6caa50a0-91df-4ad2-a044-84004caea171:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-6caa50a0-91df-4ad2-a044-84004caea171  false 0})]
I0509 14:39:41.900669       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 14:39:41.945437       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:39:41.945480       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-6caa50a0-91df-4ad2-a044-84004caea171:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-6caa50a0-91df-4ad2-a044-84004caea171  false 0})])
I0509 14:39:42.158808       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-6caa50a0-91df-4ad2-a044-84004caea171:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-6caa50a0-91df-4ad2-a044-84004caea171  false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 14:39:43.596464       1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-b93b3ccd-8fac-43ff-85c9-466a75e1529b StorageAccountType:StandardSSD_LRS Size:5
I0509 14:39:43.596530       1 controllerserver.go:258] create azure disk(pvc-b93b3ccd-8fac-43ff-85c9-466a75e1529b) account type(StandardSSD_LRS) rg(kubetest-rxirza6l) location(westeurope) size(5) tags(map[kubernetes.io-created-for-pv-name:pvc-b93b3ccd-8fac-43ff-85c9-466a75e1529b kubernetes.io-created-for-pvc-name:pvc-rbt8z kubernetes.io-created-for-pvc-namespace:provisioning-8498]) successfully
I0509 14:39:43.596583       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=2.438613607 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-b93b3ccd-8fac-43ff-85c9-466a75e1529b" result_code="succeeded"
I0509 14:39:43.596596       1 utils.go:84] GRPC response: {"volume":{"accessible_topology":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"capacity_bytes":5368709120,"content_source":{"Type":{"Volume":{"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-0a6dc187-46ac-4491-bd9f-332a16673a24"}}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-b93b3ccd-8fac-43ff-85c9-466a75e1529b","csi.storage.k8s.io/pvc/name":"pvc-rbt8z","csi.storage.k8s.io/pvc/namespace":"provisioning-8498","requestedsizegib":"5"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-b93b3ccd-8fac-43ff-85c9-466a75e1529b"}}
I0509 14:39:43.653618       1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-5b9c2d48-b580-46d5-8f22-e97129b625d3 StorageAccountType:StandardSSD_LRS Size:5
I0509 14:39:43.653706       1 controllerserver.go:258] create azure disk(pvc-5b9c2d48-b580-46d5-8f22-e97129b625d3) account type(StandardSSD_LRS) rg(kubetest-rxirza6l) location(westeurope) size(5) tags(map[kubernetes.io-created-for-pv-name:pvc-5b9c2d48-b580-46d5-8f22-e97129b625d3 kubernetes.io-created-for-pvc-name:pvc-sszqv kubernetes.io-created-for-pvc-namespace:provisioning-8498]) successfully
... skipping 55 lines ...
I0509 14:39:46.986570       1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-b93b3ccd-8fac-43ff-85c9-466a75e1529b to node k8s-agentpool1-35373899-vmss000002
I0509 14:39:46.986579       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000002 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 14:39:47.043648       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:39:47.043727       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-9eb86506-8404-4087-b04c-dcc8fddadc08:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-9eb86506-8404-4087-b04c-dcc8fddadc08  false 0})])
I0509 14:39:47.099470       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:39:47.099529       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-6df82a10-13e8-4c3c-8eee-44f9305f091f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-6df82a10-13e8-4c3c-8eee-44f9305f091f  false 2})])
I0509 14:39:47.325912       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-9eb86506-8404-4087-b04c-dcc8fddadc08:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-9eb86506-8404-4087-b04c-dcc8fddadc08  false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 14:39:47.344524       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-6df82a10-13e8-4c3c-8eee-44f9305f091f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-6df82a10-13e8-4c3c-8eee-44f9305f091f  false 2})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 14:39:47.533699       1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-4b4e8a49-5de7-4a4e-99ea-c31e9a3cd2fe StorageAccountType:StandardSSD_LRS Size:5
I0509 14:39:47.533764       1 controllerserver.go:258] create azure disk(pvc-4b4e8a49-5de7-4a4e-99ea-c31e9a3cd2fe) account type(StandardSSD_LRS) rg(kubetest-rxirza6l) location(westeurope) size(5) tags(map[kubernetes.io-created-for-pv-name:pvc-4b4e8a49-5de7-4a4e-99ea-c31e9a3cd2fe kubernetes.io-created-for-pvc-name:test.csi.azure.comfg54d kubernetes.io-created-for-pvc-namespace:multivolume-3000]) successfully
I0509 14:39:47.533818       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=2.502790389 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-4b4e8a49-5de7-4a4e-99ea-c31e9a3cd2fe" result_code="succeeded"
I0509 14:39:47.533849       1 utils.go:84] GRPC response: {"volume":{"accessible_topology":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"capacity_bytes":5368709120,"content_source":{"Type":null},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-4b4e8a49-5de7-4a4e-99ea-c31e9a3cd2fe","csi.storage.k8s.io/pvc/name":"test.csi.azure.comfg54d","csi.storage.k8s.io/pvc/namespace":"multivolume-3000","requestedsizegib":"5"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-4b4e8a49-5de7-4a4e-99ea-c31e9a3cd2fe"}}
I0509 14:39:49.679920       1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume
I0509 14:39:49.679956       1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-35373899-vmss000002","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-560261ac-f81f-438d-8233-932c6a6e085f"}
... skipping 71 lines ...
I0509 14:40:02.525787       1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I0509 14:40:02.525808       1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-35373899-vmss000002","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-6df82a10-13e8-4c3c-8eee-44f9305f091f","csi.storage.k8s.io/pvc/name":"pvc-k86x2","csi.storage.k8s.io/pvc/namespace":"provisioning-8498","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652106914007-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-6df82a10-13e8-4c3c-8eee-44f9305f091f"}
I0509 14:40:02.594915       1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-6df82a10-13e8-4c3c-8eee-44f9305f091f to node k8s-agentpool1-35373899-vmss000002.
I0509 14:40:02.637585       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:40:02.637666       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-b93b3ccd-8fac-43ff-85c9-466a75e1529b lun 5 to node k8s-agentpool1-35373899-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-4b4e8a49-5de7-4a4e-99ea-c31e9a3cd2fe:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4b4e8a49-5de7-4a4e-99ea-c31e9a3cd2fe  false 3}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-7ea6e4f5-fc0c-4985-b1f2-7bb6be77a9e1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-7ea6e4f5-fc0c-4985-b1f2-7bb6be77a9e1  false 4}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-b93b3ccd-8fac-43ff-85c9-466a75e1529b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-b93b3ccd-8fac-43ff-85c9-466a75e1529b  false 5})]
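The attach above goes to "lun 5" while the batched diskMap already holds disks at LUNs 3 and 4 — each queued disk gets the next free LUN on the VM. A minimal sketch of lowest-free-LUN selection; this is illustrative only (the driver's actual logic is in `azure_controller_common.go` and may differ, and the real per-VM LUN limit depends on the VM size):

```go
package main

import "fmt"

// nextFreeLUN returns the smallest LUN in [0, maxLUNs) not already in use,
// and false if the VM has no free LUN left.
func nextFreeLUN(used map[int32]bool, maxLUNs int32) (int32, bool) {
	for lun := int32(0); lun < maxLUNs; lun++ {
		if !used[lun] {
			return lun, true
		}
	}
	return 0, false
}

func main() {
	// LUNs 0-4 occupied, as in the attach sequence logged above.
	used := map[int32]bool{0: true, 1: true, 2: true, 3: true, 4: true}
	lun, ok := nextFreeLUN(used, 64)
	fmt.Println(lun, ok) // 5 true
}
```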
I0509 14:40:02.637690       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000002 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 14:40:02.694860       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-5b9c2d48-b580-46d5-8f22-e97129b625d3:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5b9c2d48-b580-46d5-8f22-e97129b625d3  false 1}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-7c2e1a23-a26a-465b-9093-b1b698fcab8f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-7c2e1a23-a26a-465b-9093-b1b698fcab8f  false 2})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 14:40:02.699092       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:40:02.699162       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000002 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 14:40:02.699315       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-4b4e8a49-5de7-4a4e-99ea-c31e9a3cd2fe:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4b4e8a49-5de7-4a4e-99ea-c31e9a3cd2fe  false 3}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-7ea6e4f5-fc0c-4985-b1f2-7bb6be77a9e1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-7ea6e4f5-fc0c-4985-b1f2-7bb6be77a9e1  false 4}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-b93b3ccd-8fac-43ff-85c9-466a75e1529b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-b93b3ccd-8fac-43ff-85c9-466a75e1529b  false 5})])
I0509 14:40:02.766741       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:40:02.766788       1 azure_controller_common.go:453] azureDisk - find disk: lun 2 name pvc-6df82a10-13e8-4c3c-8eee-44f9305f091f uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-6df82a10-13e8-4c3c-8eee-44f9305f091f
I0509 14:40:02.766806       1 controllerserver.go:375] Attach operation is successful. volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-6df82a10-13e8-4c3c-8eee-44f9305f091f is already attached to node k8s-agentpool1-35373899-vmss000002 at lun 2.
I0509 14:40:02.766865       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=0.171933124 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-6df82a10-13e8-4c3c-8eee-44f9305f091f" node="k8s-agentpool1-35373899-vmss000002" result_code="succeeded"
I0509 14:40:02.766885       1 utils.go:84] GRPC response: {"publish_context":{"LUN":"2"}}
I0509 14:40:03.022444       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-4b4e8a49-5de7-4a4e-99ea-c31e9a3cd2fe:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4b4e8a49-5de7-4a4e-99ea-c31e9a3cd2fe  false 3}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-7ea6e4f5-fc0c-4985-b1f2-7bb6be77a9e1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-7ea6e4f5-fc0c-4985-b1f2-7bb6be77a9e1  false 4}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-b93b3ccd-8fac-43ff-85c9-466a75e1529b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-b93b3ccd-8fac-43ff-85c9-466a75e1529b  false 5})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 14:40:04.731358       1 azure_managedDiskController.go:303] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-87b8f8bd-af69-456c-9919-59cfe5379b24
I0509 14:40:04.731394       1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-87b8f8bd-af69-456c-9919-59cfe5379b24) returned with <nil>
I0509 14:40:04.731451       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=5.2857931 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-87b8f8bd-af69-456c-9919-59cfe5379b24" result_code="succeeded"
I0509 14:40:04.731470       1 utils.go:84] GRPC response: {}
I0509 14:40:11.300280       1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume
I0509 14:40:11.300314       1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":5368709120},"name":"pvc-2ef89767-e811-4c5e-bf59-d3cf8a51dba8","parameters":{"csi.storage.k8s.io/pv/name":"pvc-2ef89767-e811-4c5e-bf59-d3cf8a51dba8","csi.storage.k8s.io/pvc/name":"test.csi.azure.comfxrts","csi.storage.k8s.io/pvc/namespace":"volume-741"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]}
... skipping 20 lines ...
I0509 14:40:16.115774       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000000 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 14:40:16.154218       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:40:16.154293       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-2ef89767-e811-4c5e-bf59-d3cf8a51dba8 lun 1 to node k8s-agentpool1-35373899-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-2ef89767-e811-4c5e-bf59-d3cf8a51dba8:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2ef89767-e811-4c5e-bf59-d3cf8a51dba8  false 1})]
I0509 14:40:16.154315       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000000 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 14:40:16.197568       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:40:16.197651       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-2ef89767-e811-4c5e-bf59-d3cf8a51dba8:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2ef89767-e811-4c5e-bf59-d3cf8a51dba8  false 1})])
I0509 14:40:16.446046       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-2ef89767-e811-4c5e-bf59-d3cf8a51dba8:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2ef89767-e811-4c5e-bf59-d3cf8a51dba8  false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 14:40:17.873964       1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-5b9c2d48-b580-46d5-8f22-e97129b625d3 attached to node k8s-agentpool1-35373899-vmss000001.
I0509 14:40:17.874021       1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-5b9c2d48-b580-46d5-8f22-e97129b625d3 to node k8s-agentpool1-35373899-vmss000001 successfully
I0509 14:40:17.874087       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 14:40:17.874129       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=33.224788664 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-5b9c2d48-b580-46d5-8f22-e97129b625d3" node="k8s-agentpool1-35373899-vmss000001" result_code="succeeded"
I0509 14:40:17.874143       1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}}
I0509 14:40:17.916091       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
... skipping 264 lines ...
I0509 14:41:57.800148       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:41:57.800190       1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-35373899-vmss000000, refreshing the cache(vmss: k8s-agentpool1-35373899-vmss, rg: kubetest-rxirza6l)
I0509 14:41:57.920241       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-4c610b3c-c967-4993-91e8-7eab4f1bdd9a lun 0 to node k8s-agentpool1-35373899-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-4c610b3c-c967-4993-91e8-7eab4f1bdd9a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4c610b3c-c967-4993-91e8-7eab4f1bdd9a  false 0})]
I0509 14:41:57.920284       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000000 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 14:41:57.964879       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 14:41:57.964946       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-4c610b3c-c967-4993-91e8-7eab4f1bdd9a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4c610b3c-c967-4993-91e8-7eab4f1bdd9a  false 0})])
I0509 14:41:58.189581       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-4c610b3c-c967-4993-91e8-7eab4f1bdd9a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4c610b3c-c967-4993-91e8-7eab4f1bdd9a  false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 14:41:59.460675       1 azure_controller_vmss.go:210] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-6caa50a0-91df-4ad2-a044-84004caea171:pvc-6caa50a0-91df-4ad2-a044-84004caea171]) returned with <nil>
I0509 14:41:59.460720       1 azure_controller_common.go:365] azureDisk - detach disk(pvc-5b9c2d48-b580-46d5-8f22-e97129b625d3, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-5b9c2d48-b580-46d5-8f22-e97129b625d3) succeeded
I0509 14:41:59.460768       1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-5b9c2d48-b580-46d5-8f22-e97129b625d3 from node k8s-agentpool1-35373899-vmss000001 successfully
I0509 14:41:59.460794       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=75.872051298 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-5b9c2d48-b580-46d5-8f22-e97129b625d3" node="k8s-agentpool1-35373899-vmss000001" result_code="succeeded"
I0509 14:41:59.460884       1 azure_controller_common.go:341] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-6caa50a0-91df-4ad2-a044-84004caea171 from node k8s-agentpool1-35373899-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-7c2e1a23-a26a-465b-9093-b1b698fcab8f:pvc-7c2e1a23-a26a-465b-9093-b1b698fcab8f]
I0509 14:41:59.461064       1 utils.go:84] GRPC response: {}
... skipping 9904 lines ...
I0509 15:18:22.134500       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:18:22.164329       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:18:22.164407       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-b804c726-e9df-4b4a-b799-797355decc2d lun 1 to node k8s-agentpool1-35373899-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-b804c726-e9df-4b4a-b799-797355decc2d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-b804c726-e9df-4b4a-b799-797355decc2d  false 1})]
I0509 15:18:22.164432       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:18:22.206886       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:18:22.206956       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-b804c726-e9df-4b4a-b799-797355decc2d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-b804c726-e9df-4b4a-b799-797355decc2d  false 1})])
I0509 15:18:22.396321       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-b804c726-e9df-4b4a-b799-797355decc2d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-b804c726-e9df-4b4a-b799-797355decc2d  false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:18:24.870540       1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteSnapshot
I0509 15:18:24.870570       1 utils.go:78] GRPC request: {"snapshot_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/snapshots/snapshot-4b49b36d-c777-4173-9233-9155383fe7d3"}
I0509 15:18:24.870689       1 controllerserver.go:899] begin to delete snapshot(snapshot-4b49b36d-c777-4173-9233-9155383fe7d3) under rg(kubetest-rxirza6l)
I0509 15:18:24.954113       1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume
I0509 15:18:24.954141       1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-35373899-vmss000001","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-cb84d307-3c47-4d1e-ab2f-aefe05d6cf6d"}
I0509 15:18:24.954272       1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-cb84d307-3c47-4d1e-ab2f-aefe05d6cf6d from node k8s-agentpool1-35373899-vmss000001
... skipping 73 lines ...
I0509 15:18:48.014204       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-0ac338e7-35a7-4e4c-9645-c6e92d8b920b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0ac338e7-35a7-4e4c-9645-c6e92d8b920b  false 0})])
I0509 15:18:48.014275       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000002 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:18:48.068138       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:18:48.231145       1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume
I0509 15:18:48.231170       1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-10a2bfd1-be1f-4421-80d0-2aea92cf80b6"}
I0509 15:18:48.231270       1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-10a2bfd1-be1f-4421-80d0-2aea92cf80b6)
I0509 15:18:48.283237       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-0ac338e7-35a7-4e4c-9645-c6e92d8b920b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0ac338e7-35a7-4e4c-9645-c6e92d8b920b  false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:18:50.929559       1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume
I0509 15:18:50.929608       1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-35373899-vmss000002","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-d058647b-7c94-4bc7-96e7-fd0d9666a1bf"}
I0509 15:18:50.929764       1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-d058647b-7c94-4bc7-96e7-fd0d9666a1bf from node k8s-agentpool1-35373899-vmss000002
I0509 15:18:50.929796       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000002 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:18:50.964958       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:18:50.964992       1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-35373899-vmss000002, refreshing the cache(vmss: k8s-agentpool1-35373899-vmss, rg: kubetest-rxirza6l)
... skipping 41 lines ...
I0509 15:19:07.988373       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:19:07.988413       1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-35373899-vmss000001, refreshing the cache(vmss: k8s-agentpool1-35373899-vmss, rg: kubetest-rxirza6l)
I0509 15:19:08.101099       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-62f37126-a886-4763-acc6-5398bba84eb3 lun 0 to node k8s-agentpool1-35373899-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-62f37126-a886-4763-acc6-5398bba84eb3:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-62f37126-a886-4763-acc6-5398bba84eb3  false 0})]
I0509 15:19:08.101165       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:19:08.149708       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:19:08.149755       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-62f37126-a886-4763-acc6-5398bba84eb3:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-62f37126-a886-4763-acc6-5398bba84eb3  false 0})])
I0509 15:19:08.409581       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-62f37126-a886-4763-acc6-5398bba84eb3:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-62f37126-a886-4763-acc6-5398bba84eb3  false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:19:08.500465       1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-0ac338e7-35a7-4e4c-9645-c6e92d8b920b attached to node k8s-agentpool1-35373899-vmss000002.
I0509 15:19:08.500501       1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-0ac338e7-35a7-4e4c-9645-c6e92d8b920b to node k8s-agentpool1-35373899-vmss000002 successfully
I0509 15:19:08.500533       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=59.367226916999996 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-0ac338e7-35a7-4e4c-9645-c6e92d8b920b" node="k8s-agentpool1-35373899-vmss000002" result_code="succeeded"
I0509 15:19:08.500569       1 azure_controller_common.go:341] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-10a2bfd1-be1f-4421-80d0-2aea92cf80b6 from node k8s-agentpool1-35373899-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-10a2bfd1-be1f-4421-80d0-2aea92cf80b6:pvc-10a2bfd1-be1f-4421-80d0-2aea92cf80b6 /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-d058647b-7c94-4bc7-96e7-fd0d9666a1bf:pvc-d058647b-7c94-4bc7-96e7-fd0d9666a1bf]
I0509 15:19:08.500653       1 azure_controller_vmss.go:162] azureDisk - detach disk: name pvc-d058647b-7c94-4bc7-96e7-fd0d9666a1bf uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-d058647b-7c94-4bc7-96e7-fd0d9666a1bf
I0509 15:19:08.500671       1 azure_controller_vmss.go:197] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-10a2bfd1-be1f-4421-80d0-2aea92cf80b6:pvc-10a2bfd1-be1f-4421-80d0-2aea92cf80b6 /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-d058647b-7c94-4bc7-96e7-fd0d9666a1bf:pvc-d058647b-7c94-4bc7-96e7-fd0d9666a1bf])
... skipping 58 lines ...
I0509 15:19:29.075925       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000002 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:19:29.108595       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:19:29.108676       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-8b9e3e48-f2b4-498d-b7f8-d81144ab4271 lun 1 to node k8s-agentpool1-35373899-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-8b9e3e48-f2b4-498d-b7f8-d81144ab4271:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8b9e3e48-f2b4-498d-b7f8-d81144ab4271  false 1})]
I0509 15:19:29.108706       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000002 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:19:29.156086       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:19:29.156168       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-8b9e3e48-f2b4-498d-b7f8-d81144ab4271:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8b9e3e48-f2b4-498d-b7f8-d81144ab4271  false 1})])
I0509 15:19:29.343525       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-8b9e3e48-f2b4-498d-b7f8-d81144ab4271:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8b9e3e48-f2b4-498d-b7f8-d81144ab4271  false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:19:30.272502       1 azure_controller_vmss.go:210] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000000) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-c3a47f44-7e79-4c4e-8f0f-73534c6c5579:pvc-c3a47f44-7e79-4c4e-8f0f-73534c6c5579]) returned with <nil>
I0509 15:19:30.272574       1 azure_controller_common.go:365] azureDisk - detach disk(pvc-c3a47f44-7e79-4c4e-8f0f-73534c6c5579, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-c3a47f44-7e79-4c4e-8f0f-73534c6c5579) succeeded
I0509 15:19:30.272587       1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-c3a47f44-7e79-4c4e-8f0f-73534c6c5579 from node k8s-agentpool1-35373899-vmss000000 successfully
I0509 15:19:30.272615       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=45.953116052 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-c3a47f44-7e79-4c4e-8f0f-73534c6c5579" node="k8s-agentpool1-35373899-vmss000000" result_code="succeeded"
I0509 15:19:30.272629       1 utils.go:84] GRPC response: {}
I0509 15:19:31.265726       1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume
... skipping 54 lines ...
I0509 15:20:00.172732       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000000 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:20:00.202541       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:20:00.202640       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-0ac338e7-35a7-4e4c-9645-c6e92d8b920b lun 0 to node k8s-agentpool1-35373899-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-0ac338e7-35a7-4e4c-9645-c6e92d8b920b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0ac338e7-35a7-4e4c-9645-c6e92d8b920b  false 0})]
I0509 15:20:00.202668       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000000 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:20:00.232002       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:20:00.232095       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-0ac338e7-35a7-4e4c-9645-c6e92d8b920b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0ac338e7-35a7-4e4c-9645-c6e92d8b920b  false 0})])
I0509 15:20:00.440679       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-0ac338e7-35a7-4e4c-9645-c6e92d8b920b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0ac338e7-35a7-4e4c-9645-c6e92d8b920b  false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:20:01.602605       1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume
I0509 15:20:01.602637       1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-35373899-vmss000002","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-8b9e3e48-f2b4-498d-b7f8-d81144ab4271"}
I0509 15:20:01.602779       1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-8b9e3e48-f2b4-498d-b7f8-d81144ab4271 from node k8s-agentpool1-35373899-vmss000002
I0509 15:20:01.602833       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000002 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:20:01.658915       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:20:01.658979       1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-35373899-vmss000002, refreshing the cache(vmss: k8s-agentpool1-35373899-vmss, rg: kubetest-rxirza6l)
... skipping 79 lines ...
I0509 15:20:22.534680       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-bd8731b6-27cb-4e43-bae4-f4cd0ac91c29:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-bd8731b6-27cb-4e43-bae4-f4cd0ac91c29  false 0})])
I0509 15:20:22.692181       1 azure_controller_vmss.go:210] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-62f37126-a886-4763-acc6-5398bba84eb3:pvc-62f37126-a886-4763-acc6-5398bba84eb3]) returned with <nil>
I0509 15:20:22.692233       1 azure_controller_common.go:365] azureDisk - detach disk(pvc-62f37126-a886-4763-acc6-5398bba84eb3, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-62f37126-a886-4763-acc6-5398bba84eb3) succeeded
I0509 15:20:22.692248       1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-62f37126-a886-4763-acc6-5398bba84eb3 from node k8s-agentpool1-35373899-vmss000001 successfully
I0509 15:20:22.692288       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.438838191 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-62f37126-a886-4763-acc6-5398bba84eb3" node="k8s-agentpool1-35373899-vmss000001" result_code="succeeded"
I0509 15:20:22.692299       1 utils.go:84] GRPC response: {}
I0509 15:20:22.709779       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-bd8731b6-27cb-4e43-bae4-f4cd0ac91c29:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-bd8731b6-27cb-4e43-bae4-f4cd0ac91c29  false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:20:22.784389       1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume
I0509 15:20:22.784416       1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":5368709120},"name":"pvc-ae9a6f18-6ffb-482a-85e0-2ff2f22bd33f","parameters":{"csi.storage.k8s.io/pv/name":"pvc-ae9a6f18-6ffb-482a-85e0-2ff2f22bd33f","csi.storage.k8s.io/pvc/name":"test.csi.azure.comnqjc8","csi.storage.k8s.io/pvc/namespace":"provisioning-4011"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]}
I0509 15:20:22.784603       1 controllerserver.go:174] begin to create azure disk(pvc-ae9a6f18-6ffb-482a-85e0-2ff2f22bd33f) account type(StandardSSD_LRS) rg(kubetest-rxirza6l) location(westeurope) size(5) diskZone() maxShares(0)
I0509 15:20:22.784634       1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-ae9a6f18-6ffb-482a-85e0-2ff2f22bd33f StorageAccountType:StandardSSD_LRS Size:5
I0509 15:20:25.211996       1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-ae9a6f18-6ffb-482a-85e0-2ff2f22bd33f StorageAccountType:StandardSSD_LRS Size:5
I0509 15:20:25.212060       1 controllerserver.go:258] create azure disk(pvc-ae9a6f18-6ffb-482a-85e0-2ff2f22bd33f) account type(StandardSSD_LRS) rg(kubetest-rxirza6l) location(westeurope) size(5) tags(map[kubernetes.io-created-for-pv-name:pvc-ae9a6f18-6ffb-482a-85e0-2ff2f22bd33f kubernetes.io-created-for-pvc-name:test.csi.azure.comnqjc8 kubernetes.io-created-for-pvc-namespace:provisioning-4011]) successfully
... skipping 38 lines ...
I0509 15:20:32.948683       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000002 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:20:32.979370       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:20:32.979440       1 azure_controller_common.go:453] azureDisk - find disk: lun 0 name pvc-bd8731b6-27cb-4e43-bae4-f4cd0ac91c29 uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-bd8731b6-27cb-4e43-bae4-f4cd0ac91c29
I0509 15:20:32.979456       1 controllerserver.go:375] Attach operation is successful. volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-bd8731b6-27cb-4e43-bae4-f4cd0ac91c29 is already attached to node k8s-agentpool1-35373899-vmss000002 at lun 0.
I0509 15:20:32.979518       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=0.052869656 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-bd8731b6-27cb-4e43-bae4-f4cd0ac91c29" node="k8s-agentpool1-35373899-vmss000002" result_code="succeeded"
I0509 15:20:32.979554       1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
I0509 15:20:33.134603       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-ae9a6f18-6ffb-482a-85e0-2ff2f22bd33f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ae9a6f18-6ffb-482a-85e0-2ff2f22bd33f  false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:20:33.203785       1 azure_managedDiskController.go:303] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-8b9e3e48-f2b4-498d-b7f8-d81144ab4271
I0509 15:20:33.203851       1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-8b9e3e48-f2b4-498d-b7f8-d81144ab4271) returned with <nil>
I0509 15:20:33.203911       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=5.266786177 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-8b9e3e48-f2b4-498d-b7f8-d81144ab4271" result_code="succeeded"
I0509 15:20:33.203932       1 utils.go:84] GRPC response: {}
I0509 15:20:42.254442       1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume
I0509 15:20:42.254472       1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-35373899-vmss000000","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-0ac338e7-35a7-4e4c-9645-c6e92d8b920b"}
... skipping 22 lines ...
I0509 15:20:47.301647       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:20:47.346262       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:20:47.346347       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-acb2d09c-679f-4960-9f79-a1e7a5c87648 lun 0 to node k8s-agentpool1-35373899-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-acb2d09c-679f-4960-9f79-a1e7a5c87648:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-acb2d09c-679f-4960-9f79-a1e7a5c87648  false 0})]
I0509 15:20:47.346378       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:20:47.405473       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:20:47.405540       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-acb2d09c-679f-4960-9f79-a1e7a5c87648:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-acb2d09c-679f-4960-9f79-a1e7a5c87648  false 0})])
I0509 15:20:47.723180       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-acb2d09c-679f-4960-9f79-a1e7a5c87648:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-acb2d09c-679f-4960-9f79-a1e7a5c87648  false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:20:54.080815       1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume
I0509 15:20:54.080853       1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-62f37126-a886-4763-acc6-5398bba84eb3"}
I0509 15:20:54.080975       1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-62f37126-a886-4763-acc6-5398bba84eb3)
I0509 15:20:58.453302       1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-ae9a6f18-6ffb-482a-85e0-2ff2f22bd33f attached to node k8s-agentpool1-35373899-vmss000002.
I0509 15:20:58.453344       1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-ae9a6f18-6ffb-482a-85e0-2ff2f22bd33f to node k8s-agentpool1-35373899-vmss000002 successfully
I0509 15:20:58.453381       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=30.900659056 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-ae9a6f18-6ffb-482a-85e0-2ff2f22bd33f" node="k8s-agentpool1-35373899-vmss000002" result_code="succeeded"
... skipping 69 lines ...
I0509 15:21:15.681551       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-4d5c50a4-f8fb-4558-8ff5-5331314fd62e lun 1 to node k8s-agentpool1-35373899-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-4d5c50a4-f8fb-4558-8ff5-5331314fd62e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4d5c50a4-f8fb-4558-8ff5-5331314fd62e  false 1})]
I0509 15:21:15.720673       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:21:15.720749       1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-e4f0c9cc-9f03-44a5-befb-9394ce217583 to node k8s-agentpool1-35373899-vmss000001
I0509 15:21:15.720776       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:21:15.764056       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:21:15.764114       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-4d5c50a4-f8fb-4558-8ff5-5331314fd62e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4d5c50a4-f8fb-4558-8ff5-5331314fd62e  false 1})])
I0509 15:21:15.973286       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-4d5c50a4-f8fb-4558-8ff5-5331314fd62e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4d5c50a4-f8fb-4558-8ff5-5331314fd62e  false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:21:46.268991       1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-4d5c50a4-f8fb-4558-8ff5-5331314fd62e attached to node k8s-agentpool1-35373899-vmss000001.
I0509 15:21:46.269034       1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-4d5c50a4-f8fb-4558-8ff5-5331314fd62e to node k8s-agentpool1-35373899-vmss000001 successfully
I0509 15:21:46.269117       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:21:46.269144       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=30.662343807 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-4d5c50a4-f8fb-4558-8ff5-5331314fd62e" node="k8s-agentpool1-35373899-vmss000001" result_code="succeeded"
I0509 15:21:46.269163       1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}}
I0509 15:21:46.276016       1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume
... skipping 8 lines ...
I0509 15:21:46.435233       1 azure_controller_common.go:453] azureDisk - find disk: lun 1 name pvc-4d5c50a4-f8fb-4558-8ff5-5331314fd62e uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-4d5c50a4-f8fb-4558-8ff5-5331314fd62e
I0509 15:21:46.435256       1 controllerserver.go:375] Attach operation is successful. volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-4d5c50a4-f8fb-4558-8ff5-5331314fd62e is already attached to node k8s-agentpool1-35373899-vmss000001 at lun 1.
I0509 15:21:46.435292       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=0.13520109 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-4d5c50a4-f8fb-4558-8ff5-5331314fd62e" node="k8s-agentpool1-35373899-vmss000001" result_code="succeeded"
I0509 15:21:46.435310       1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}}
I0509 15:21:46.470274       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:21:46.470338       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-e4f0c9cc-9f03-44a5-befb-9394ce217583:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e4f0c9cc-9f03-44a5-befb-9394ce217583  false 2})])
I0509 15:21:46.759243       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-e4f0c9cc-9f03-44a5-befb-9394ce217583:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e4f0c9cc-9f03-44a5-befb-9394ce217583  false 2})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:21:49.202276       1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume
I0509 15:21:49.202310       1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-35373899-vmss000001","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-acb2d09c-679f-4960-9f79-a1e7a5c87648"}
I0509 15:21:49.202454       1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-acb2d09c-679f-4960-9f79-a1e7a5c87648 from node k8s-agentpool1-35373899-vmss000001
I0509 15:21:49.202473       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:21:49.233348       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:21:49.233414       1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-35373899-vmss000001, refreshing the cache(vmss: k8s-agentpool1-35373899-vmss, rg: kubetest-rxirza6l)
... skipping 56 lines ...
I0509 15:22:24.489048       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000000 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:22:24.553608       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:22:24.553677       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-6d67b4ab-f6c8-45c4-9ab1-c81d6918a413 lun 0 to node k8s-agentpool1-35373899-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-6d67b4ab-f6c8-45c4-9ab1-c81d6918a413:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-6d67b4ab-f6c8-45c4-9ab1-c81d6918a413  false 0})]
I0509 15:22:24.553721       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000000 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:22:24.591574       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:22:24.591634       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-6d67b4ab-f6c8-45c4-9ab1-c81d6918a413:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-6d67b4ab-f6c8-45c4-9ab1-c81d6918a413  false 0})])
I0509 15:22:24.789258       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-6d67b4ab-f6c8-45c4-9ab1-c81d6918a413:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-6d67b4ab-f6c8-45c4-9ab1-c81d6918a413  false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:22:34.897418       1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-6d67b4ab-f6c8-45c4-9ab1-c81d6918a413 attached to node k8s-agentpool1-35373899-vmss000000.
I0509 15:22:34.897519       1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-6d67b4ab-f6c8-45c4-9ab1-c81d6918a413 to node k8s-agentpool1-35373899-vmss000000 successfully
I0509 15:22:34.897581       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=10.512028272 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-6d67b4ab-f6c8-45c4-9ab1-c81d6918a413" node="k8s-agentpool1-35373899-vmss000000" result_code="succeeded"
I0509 15:22:34.897614       1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
I0509 15:22:44.860466       1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume
I0509 15:22:44.860492       1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-35373899-vmss000000","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-6d67b4ab-f6c8-45c4-9ab1-c81d6918a413"}
... skipping 61 lines ...
I0509 15:23:11.809013       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000000 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:23:11.849853       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:23:11.849910       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-6d67b4ab-f6c8-45c4-9ab1-c81d6918a413 lun 0 to node k8s-agentpool1-35373899-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-6d67b4ab-f6c8-45c4-9ab1-c81d6918a413:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-6d67b4ab-f6c8-45c4-9ab1-c81d6918a413  false 0})]
I0509 15:23:11.849956       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000000 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:23:11.893595       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:23:11.893648       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-6d67b4ab-f6c8-45c4-9ab1-c81d6918a413:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-6d67b4ab-f6c8-45c4-9ab1-c81d6918a413  false 0})])
I0509 15:23:12.090073       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-6d67b4ab-f6c8-45c4-9ab1-c81d6918a413:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-6d67b4ab-f6c8-45c4-9ab1-c81d6918a413  false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:23:26.304266       1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume
I0509 15:23:26.304295       1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-35373899-vmss000002","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-bd8731b6-27cb-4e43-bae4-f4cd0ac91c29"}
I0509 15:23:26.304419       1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-bd8731b6-27cb-4e43-bae4-f4cd0ac91c29 from node k8s-agentpool1-35373899-vmss000002
I0509 15:23:26.304439       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000002 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:23:26.380026       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:23:26.380130       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000002 has joined the cluster since the last VM cache refresh, refreshing the cache
... skipping 88 lines ...
I0509 15:24:09.879902       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000002 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:24:09.926777       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:24:09.926811       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-8c8fbe36-923a-43e1-afc0-a605094b28c6 lun 0 to node k8s-agentpool1-35373899-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-8c8fbe36-923a-43e1-afc0-a605094b28c6:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8c8fbe36-923a-43e1-afc0-a605094b28c6  false 0})]
I0509 15:24:09.926833       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000002 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:24:09.957948       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:24:09.958014       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-8c8fbe36-923a-43e1-afc0-a605094b28c6:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8c8fbe36-923a-43e1-afc0-a605094b28c6  false 0})])
I0509 15:24:10.185843       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-8c8fbe36-923a-43e1-afc0-a605094b28c6:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8c8fbe36-923a-43e1-afc0-a605094b28c6  false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:24:22.355749       1 azure_controller_vmss.go:210] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000000) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-6d67b4ab-f6c8-45c4-9ab1-c81d6918a413:pvc-6d67b4ab-f6c8-45c4-9ab1-c81d6918a413]) returned with <nil>
I0509 15:24:22.355821       1 azure_controller_common.go:365] azureDisk - detach disk(pvc-6d67b4ab-f6c8-45c4-9ab1-c81d6918a413, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-6d67b4ab-f6c8-45c4-9ab1-c81d6918a413) succeeded
I0509 15:24:22.355836       1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-6d67b4ab-f6c8-45c4-9ab1-c81d6918a413 from node k8s-agentpool1-35373899-vmss000000 successfully
I0509 15:24:22.355964       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.449764139 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-6d67b4ab-f6c8-45c4-9ab1-c81d6918a413" node="k8s-agentpool1-35373899-vmss000000" result_code="succeeded"
I0509 15:24:22.356022       1 utils.go:84] GRPC response: {}
I0509 15:24:25.306653       1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-8c8fbe36-923a-43e1-afc0-a605094b28c6 attached to node k8s-agentpool1-35373899-vmss000002.
... skipping 39 lines ...
I0509 15:24:47.629243       1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-545ed8a0-01a4-4215-9cf2-dbccc1a2268e to node k8s-agentpool1-35373899-vmss000001
I0509 15:24:47.702324       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:24:47.702364       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-991c6faa-740f-49c6-9d58-68c5abb3bafb lun 0 to node k8s-agentpool1-35373899-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-991c6faa-740f-49c6-9d58-68c5abb3bafb:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-991c6faa-740f-49c6-9d58-68c5abb3bafb  false 0})]
I0509 15:24:47.702381       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:24:47.746125       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:24:47.746179       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-991c6faa-740f-49c6-9d58-68c5abb3bafb:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-991c6faa-740f-49c6-9d58-68c5abb3bafb  false 0})])
I0509 15:24:47.950007       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-991c6faa-740f-49c6-9d58-68c5abb3bafb:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-991c6faa-740f-49c6-9d58-68c5abb3bafb  false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:24:58.058117       1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-991c6faa-740f-49c6-9d58-68c5abb3bafb attached to node k8s-agentpool1-35373899-vmss000001.
I0509 15:24:58.058167       1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-991c6faa-740f-49c6-9d58-68c5abb3bafb to node k8s-agentpool1-35373899-vmss000001 successfully
I0509 15:24:58.058200       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=10.562502949 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-991c6faa-740f-49c6-9d58-68c5abb3bafb" node="k8s-agentpool1-35373899-vmss000001" result_code="succeeded"
I0509 15:24:58.058250       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:24:58.058213       1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
I0509 15:24:58.099433       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:24:58.099486       1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-35373899-vmss000001, refreshing the cache(vmss: k8s-agentpool1-35373899-vmss, rg: kubetest-rxirza6l)
I0509 15:24:58.175273       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-545ed8a0-01a4-4215-9cf2-dbccc1a2268e lun 1 to node k8s-agentpool1-35373899-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-545ed8a0-01a4-4215-9cf2-dbccc1a2268e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-545ed8a0-01a4-4215-9cf2-dbccc1a2268e  false 1})]
I0509 15:24:58.175441       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:24:58.219238       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:24:58.219324       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-545ed8a0-01a4-4215-9cf2-dbccc1a2268e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-545ed8a0-01a4-4215-9cf2-dbccc1a2268e  false 1})])
I0509 15:24:58.423820       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-545ed8a0-01a4-4215-9cf2-dbccc1a2268e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-545ed8a0-01a4-4215-9cf2-dbccc1a2268e  false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:25:08.636319       1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-545ed8a0-01a4-4215-9cf2-dbccc1a2268e attached to node k8s-agentpool1-35373899-vmss000001.
I0509 15:25:08.636378       1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-545ed8a0-01a4-4215-9cf2-dbccc1a2268e to node k8s-agentpool1-35373899-vmss000001 successfully
I0509 15:25:08.636427       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=21.140709047 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-545ed8a0-01a4-4215-9cf2-dbccc1a2268e" node="k8s-agentpool1-35373899-vmss000001" result_code="succeeded"
I0509 15:25:08.636441       1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}}
I0509 15:25:08.645360       1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume
I0509 15:25:08.645381       1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-35373899-vmss000002","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-8c8fbe36-923a-43e1-afc0-a605094b28c6"}
... skipping 35 lines ...
I0509 15:25:48.498065       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000002 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:25:48.567688       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:25:48.567755       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-bc85c0d5-cf4c-4341-8214-9d0ce93e0a76 lun 0 to node k8s-agentpool1-35373899-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-bc85c0d5-cf4c-4341-8214-9d0ce93e0a76:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-bc85c0d5-cf4c-4341-8214-9d0ce93e0a76  false 0})]
I0509 15:25:48.567780       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000002 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:25:48.614325       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:25:48.614360       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-bc85c0d5-cf4c-4341-8214-9d0ce93e0a76:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-bc85c0d5-cf4c-4341-8214-9d0ce93e0a76  false 0})])
I0509 15:25:48.883060       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-bc85c0d5-cf4c-4341-8214-9d0ce93e0a76:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-bc85c0d5-cf4c-4341-8214-9d0ce93e0a76  false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:25:53.198137       1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume
I0509 15:25:53.198163       1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-35373899-vmss000001","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-991c6faa-740f-49c6-9d58-68c5abb3bafb"}
I0509 15:25:53.198268       1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-991c6faa-740f-49c6-9d58-68c5abb3bafb from node k8s-agentpool1-35373899-vmss000001
I0509 15:25:53.198281       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:25:53.205500       1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume
I0509 15:25:53.205523       1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-35373899-vmss000001","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-545ed8a0-01a4-4215-9cf2-dbccc1a2268e"}
... skipping 98 lines ...
I0509 15:26:58.467421       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000000 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:26:58.498651       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:26:58.498685       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-fba42097-47f1-4ff3-ae41-cf8a65235ba5 lun 0 to node k8s-agentpool1-35373899-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-fba42097-47f1-4ff3-ae41-cf8a65235ba5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-fba42097-47f1-4ff3-ae41-cf8a65235ba5  false 0})]
I0509 15:26:58.498700       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000000 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:26:58.535338       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:26:58.535400       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-fba42097-47f1-4ff3-ae41-cf8a65235ba5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-fba42097-47f1-4ff3-ae41-cf8a65235ba5  false 0})])
I0509 15:26:58.753579       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-fba42097-47f1-4ff3-ae41-cf8a65235ba5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-fba42097-47f1-4ff3-ae41-cf8a65235ba5  false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:27:01.348741       1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume
I0509 15:27:01.348778       1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":5368709120},"name":"pvc-2b305fde-ad9a-49e3-8d65-c95116574744","parameters":{"csi.storage.k8s.io/pv/name":"pvc-2b305fde-ad9a-49e3-8d65-c95116574744","csi.storage.k8s.io/pvc/name":"inline-volume-tester-vcvjl-my-volume-0","csi.storage.k8s.io/pvc/namespace":"ephemeral-8426"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]}
I0509 15:27:01.348991       1 controllerserver.go:174] begin to create azure disk(pvc-2b305fde-ad9a-49e3-8d65-c95116574744) account type(StandardSSD_LRS) rg(kubetest-rxirza6l) location(westeurope) size(5) diskZone() maxShares(0)
I0509 15:27:01.349018       1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-2b305fde-ad9a-49e3-8d65-c95116574744 StorageAccountType:StandardSSD_LRS Size:5
I0509 15:27:03.737348       1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-2b305fde-ad9a-49e3-8d65-c95116574744 StorageAccountType:StandardSSD_LRS Size:5
I0509 15:27:03.737414       1 controllerserver.go:258] create azure disk(pvc-2b305fde-ad9a-49e3-8d65-c95116574744) account type(StandardSSD_LRS) rg(kubetest-rxirza6l) location(westeurope) size(5) tags(map[kubernetes.io-created-for-pv-name:pvc-2b305fde-ad9a-49e3-8d65-c95116574744 kubernetes.io-created-for-pvc-name:inline-volume-tester-vcvjl-my-volume-0 kubernetes.io-created-for-pvc-namespace:ephemeral-8426]) successfully
... skipping 9 lines ...
I0509 15:27:04.588476       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:27:04.620009       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:27:04.620095       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-2b305fde-ad9a-49e3-8d65-c95116574744 lun 0 to node k8s-agentpool1-35373899-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-2b305fde-ad9a-49e3-8d65-c95116574744:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2b305fde-ad9a-49e3-8d65-c95116574744  false 0})]
I0509 15:27:04.620199       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:27:04.672240       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:27:04.672316       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-2b305fde-ad9a-49e3-8d65-c95116574744:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2b305fde-ad9a-49e3-8d65-c95116574744  false 0})])
I0509 15:27:04.880331       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-2b305fde-ad9a-49e3-8d65-c95116574744:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2b305fde-ad9a-49e3-8d65-c95116574744  false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:27:14.965521       1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-2b305fde-ad9a-49e3-8d65-c95116574744 attached to node k8s-agentpool1-35373899-vmss000001.
I0509 15:27:14.965581       1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-2b305fde-ad9a-49e3-8d65-c95116574744 to node k8s-agentpool1-35373899-vmss000001 successfully
I0509 15:27:14.965623       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=10.53163628 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-2b305fde-ad9a-49e3-8d65-c95116574744" node="k8s-agentpool1-35373899-vmss000001" result_code="succeeded"
I0509 15:27:14.965638       1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
I0509 15:27:14.986411       1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I0509 15:27:14.986436       1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-35373899-vmss000001","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-2b305fde-ad9a-49e3-8d65-c95116574744","csi.storage.k8s.io/pvc/name":"inline-volume-tester-vcvjl-my-volume-0","csi.storage.k8s.io/pvc/namespace":"ephemeral-8426","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652106914007-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-2b305fde-ad9a-49e3-8d65-c95116574744"}
... skipping 26 lines ...
I0509 15:27:25.581741       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000002 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:27:25.613410       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:27:25.613460       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-9e83c616-8203-45e1-964f-09bac554021c lun 0 to node k8s-agentpool1-35373899-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-9e83c616-8203-45e1-964f-09bac554021c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-9e83c616-8203-45e1-964f-09bac554021c  false 0})]
I0509 15:27:25.613498       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000002 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:27:25.668442       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:27:25.668490       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-9e83c616-8203-45e1-964f-09bac554021c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-9e83c616-8203-45e1-964f-09bac554021c  false 0})])
I0509 15:27:25.853613       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-9e83c616-8203-45e1-964f-09bac554021c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-9e83c616-8203-45e1-964f-09bac554021c  false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:27:35.974370       1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-9e83c616-8203-45e1-964f-09bac554021c attached to node k8s-agentpool1-35373899-vmss000002.
I0509 15:27:35.974427       1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-9e83c616-8203-45e1-964f-09bac554021c to node k8s-agentpool1-35373899-vmss000002 successfully
I0509 15:27:35.974498       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=10.457330865 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-9e83c616-8203-45e1-964f-09bac554021c" node="k8s-agentpool1-35373899-vmss000002" result_code="succeeded"
I0509 15:27:35.974532       1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
I0509 15:27:42.004750       1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume
I0509 15:27:42.004798       1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-35373899-vmss000000","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-fba42097-47f1-4ff3-ae41-cf8a65235ba5"}
... skipping 68 lines ...
I0509 15:28:26.226275       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000000 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:28:26.265811       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:28:26.265886       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-5fec3169-2155-4f46-9546-8dfe6c86c6c2 lun 0 to node k8s-agentpool1-35373899-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-5fec3169-2155-4f46-9546-8dfe6c86c6c2:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5fec3169-2155-4f46-9546-8dfe6c86c6c2  false 0})]
I0509 15:28:26.265913       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000000 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:28:26.314291       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:28:26.314370       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-5fec3169-2155-4f46-9546-8dfe6c86c6c2:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5fec3169-2155-4f46-9546-8dfe6c86c6c2  false 0})])
I0509 15:28:26.479921       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-5fec3169-2155-4f46-9546-8dfe6c86c6c2:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5fec3169-2155-4f46-9546-8dfe6c86c6c2  false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:28:41.616132       1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-5fec3169-2155-4f46-9546-8dfe6c86c6c2 attached to node k8s-agentpool1-35373899-vmss000000.
I0509 15:28:41.616168       1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-5fec3169-2155-4f46-9546-8dfe6c86c6c2 to node k8s-agentpool1-35373899-vmss000000 successfully
I0509 15:28:41.616202       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.509774755 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-5fec3169-2155-4f46-9546-8dfe6c86c6c2" node="k8s-agentpool1-35373899-vmss000000" result_code="succeeded"
I0509 15:28:41.616216       1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
I0509 15:28:42.074681       1 azure_controller_vmss.go:210] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-9e83c616-8203-45e1-964f-09bac554021c:pvc-9e83c616-8203-45e1-964f-09bac554021c]) returned with <nil>
I0509 15:28:42.074744       1 azure_controller_common.go:365] azureDisk - detach disk(pvc-9e83c616-8203-45e1-964f-09bac554021c, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-9e83c616-8203-45e1-964f-09bac554021c) succeeded
... skipping 25 lines ...
I0509 15:29:08.043426       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000000 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:29:08.109113       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:29:08.109168       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-0e22f966-55bc-44cb-bf3b-106eb3e81927 lun 1 to node k8s-agentpool1-35373899-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-0e22f966-55bc-44cb-bf3b-106eb3e81927:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0e22f966-55bc-44cb-bf3b-106eb3e81927  false 1})]
I0509 15:29:08.109203       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000000 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:29:08.162334       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:29:08.162412       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-0e22f966-55bc-44cb-bf3b-106eb3e81927:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0e22f966-55bc-44cb-bf3b-106eb3e81927  false 1})])
I0509 15:29:08.362415       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-0e22f966-55bc-44cb-bf3b-106eb3e81927:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0e22f966-55bc-44cb-bf3b-106eb3e81927  false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:29:18.515957       1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-0e22f966-55bc-44cb-bf3b-106eb3e81927 attached to node k8s-agentpool1-35373899-vmss000000.
I0509 15:29:18.516039       1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-0e22f966-55bc-44cb-bf3b-106eb3e81927 to node k8s-agentpool1-35373899-vmss000000 successfully
I0509 15:29:18.516069       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=10.633510444 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-0e22f966-55bc-44cb-bf3b-106eb3e81927" node="k8s-agentpool1-35373899-vmss000000" result_code="succeeded"
I0509 15:29:18.516081       1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}}
I0509 15:29:28.224438       1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume
I0509 15:29:28.224475       1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-35373899-vmss000001","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-2b305fde-ad9a-49e3-8d65-c95116574744"}
... skipping 94 lines ...
I0509 15:30:49.011529       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-353971cc-2afc-4655-81bb-29bc1679d28c lun 0 to node k8s-agentpool1-35373899-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-353971cc-2afc-4655-81bb-29bc1679d28c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-353971cc-2afc-4655-81bb-29bc1679d28c  false 0})]
I0509 15:30:49.042755       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:30:49.042814       1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-5094ca6e-9f28-49d5-a379-a8c777e5861e to node k8s-agentpool1-35373899-vmss000001
I0509 15:30:49.042828       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:30:49.104065       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:30:49.104113       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-353971cc-2afc-4655-81bb-29bc1679d28c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-353971cc-2afc-4655-81bb-29bc1679d28c  false 0})])
I0509 15:30:49.291268       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-353971cc-2afc-4655-81bb-29bc1679d28c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-353971cc-2afc-4655-81bb-29bc1679d28c  false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:30:59.384836       1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-353971cc-2afc-4655-81bb-29bc1679d28c attached to node k8s-agentpool1-35373899-vmss000001.
I0509 15:30:59.384873       1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-353971cc-2afc-4655-81bb-29bc1679d28c to node k8s-agentpool1-35373899-vmss000001 successfully
I0509 15:30:59.384904       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=10.460484586 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-353971cc-2afc-4655-81bb-29bc1679d28c" node="k8s-agentpool1-35373899-vmss000001" result_code="succeeded"
I0509 15:30:59.384952       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:30:59.384934       1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
I0509 15:30:59.427567       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:30:59.427610       1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-35373899-vmss000001, refreshing the cache(vmss: k8s-agentpool1-35373899-vmss, rg: kubetest-rxirza6l)
I0509 15:30:59.527431       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-5094ca6e-9f28-49d5-a379-a8c777e5861e lun 1 to node k8s-agentpool1-35373899-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-5094ca6e-9f28-49d5-a379-a8c777e5861e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5094ca6e-9f28-49d5-a379-a8c777e5861e  false 1})]
I0509 15:30:59.527477       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:30:59.582557       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:30:59.582599       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-5094ca6e-9f28-49d5-a379-a8c777e5861e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5094ca6e-9f28-49d5-a379-a8c777e5861e  false 1})])
I0509 15:30:59.806441       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-5094ca6e-9f28-49d5-a379-a8c777e5861e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5094ca6e-9f28-49d5-a379-a8c777e5861e  false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:31:14.921275       1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-5094ca6e-9f28-49d5-a379-a8c777e5861e attached to node k8s-agentpool1-35373899-vmss000001.
I0509 15:31:14.921340       1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-5094ca6e-9f28-49d5-a379-a8c777e5861e to node k8s-agentpool1-35373899-vmss000001 successfully
I0509 15:31:14.921409       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=25.972571152 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-5094ca6e-9f28-49d5-a379-a8c777e5861e" node="k8s-agentpool1-35373899-vmss000001" result_code="succeeded"
I0509 15:31:14.921439       1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}}
I0509 15:31:30.318244       1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume
I0509 15:31:30.318297       1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-35373899-vmss000001","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-5094ca6e-9f28-49d5-a379-a8c777e5861e"}
... skipping 37 lines ...
I0509 15:32:06.433306       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:32:06.433335       1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-35373899-vmss000001, refreshing the cache(vmss: k8s-agentpool1-35373899-vmss, rg: kubetest-rxirza6l)
I0509 15:32:06.586609       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-5094ca6e-9f28-49d5-a379-a8c777e5861e lun 0 to node k8s-agentpool1-35373899-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-5094ca6e-9f28-49d5-a379-a8c777e5861e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5094ca6e-9f28-49d5-a379-a8c777e5861e  false 0})]
I0509 15:32:06.586672       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:32:06.654277       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:32:06.654323       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-5094ca6e-9f28-49d5-a379-a8c777e5861e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5094ca6e-9f28-49d5-a379-a8c777e5861e  false 0})])
I0509 15:32:06.883071       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-5094ca6e-9f28-49d5-a379-a8c777e5861e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5094ca6e-9f28-49d5-a379-a8c777e5861e  false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:32:07.380413       1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I0509 15:32:07.380447       1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-35373899-vmss000001","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-353971cc-2afc-4655-81bb-29bc1679d28c","csi.storage.k8s.io/pvc/name":"test.csi.azure.comz4xhn","csi.storage.k8s.io/pvc/namespace":"multivolume-5653","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652106914007-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-353971cc-2afc-4655-81bb-29bc1679d28c"}
I0509 15:32:07.435715       1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-353971cc-2afc-4655-81bb-29bc1679d28c to node k8s-agentpool1-35373899-vmss000001.
I0509 15:32:07.435753       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:32:07.470602       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:32:07.470653       1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-35373899-vmss000001, refreshing the cache(vmss: k8s-agentpool1-35373899-vmss, rg: kubetest-rxirza6l)
... skipping 5 lines ...
I0509 15:32:16.986049       1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
I0509 15:32:17.036531       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:32:17.036599       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-353971cc-2afc-4655-81bb-29bc1679d28c lun 1 to node k8s-agentpool1-35373899-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-353971cc-2afc-4655-81bb-29bc1679d28c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-353971cc-2afc-4655-81bb-29bc1679d28c  false 1})]
I0509 15:32:17.036618       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:32:17.128113       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:32:17.128168       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-353971cc-2afc-4655-81bb-29bc1679d28c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-353971cc-2afc-4655-81bb-29bc1679d28c  false 1})])
I0509 15:32:17.362464       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-353971cc-2afc-4655-81bb-29bc1679d28c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-353971cc-2afc-4655-81bb-29bc1679d28c  false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:32:27.472696       1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-353971cc-2afc-4655-81bb-29bc1679d28c attached to node k8s-agentpool1-35373899-vmss000001.
I0509 15:32:27.472758       1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-353971cc-2afc-4655-81bb-29bc1679d28c to node k8s-agentpool1-35373899-vmss000001 successfully
I0509 15:32:27.472808       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=20.037076276 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-353971cc-2afc-4655-81bb-29bc1679d28c" node="k8s-agentpool1-35373899-vmss000001" result_code="succeeded"
I0509 15:32:27.472821       1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}}
I0509 15:32:52.306952       1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume
I0509 15:32:52.306983       1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-35373899-vmss000001","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-353971cc-2afc-4655-81bb-29bc1679d28c"}
... skipping 59 lines ...
I0509 15:34:21.255613       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:34:21.330138       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:34:21.330176       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-e19c463e-2efd-4b94-a68f-b26a9d88af35 lun 0 to node k8s-agentpool1-35373899-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-e19c463e-2efd-4b94-a68f-b26a9d88af35:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e19c463e-2efd-4b94-a68f-b26a9d88af35  false 0})]
I0509 15:34:21.330229       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:34:21.385789       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:34:21.385838       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-e19c463e-2efd-4b94-a68f-b26a9d88af35:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e19c463e-2efd-4b94-a68f-b26a9d88af35  false 0})])
I0509 15:34:21.592534       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-e19c463e-2efd-4b94-a68f-b26a9d88af35:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e19c463e-2efd-4b94-a68f-b26a9d88af35  false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:34:31.750523       1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-e19c463e-2efd-4b94-a68f-b26a9d88af35 attached to node k8s-agentpool1-35373899-vmss000001.
I0509 15:34:31.750567       1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-e19c463e-2efd-4b94-a68f-b26a9d88af35 to node k8s-agentpool1-35373899-vmss000001 successfully
I0509 15:34:31.750603       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=10.705192801 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-e19c463e-2efd-4b94-a68f-b26a9d88af35" node="k8s-agentpool1-35373899-vmss000001" result_code="succeeded"
I0509 15:34:31.750618       1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
I0509 15:34:31.764468       1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I0509 15:34:31.764498       1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-35373899-vmss000001","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-e19c463e-2efd-4b94-a68f-b26a9d88af35","csi.storage.k8s.io/pvc/name":"test.csi.azure.com7qpx8","csi.storage.k8s.io/pvc/namespace":"provisioning-6205","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652106914007-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-e19c463e-2efd-4b94-a68f-b26a9d88af35"}
... skipping 61 lines ...
I0509 15:35:28.354372       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:35:28.429990       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:35:28.430067       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-5a788a77-66ea-48c8-a06e-4c9691eae894 lun 0 to node k8s-agentpool1-35373899-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-5a788a77-66ea-48c8-a06e-4c9691eae894:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5a788a77-66ea-48c8-a06e-4c9691eae894  false 0})]
I0509 15:35:28.430090       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:35:28.481130       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:35:28.481221       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-5a788a77-66ea-48c8-a06e-4c9691eae894:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5a788a77-66ea-48c8-a06e-4c9691eae894  false 0})])
I0509 15:35:28.704157       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-5a788a77-66ea-48c8-a06e-4c9691eae894:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5a788a77-66ea-48c8-a06e-4c9691eae894  false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:35:38.849072       1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-5a788a77-66ea-48c8-a06e-4c9691eae894 attached to node k8s-agentpool1-35373899-vmss000001.
I0509 15:35:38.849124       1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-5a788a77-66ea-48c8-a06e-4c9691eae894 to node k8s-agentpool1-35373899-vmss000001 successfully
I0509 15:35:38.849156       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=10.607177448 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-5a788a77-66ea-48c8-a06e-4c9691eae894" node="k8s-agentpool1-35373899-vmss000001" result_code="succeeded"
I0509 15:35:38.849185       1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
I0509 15:35:56.434061       1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume
I0509 15:35:56.434092       1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-35373899-vmss000001","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-5a788a77-66ea-48c8-a06e-4c9691eae894"}
... skipping 36 lines ...
I0509 15:36:35.377789       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:36:35.408326       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:36:35.408381       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-833056d9-6b7b-4c53-b22a-eb077338ccb5 lun 0 to node k8s-agentpool1-35373899-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-833056d9-6b7b-4c53-b22a-eb077338ccb5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-833056d9-6b7b-4c53-b22a-eb077338ccb5  false 0})]
I0509 15:36:35.408398       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:36:35.438522       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:36:35.438576       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-833056d9-6b7b-4c53-b22a-eb077338ccb5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-833056d9-6b7b-4c53-b22a-eb077338ccb5  false 0})])
I0509 15:36:35.609798       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-833056d9-6b7b-4c53-b22a-eb077338ccb5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-833056d9-6b7b-4c53-b22a-eb077338ccb5  false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:36:45.724743       1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-833056d9-6b7b-4c53-b22a-eb077338ccb5 attached to node k8s-agentpool1-35373899-vmss000001.
I0509 15:36:45.724776       1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-833056d9-6b7b-4c53-b22a-eb077338ccb5 to node k8s-agentpool1-35373899-vmss000001 successfully
I0509 15:36:45.724806       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=10.555243829 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-833056d9-6b7b-4c53-b22a-eb077338ccb5" node="k8s-agentpool1-35373899-vmss000001" result_code="succeeded"
I0509 15:36:45.724820       1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
I0509 15:36:45.737277       1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I0509 15:36:45.737345       1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-35373899-vmss000001","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-833056d9-6b7b-4c53-b22a-eb077338ccb5","csi.storage.k8s.io/pvc/name":"pvc-azuredisk","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652106914007-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-833056d9-6b7b-4c53-b22a-eb077338ccb5"}
... skipping 22 lines ...
I0509 15:36:56.144874       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000000 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:36:56.190958       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:36:56.191009       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-e76e3f06-40bd-4d36-90e9-ed42403e2407 lun 0 to node k8s-agentpool1-35373899-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-e76e3f06-40bd-4d36-90e9-ed42403e2407:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e76e3f06-40bd-4d36-90e9-ed42403e2407  false 0})]
I0509 15:36:56.191030       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000000 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:36:56.234010       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:36:56.234061       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-e76e3f06-40bd-4d36-90e9-ed42403e2407:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e76e3f06-40bd-4d36-90e9-ed42403e2407  false 0})])
I0509 15:36:56.434198       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-e76e3f06-40bd-4d36-90e9-ed42403e2407:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e76e3f06-40bd-4d36-90e9-ed42403e2407  false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:37:06.580147       1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-e76e3f06-40bd-4d36-90e9-ed42403e2407 attached to node k8s-agentpool1-35373899-vmss000000.
I0509 15:37:06.580202       1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-e76e3f06-40bd-4d36-90e9-ed42403e2407 to node k8s-agentpool1-35373899-vmss000000 successfully
I0509 15:37:06.580256       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=10.465390636 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-e76e3f06-40bd-4d36-90e9-ed42403e2407" node="k8s-agentpool1-35373899-vmss000000" result_code="succeeded"
I0509 15:37:06.580282       1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
I0509 15:37:16.120934       1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume
I0509 15:37:16.120966       1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":10737418240},"name":"pvc-d88b66c0-f989-4ee0-8756-dc93b4133bcc","parameters":{"csi.storage.k8s.io/pv/name":"pvc-d88b66c0-f989-4ee0-8756-dc93b4133bcc","csi.storage.k8s.io/pvc/name":"persistent-storage-statefulset-azuredisk-nonroot-0","csi.storage.k8s.io/pvc/namespace":"default","skuName":"StandardSSD_LRS"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]}
... skipping 12 lines ...
I0509 15:37:19.308080       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000002 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:37:19.340985       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:37:19.341040       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-d88b66c0-f989-4ee0-8756-dc93b4133bcc lun 0 to node k8s-agentpool1-35373899-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-d88b66c0-f989-4ee0-8756-dc93b4133bcc:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-d88b66c0-f989-4ee0-8756-dc93b4133bcc  false 0})]
I0509 15:37:19.341058       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000002 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:37:19.384177       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:37:19.384225       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-d88b66c0-f989-4ee0-8756-dc93b4133bcc:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-d88b66c0-f989-4ee0-8756-dc93b4133bcc  false 0})])
I0509 15:37:19.553657       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-d88b66c0-f989-4ee0-8756-dc93b4133bcc:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-d88b66c0-f989-4ee0-8756-dc93b4133bcc  false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:37:29.653499       1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-d88b66c0-f989-4ee0-8756-dc93b4133bcc attached to node k8s-agentpool1-35373899-vmss000002.
I0509 15:37:29.653536       1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-d88b66c0-f989-4ee0-8756-dc93b4133bcc to node k8s-agentpool1-35373899-vmss000002 successfully
I0509 15:37:29.653569       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=10.377916986 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-d88b66c0-f989-4ee0-8756-dc93b4133bcc" node="k8s-agentpool1-35373899-vmss000002" result_code="succeeded"
I0509 15:37:29.653583       1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
I0509 15:37:29.663631       1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I0509 15:37:29.663651       1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-35373899-vmss000002","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-d88b66c0-f989-4ee0-8756-dc93b4133bcc","csi.storage.k8s.io/pvc/name":"persistent-storage-statefulset-azuredisk-nonroot-0","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652106914007-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-d88b66c0-f989-4ee0-8756-dc93b4133bcc"}
... skipping 22 lines ...
I0509 15:37:44.926324       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:37:44.980783       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:37:44.980852       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-dbaeaf5d-2426-4f11-9107-89390a1a672f lun 1 to node k8s-agentpool1-35373899-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-dbaeaf5d-2426-4f11-9107-89390a1a672f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-dbaeaf5d-2426-4f11-9107-89390a1a672f  false 1})]
I0509 15:37:44.980869       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:37:45.031651       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:37:45.031739       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-dbaeaf5d-2426-4f11-9107-89390a1a672f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-dbaeaf5d-2426-4f11-9107-89390a1a672f  false 1})])
I0509 15:37:45.244806       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-dbaeaf5d-2426-4f11-9107-89390a1a672f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-dbaeaf5d-2426-4f11-9107-89390a1a672f  false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:37:55.372457       1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-dbaeaf5d-2426-4f11-9107-89390a1a672f attached to node k8s-agentpool1-35373899-vmss000001.
I0509 15:37:55.372491       1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-dbaeaf5d-2426-4f11-9107-89390a1a672f to node k8s-agentpool1-35373899-vmss000001 successfully
I0509 15:37:55.372521       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=10.47984536 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-dbaeaf5d-2426-4f11-9107-89390a1a672f" node="k8s-agentpool1-35373899-vmss000001" result_code="succeeded"
I0509 15:37:55.372535       1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}}
I0509 15:38:01.779482       1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume
I0509 15:38:01.779511       1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":10737418240},"name":"pvc-b9addfdf-2c51-4c64-83ba-9a1216d59662","parameters":{"csi.storage.k8s.io/pv/name":"pvc-b9addfdf-2c51-4c64-83ba-9a1216d59662","csi.storage.k8s.io/pvc/name":"daemonset-azuredisk-ephemeral-4jb6x-azuredisk","csi.storage.k8s.io/pvc/namespace":"default","skuName":"StandardSSD_LRS"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]}
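A note on the request above: `capacity_range.required_bytes` is 10737418240, which is exactly 10 GiB; this is where the `"requestedsizegib":"10"` field seen in later ControllerPublishVolume volume contexts comes from. CSI drivers commonly round the requested byte count up to whole GiB before provisioning. A hedged sketch of that conversion (`roundUpGiB` is an illustrative helper, not the driver's actual function):

```go
package main

import "fmt"

// GiB is the number of bytes in one gibibyte.
const GiB int64 = 1024 * 1024 * 1024

// roundUpGiB converts a byte count to GiB, rounding up, mirroring the
// round-up behavior CSI drivers typically apply to required_bytes.
func roundUpGiB(bytes int64) int64 {
	return (bytes + GiB - 1) / GiB
}

func main() {
	fmt.Println(roundUpGiB(10737418240)) // 10 (exactly 10 GiB)
	fmt.Println(roundUpGiB(10737418241)) // 11 (one byte over rounds up)
}
```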
... skipping 53 lines ...
I0509 15:38:05.213429       1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-b9addfdf-2c51-4c64-83ba-9a1216d59662 lun 2 to node k8s-agentpool1-35373899-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-b9addfdf-2c51-4c64-83ba-9a1216d59662:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-b9addfdf-2c51-4c64-83ba-9a1216d59662  false 2})]
I0509 15:38:05.213449       1 azure_vmss_cache.go:332] Node k8s-agentpool1-35373899-vmss000001 has joined the cluster since the last VM cache refresh, refreshing the cache
I0509 15:38:05.242380       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:38:05.242520       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-b9addfdf-2c51-4c64-83ba-9a1216d59662:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-b9addfdf-2c51-4c64-83ba-9a1216d59662  false 2})])
I0509 15:38:05.291261       1 azure_backoff.go:101] VirtualMachinesClient.List(kubetest-rxirza6l) success
I0509 15:38:05.291311       1 azure_controller_vmss.go:109] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-b1f5193d-415d-4e6e-91e9-f73b174a132e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-b1f5193d-415d-4e6e-91e9-f73b174a132e  false 1})])
I0509 15:38:05.335831       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-dc6c2f9b-8311-414b-88d3-d0c154c09979:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-dc6c2f9b-8311-414b-88d3-d0c154c09979  false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:38:05.422780       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-b9addfdf-2c51-4c64-83ba-9a1216d59662:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-b9addfdf-2c51-4c64-83ba-9a1216d59662  false 2})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:38:05.464261       1 azure_controller_vmss.go:121] azureDisk - update(kubetest-rxirza6l): vm(k8s-agentpool1-35373899-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-rxirza6l/providers/microsoft.compute/disks/pvc-b1f5193d-415d-4e6e-91e9-f73b174a132e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-b1f5193d-415d-4e6e-91e9-f73b174a132e  false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0509 15:38:20.457864       1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-dc6c2f9b-8311-414b-88d3-d0c154c09979 attached to node k8s-agentpool1-35373899-vmss000000.
I0509 15:38:20.457915       1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-dc6c2f9b-8311-414b-88d3-d0c154c09979 to node k8s-agentpool1-35373899-vmss000000 successfully
I0509 15:38:20.457953       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.580239758 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-rxirza6l" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-dc6c2f9b-8311-414b-88d3-d0c154c09979" node="k8s-agentpool1-35373899-vmss000000" result_code="succeeded"
I0509 15:38:20.457965       1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}}
I0509 15:38:20.579010       1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-b9addfdf-2c51-4c64-83ba-9a1216d59662 attached to node k8s-agentpool1-35373899-vmss000001.
I0509 15:38:20.579049       1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rxirza6l/providers/Microsoft.Compute/disks/pvc-b9addfdf-2c51-4c64-83ba-9a1216d59662 to node k8s-agentpool1-35373899-vmss000001 successfully
... skipping 29 lines ...
Platform: linux/amd64
Topology Key: topology.test.csi.azure.com/zone

Streaming logs below:
I0509 14:35:04.339008       1 azuredisk.go:168] driver userAgent: test.csi.azure.com/v1.18.0-75d73be167fd80191bedf5b1785eae6fb32bab5d gc/go1.18.1 (amd64-linux) e2e-test
I0509 14:35:04.339492       1 azure_disk_utils.go:159] reading cloud config from secret kube-system/azure-cloud-provider
W0509 14:35:04.358397       1 azure_disk_utils.go:166] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found
I0509 14:35:04.358424       1 azure_disk_utils.go:171] could not read cloud config from secret kube-system/azure-cloud-provider
I0509 14:35:04.358432       1 azure_disk_utils.go:181] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json
I0509 14:35:04.358460       1 azure_disk_utils.go:189] read cloud config from file: /etc/kubernetes/azure.json successfully
I0509 14:35:04.360669       1 azure_auth.go:245] Using AzurePublicCloud environment
I0509 14:35:04.360712       1 azure_auth.go:96] azure: using managed identity extension to retrieve access token
I0509 14:35:04.360723       1 azure_auth.go:102] azure: using User Assigned MSI ID to retrieve access token
I0509 14:35:04.360796       1 azure_auth.go:113] azure: User Assigned MSI ID is client ID. Resource ID parsing error: %+vparsing failed for c191756c-7302-4f68-9385-ab9a686214e3. Invalid resource Id format
I0509 14:35:04.360858       1 azure.go:763] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000
I0509 14:35:04.360938       1 azure_interfaceclient.go:70] Azure InterfacesClient (read ops) using rate limit config: QPS=6, bucket=20
I0509 14:35:04.360948       1 azure_interfaceclient.go:73] Azure InterfacesClient (write ops) using rate limit config: QPS=100, bucket=1000
I0509 14:35:04.360962       1 azure_vmsizeclient.go:68] Azure VirtualMachineSizesClient (read ops) using rate limit config: QPS=6, bucket=20
I0509 14:35:04.360967       1 azure_vmsizeclient.go:71] Azure VirtualMachineSizesClient (write ops) using rate limit config: QPS=100, bucket=1000
I0509 14:35:04.361005       1 azure_storageaccountclient.go:69] Azure StorageAccountClient (read ops) using rate limit config: QPS=6, bucket=20
... skipping 2467 lines ...
Platform: linux/amd64
Topology Key: topology.test.csi.azure.com/zone

Streaming logs below:
I0509 14:35:08.165458       1 azuredisk.go:168] driver userAgent: test.csi.azure.com/v1.18.0-75d73be167fd80191bedf5b1785eae6fb32bab5d gc/go1.18.1 (amd64-linux) e2e-test
I0509 14:35:08.165845       1 azure_disk_utils.go:159] reading cloud config from secret kube-system/azure-cloud-provider
W0509 14:35:08.195055       1 azure_disk_utils.go:166] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found
I0509 14:35:08.195075       1 azure_disk_utils.go:171] could not read cloud config from secret kube-system/azure-cloud-provider
I0509 14:35:08.195098       1 azure_disk_utils.go:181] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json
I0509 14:35:08.195133       1 azure_disk_utils.go:189] read cloud config from file: /etc/kubernetes/azure.json successfully
I0509 14:35:08.195876       1 azure_auth.go:245] Using AzurePublicCloud environment
I0509 14:35:08.195917       1 azure_auth.go:96] azure: using managed identity extension to retrieve access token
I0509 14:35:08.195930       1 azure_auth.go:102] azure: using User Assigned MSI ID to retrieve access token
I0509 14:35:08.195954       1 azure_auth.go:113] azure: User Assigned MSI ID is client ID. Resource ID parsing error: %+vparsing failed for c191756c-7302-4f68-9385-ab9a686214e3. Invalid resource Id format
I0509 14:35:08.196041       1 azure.go:763] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000
I0509 14:35:08.196083       1 azure_interfaceclient.go:70] Azure InterfacesClient (read ops) using rate limit config: QPS=6, bucket=20
I0509 14:35:08.196101       1 azure_interfaceclient.go:73] Azure InterfacesClient (write ops) using rate limit config: QPS=100, bucket=1000
I0509 14:35:08.196121       1 azure_vmsizeclient.go:68] Azure VirtualMachineSizesClient (read ops) using rate limit config: QPS=6, bucket=20
I0509 14:35:08.196132       1 azure_vmsizeclient.go:71] Azure VirtualMachineSizesClient (write ops) using rate limit config: QPS=100, bucket=1000
I0509 14:35:08.196152       1 azure_storageaccountclient.go:69] Azure StorageAccountClient (read ops) using rate limit config: QPS=6, bucket=20
... skipping 2907 lines ...
Platform: linux/amd64
Topology Key: topology.test.csi.azure.com/zone

Streaming logs below:
I0509 14:35:11.248374       1 azuredisk.go:168] driver userAgent: test.csi.azure.com/v1.18.0-75d73be167fd80191bedf5b1785eae6fb32bab5d gc/go1.18.1 (amd64-linux) e2e-test
I0509 14:35:11.248934       1 azure_disk_utils.go:159] reading cloud config from secret kube-system/azure-cloud-provider
W0509 14:35:11.281861       1 azure_disk_utils.go:166] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found
I0509 14:35:11.281883       1 azure_disk_utils.go:171] could not read cloud config from secret kube-system/azure-cloud-provider
I0509 14:35:11.281894       1 azure_disk_utils.go:181] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json
I0509 14:35:11.281923       1 azure_disk_utils.go:189] read cloud config from file: /etc/kubernetes/azure.json successfully
I0509 14:35:11.282658       1 azure_auth.go:245] Using AzurePublicCloud environment
I0509 14:35:11.282691       1 azure_auth.go:96] azure: using managed identity extension to retrieve access token
I0509 14:35:11.282699       1 azure_auth.go:102] azure: using User Assigned MSI ID to retrieve access token
I0509 14:35:11.282744       1 azure_auth.go:113] azure: User Assigned MSI ID is client ID. Resource ID parsing error: %+vparsing failed for c191756c-7302-4f68-9385-ab9a686214e3. Invalid resource Id format
I0509 14:35:11.282795       1 azure.go:763] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000
I0509 14:35:11.282872       1 azure_interfaceclient.go:70] Azure InterfacesClient (read ops) using rate limit config: QPS=6, bucket=20
I0509 14:35:11.282891       1 azure_interfaceclient.go:73] Azure InterfacesClient (write ops) using rate limit config: QPS=100, bucket=1000
I0509 14:35:11.282921       1 azure_vmsizeclient.go:68] Azure VirtualMachineSizesClient (read ops) using rate limit config: QPS=6, bucket=20
I0509 14:35:11.282968       1 azure_vmsizeclient.go:71] Azure VirtualMachineSizesClient (write ops) using rate limit config: QPS=100, bucket=1000
I0509 14:35:11.283015       1 azure_storageaccountclient.go:69] Azure StorageAccountClient (read ops) using rate limit config: QPS=6, bucket=20
... skipping 63 lines ...
Platform: linux/amd64
Topology Key: topology.test.csi.azure.com/zone

Streaming logs below:
I0509 14:35:05.890153       1 azuredisk.go:168] driver userAgent: test.csi.azure.com/v1.18.0-75d73be167fd80191bedf5b1785eae6fb32bab5d gc/go1.18.1 (amd64-linux) e2e-test
I0509 14:35:05.890629       1 azure_disk_utils.go:159] reading cloud config from secret kube-system/azure-cloud-provider
W0509 14:35:05.920790       1 azure_disk_utils.go:166] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found
I0509 14:35:05.920824       1 azure_disk_utils.go:171] could not read cloud config from secret kube-system/azure-cloud-provider
I0509 14:35:05.920838       1 azure_disk_utils.go:181] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json
I0509 14:35:05.920880       1 azure_disk_utils.go:189] read cloud config from file: /etc/kubernetes/azure.json successfully
I0509 14:35:05.923061       1 azure_auth.go:245] Using AzurePublicCloud environment
I0509 14:35:05.923117       1 azure_auth.go:96] azure: using managed identity extension to retrieve access token
I0509 14:35:05.923125       1 azure_auth.go:102] azure: using User Assigned MSI ID to retrieve access token
I0509 14:35:05.923188       1 azure_auth.go:113] azure: User Assigned MSI ID is client ID. Resource ID parsing error: %+vparsing failed for c191756c-7302-4f68-9385-ab9a686214e3. Invalid resource Id format
I0509 14:35:05.923268       1 azure.go:763] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000
I0509 14:35:05.923327       1 azure_interfaceclient.go:70] Azure InterfacesClient (read ops) using rate limit config: QPS=6, bucket=20
I0509 14:35:05.923335       1 azure_interfaceclient.go:73] Azure InterfacesClient (write ops) using rate limit config: QPS=100, bucket=1000
I0509 14:35:05.923349       1 azure_vmsizeclient.go:68] Azure VirtualMachineSizesClient (read ops) using rate limit config: QPS=6, bucket=20
I0509 14:35:05.923355       1 azure_vmsizeclient.go:71] Azure VirtualMachineSizesClient (write ops) using rate limit config: QPS=100, bucket=1000
I0509 14:35:05.923378       1 azure_storageaccountclient.go:69] Azure StorageAccountClient (read ops) using rate limit config: QPS=6, bucket=20
... skipping 3377 lines ...
I0509 15:38:22.516872       1 mount_linux.go:183] Mounting cmd (mount) with arguments ( -o bind,remount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-b9addfdf-2c51-4c64-83ba-9a1216d59662/globalmount /var/lib/kubelet/pods/bbbf7965-ce49-414b-b089-c218ba914850/volumes/kubernetes.io~csi/pvc-b9addfdf-2c51-4c64-83ba-9a1216d59662/mount)
I0509 15:38:22.518320       1 nodeserver.go:286] NodePublishVolume: mount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-b9addfdf-2c51-4c64-83ba-9a1216d59662/globalmount at /var/lib/kubelet/pods/bbbf7965-ce49-414b-b089-c218ba914850/volumes/kubernetes.io~csi/pvc-b9addfdf-2c51-4c64-83ba-9a1216d59662/mount successfully
I0509 15:38:22.518353       1 utils.go:84] GRPC response: {}
print out csi-test-node-win logs ...
======================================================================================
No resources found in kube-system namespace.
make: *** [Makefile:260: e2e-test] Error 1
2022/05/09 15:38:36 process.go:155: Step 'make e2e-test' finished in 1h15m2.422758308s
2022/05/09 15:38:36 aksengine_helpers.go:426: downloading /root/tmp487086944/log-dump.sh from https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump.sh
2022/05/09 15:38:36 util.go:71: curl https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump.sh
2022/05/09 15:38:36 process.go:153: Running: chmod +x /root/tmp487086944/log-dump.sh
2022/05/09 15:38:36 process.go:155: Step 'chmod +x /root/tmp487086944/log-dump.sh' finished in 1.69161ms
2022/05/09 15:38:36 aksengine_helpers.go:426: downloading /root/tmp487086944/log-dump-daemonset.yaml from https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump-daemonset.yaml
... skipping 64 lines ...
ssh key file /root/.ssh/id_rsa does not exist. Exiting.
2022/05/09 15:39:54 process.go:155: Step 'bash -c /root/tmp487086944/win-ci-logs-collector.sh kubetest-rxirza6l.westeurope.cloudapp.azure.com /root/tmp487086944 /root/.ssh/id_rsa' finished in 4.727542ms
2022/05/09 15:39:54 aksengine.go:1141: Deleting resource group: kubetest-rxirza6l.
2022/05/09 15:48:07 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2022/05/09 15:48:07 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
2022/05/09 15:48:07 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 322.375702ms
2022/05/09 15:48:07 main.go:331: Something went wrong: encountered 1 errors: [error during make e2e-test: exit status 2]
+ EXIT_VALUE=1
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
540ea2e4d812
... skipping 4 lines ...