PR mukhoakash: [V2] fix: Fix for intermittent failure for NodeStageVolume request
Result ABORTED
Tests 0 failed / 14 succeeded
Started 2022-06-24 21:19
Elapsed 49m23s
Revision 8ae2cee43bd91d2a602aafb1c4bc4da73576fd8b
Refs 1395

No Test Failures!


14 Passed Tests

93 Skipped Tests

Error lines from build-log.txt

... skipping 94 lines ...

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
 94 11156   94 10505    0     0   162k      0 --:--:-- --:--:-- --:--:--  160k
100 11156  100 11156    0     0   170k      0 --:--:-- --:--:-- --:--:--  167k
Downloading https://get.helm.sh/helm-v3.9.0-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
docker pull k8sprow.azurecr.io/azuredisk-csi:latest-v2-5f5939f86db107e671b4778e00fd0672597e49a8 || make container-all push-manifest
Error response from daemon: manifest for k8sprow.azurecr.io/azuredisk-csi:latest-v2-5f5939f86db107e671b4778e00fd0672597e49a8 not found: manifest unknown: manifest tagged by "latest-v2-5f5939f86db107e671b4778e00fd0672597e49a8" is not found
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver'
CGO_ENABLED=0 GOOS=windows go build -a -ldflags "-X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.driverVersion=latest-v2-5f5939f86db107e671b4778e00fd0672597e49a8 -X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.gitCommit=5f5939f86db107e671b4778e00fd0672597e49a8 -X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.buildDate=2022-06-24T21:26:27Z -extldflags "-static"" -tags azurediskv2 -mod vendor -o _output/amd64/azurediskpluginv2.exe ./pkg/azurediskplugin
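The -X flags in the build line above overwrite package-level string variables at link time, which is how the driver version and commit end up baked into the binary. A minimal sketch of that pattern follows; the file name, package layout, and helper are illustrative assumptions, not the driver's actual source:

    // version.go (hypothetical): the defaults below are replaced at link time via
    //   go build -ldflags "-X <pkg>.driverVersion=... -X <pkg>.gitCommit=... -X <pkg>.buildDate=..."
    package azuredisk

    import "fmt"

    var (
        driverVersion = "N/A"
        gitCommit     = "N/A"
        buildDate     = "N/A"
    )

    // VersionString returns a human-readable build identifier.
    func VersionString() string {
        return fmt.Sprintf("version %s, commit %s, built %s", driverVersion, gitCommit, buildDate)
    }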
docker buildx rm container-builder || true
error: no builder "container-builder" found
docker buildx create --use --name=container-builder
container-builder
# enable qemu for arm64 build
# https://github.com/docker/buildx/issues/464#issuecomment-741507760
docker run --privileged --rm tonistiigi/binfmt --uninstall qemu-aarch64
Unable to find image 'tonistiigi/binfmt:latest' locally
... skipping 701 lines ...
         }
      }
   ]
}
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver'
docker pull k8sprow.azurecr.io/azdiskschedulerextender-csi:latest-v2-5f5939f86db107e671b4778e00fd0672597e49a8 || make azdiskschedulerextender-all push-manifest-azdiskschedulerextender
Error response from daemon: manifest for k8sprow.azurecr.io/azdiskschedulerextender-csi:latest-v2-5f5939f86db107e671b4778e00fd0672597e49a8 not found: manifest unknown: manifest tagged by "latest-v2-5f5939f86db107e671b4778e00fd0672597e49a8" is not found
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver'
docker buildx rm container-builder || true
docker buildx create --use --name=container-builder
container-builder
# enable qemu for arm64 build
# https://github.com/docker/buildx/issues/464#issuecomment-741507760
... skipping 856 lines ...
                    type: string
                type: object
                oneOf:
                - required: ["persistentVolumeClaimName"]
                - required: ["volumeSnapshotContentName"]
              volumeSnapshotClassName:
                description: 'VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default Volume SnapshotClasses: one default per CSI Driver. If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exist for a given CSI Driver and more than one have been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field.'
                type: string
            required:
            - source
            type: object
          status:
            description: status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.
... skipping 2 lines ...
                description: 'boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.'
                type: string
              creationTime:
                description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown.
                format: date-time
                type: string
              error:
                description: error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers (i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurs during the snapshot creation. Upon success, this error field will be cleared.
                properties:
                  message:
                    description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.'
                    type: string
                  time:
                    description: time is the timestamp when the error was encountered.
                    format: date-time
                    type: string
                type: object
              readyToUse:
                description: readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown.
                type: boolean
              restoreSize:
                type: string
                description: restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown.
                pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
                x-kubernetes-int-or-string: true
            type: object
        required:
        - spec
        type: object
... skipping 60 lines ...
                    type: string
                  volumeSnapshotContentName:
                    description: volumeSnapshotContentName specifies the name of a pre-existing VolumeSnapshotContent object representing an existing volume snapshot. This field should be set if the snapshot already exists and only needs a representation in Kubernetes. This field is immutable.
                    type: string
                type: object
              volumeSnapshotClassName:
                description: 'VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default Volume SnapshotClasses: one default per CSI Driver. If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exist for a given CSI Driver and more than one have been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field.'
                type: string
            required:
            - source
            type: object
          status:
            description: status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.
... skipping 2 lines ...
                description: 'boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.'
                type: string
              creationTime:
                description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown.
                format: date-time
                type: string
              error:
                description: error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers (i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurs during the snapshot creation. Upon success, this error field will be cleared.
                properties:
                  message:
                    description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.'
                    type: string
                  time:
                    description: time is the timestamp when the error was encountered.
                    format: date-time
                    type: string
                type: object
              readyToUse:
                description: readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown.
                type: boolean
              restoreSize:
                type: string
                description: restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown.
                pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
                x-kubernetes-int-or-string: true
            type: object
        required:
        - spec
        type: object
... skipping 254 lines ...
            description: status represents the current information of a snapshot.
            properties:
              creationTime:
                description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. On Unix, the command `date +%s%N` returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC.
                format: int64
                type: integer
              error:
                description: error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared.
                properties:
                  message:
                    description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.'
                    type: string
                  time:
                    description: time is the timestamp when the error was encountered.
                    format: date-time
                    type: string
                type: object
              readyToUse:
                description: readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown.
                type: boolean
              restoreSize:
                description: restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown.
                format: int64
                minimum: 0
                type: integer
              snapshotHandle:
                description: snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress.
                type: string
            type: object
        required:
        - spec
        type: object
    served: true
... skipping 108 lines ...
            description: status represents the current information of a snapshot.
            properties:
              creationTime:
                description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. On Unix, the command `date +%s%N` returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC.
                format: int64
                type: integer
              error:
                description: error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared.
                properties:
                  message:
                    description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.'
                    type: string
                  time:
                    description: time is the timestamp when the error was encountered.
                    format: date-time
                    type: string
                type: object
              readyToUse:
                description: readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown.
                type: boolean
              restoreSize:
                description: restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown.
                format: int64
                minimum: 0
                type: integer
              snapshotHandle:
                description: snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress.
                type: string
            type: object
        required:
        - spec
        type: object
    served: true
... skipping 244 lines ...
            - volumeName
            - volume_context
            - volume_id
            type: object
          status:
            description: status represents the current state of AzVolumeAttachment.
              includes error, state, and attachment status
            properties:
              annotation:
                additionalProperties:
                  type: string
                description: Annotations contains additional resource information
                  to guide driver actions
... skipping 13 lines ...
                  role:
                    description: The current attachment role.
                    type: string
                required:
                - role
                type: object
              error:
                description: Error occurred during attach/detach of volume
                properties:
                  code:
                    type: string
                  message:
                    type: string
                  parameters:
... skipping 173 lines ...
            - maxMountReplicaCount
            - volumeCapability
            - volumeName
            type: object
          status:
            description: status represents the current state of AzVolume. includes
              error, state, and volume status
            properties:
              annotation:
                additionalProperties:
                  type: string
                description: Annotations contains additional resource information
                  to guide driver actions
... skipping 34 lines ...
                    type: string
                required:
                - capacity_bytes
                - node_expansion_required
                - volume_id
                type: object
              error:
                description: Error occurred during creation/deletion of volume
                properties:
                  code:
                    type: string
                  message:
                    type: string
                  parameters:
... skipping 1061 lines ...
          image: "mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.4.0"
          args:
            - "-csi-address=$(ADDRESS)"
            - "-v=2"
            - "-leader-election"
            - "--leader-election-namespace=kube-system"
            - '-handle-volume-inuse-error=false'
            - '-feature-gates=RecoverVolumeExpansionFailure=true'
            - "-timeout=240s"
          env:
            - name: ADDRESS
              value: /csi/csi.sock
          volumeMounts:
... skipping 430 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Jun 24 21:35:06.669: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-p69vg" in namespace "azuredisk-8655" to be "Succeeded or Failed"
Jun 24 21:35:06.704: INFO: Pod "azuredisk-volume-tester-p69vg": Phase="Pending", Reason="", readiness=false. Elapsed: 35.397404ms
Jun 24 21:35:08.740: INFO: Pod "azuredisk-volume-tester-p69vg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070906028s
Jun 24 21:35:10.775: INFO: Pod "azuredisk-volume-tester-p69vg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106039624s
Jun 24 21:35:12.813: INFO: Pod "azuredisk-volume-tester-p69vg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.144336601s
Jun 24 21:35:14.848: INFO: Pod "azuredisk-volume-tester-p69vg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.179020519s
Jun 24 21:35:16.882: INFO: Pod "azuredisk-volume-tester-p69vg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.213397949s
... skipping 2 lines ...
Jun 24 21:35:22.988: INFO: Pod "azuredisk-volume-tester-p69vg": Phase="Pending", Reason="", readiness=false. Elapsed: 16.319585765s
Jun 24 21:35:25.024: INFO: Pod "azuredisk-volume-tester-p69vg": Phase="Pending", Reason="", readiness=false. Elapsed: 18.355319078s
Jun 24 21:35:27.061: INFO: Pod "azuredisk-volume-tester-p69vg": Phase="Pending", Reason="", readiness=false. Elapsed: 20.39188066s
Jun 24 21:35:29.096: INFO: Pod "azuredisk-volume-tester-p69vg": Phase="Pending", Reason="", readiness=false. Elapsed: 22.427188355s
Jun 24 21:35:31.133: INFO: Pod "azuredisk-volume-tester-p69vg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.464143539s
STEP: Saw pod success
Jun 24 21:35:31.133: INFO: Pod "azuredisk-volume-tester-p69vg" satisfied condition "Succeeded or Failed"
Jun 24 21:35:31.133: INFO: deleting Pod "azuredisk-8655"/"azuredisk-volume-tester-p69vg"
Jun 24 21:35:31.206: INFO: Pod azuredisk-volume-tester-p69vg has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-p69vg in namespace azuredisk-8655
STEP: validating provisioned PV
STEP: checking the PV
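The "Waiting up to 15m0s ... Elapsed" lines above are produced by polling the pod roughly every two seconds until it reaches a terminal phase. A minimal client-go sketch of that kind of wait; the helper name and intervals are assumptions, and the e2e framework's own helper may differ:

    // waitForPodSuccess polls a pod until it reaches a terminal phase
    // (illustrative sketch only, not the e2e suite's actual helper).
    package e2e

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitForPodSuccess(c kubernetes.Interface, ns, name string) error {
        return wait.PollImmediate(2*time.Second, 15*time.Minute, func() (bool, error) {
            pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            switch pod.Status.Phase {
            case corev1.PodSucceeded:
                return true, nil // condition "Succeeded or Failed" met with success
            case corev1.PodFailed:
                return true, fmt.Errorf("pod %s/%s failed", ns, name)
            }
            return false, nil // still Pending/Running, keep polling
        })
    }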
... skipping 126 lines ...
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod has 'FailedMount' event
Jun 24 21:36:32.870: INFO: deleting Pod "azuredisk-4268"/"azuredisk-volume-tester-9pn2d"
Jun 24 21:36:32.906: INFO: Error getting logs for pod azuredisk-volume-tester-9pn2d: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-9pn2d)
STEP: Deleting pod azuredisk-volume-tester-9pn2d in namespace azuredisk-4268
STEP: validating provisioned PV
STEP: checking the PV
Jun 24 21:36:33.010: INFO: deleting PVC "azuredisk-4268"/"pvc-zkvmv"
Jun 24 21:36:33.010: INFO: Deleting PersistentVolumeClaim "pvc-zkvmv"
STEP: waiting for claim's PV "pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" to be deleted
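The FailedMount check a few lines above amounts to listing the events recorded against the pod and looking for that reason. A hedged client-go sketch of such a check; the field selector and helper name are assumptions, and the suite's real implementation may differ:

    // hasFailedMountEvent reports whether a FailedMount event was recorded for
    // the pod (illustrative sketch, not the e2e suite's actual code).
    package e2e

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func hasFailedMountEvent(c kubernetes.Interface, ns, podName string) (bool, error) {
        events, err := c.CoreV1().Events(ns).List(context.TODO(), metav1.ListOptions{
            FieldSelector: "involvedObject.kind=Pod,involvedObject.name=" + podName,
        })
        if err != nil {
            return false, err
        }
        for _, ev := range events.Items {
            if ev.Reason == "FailedMount" {
                return true, nil
            }
        }
        return false, nil
    }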
... skipping 58 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Jun 24 21:39:00.076: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-fklnn" in namespace "azuredisk-198" to be "Succeeded or Failed"
Jun 24 21:39:00.112: INFO: Pod "azuredisk-volume-tester-fklnn": Phase="Pending", Reason="", readiness=false. Elapsed: 36.39419ms
Jun 24 21:39:02.146: INFO: Pod "azuredisk-volume-tester-fklnn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070871267s
Jun 24 21:39:04.181: INFO: Pod "azuredisk-volume-tester-fklnn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105690157s
Jun 24 21:39:06.215: INFO: Pod "azuredisk-volume-tester-fklnn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139477771s
Jun 24 21:39:08.250: INFO: Pod "azuredisk-volume-tester-fklnn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.174073436s
Jun 24 21:39:10.285: INFO: Pod "azuredisk-volume-tester-fklnn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.209127896s
Jun 24 21:39:12.319: INFO: Pod "azuredisk-volume-tester-fklnn": Phase="Pending", Reason="", readiness=false. Elapsed: 12.243540106s
Jun 24 21:39:14.354: INFO: Pod "azuredisk-volume-tester-fklnn": Phase="Pending", Reason="", readiness=false. Elapsed: 14.278504841s
Jun 24 21:39:16.389: INFO: Pod "azuredisk-volume-tester-fklnn": Phase="Pending", Reason="", readiness=false. Elapsed: 16.313289924s
Jun 24 21:39:18.423: INFO: Pod "azuredisk-volume-tester-fklnn": Phase="Pending", Reason="", readiness=false. Elapsed: 18.347654542s
Jun 24 21:39:20.458: INFO: Pod "azuredisk-volume-tester-fklnn": Phase="Pending", Reason="", readiness=false. Elapsed: 20.382383023s
Jun 24 21:39:22.493: INFO: Pod "azuredisk-volume-tester-fklnn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.416979211s
STEP: Saw pod success
Jun 24 21:39:22.493: INFO: Pod "azuredisk-volume-tester-fklnn" satisfied condition "Succeeded or Failed"
Jun 24 21:39:22.493: INFO: deleting Pod "azuredisk-198"/"azuredisk-volume-tester-fklnn"
Jun 24 21:39:22.555: INFO: Pod azuredisk-volume-tester-fklnn has the following logs: e2e-test

STEP: Deleting pod azuredisk-volume-tester-fklnn in namespace azuredisk-198
STEP: validating provisioned PV
STEP: checking the PV
... skipping 40 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with an error
Jun 24 21:40:03.994: INFO: Waiting up to 10m0s for pod "azuredisk-volume-tester-hgzp9" in namespace "azuredisk-4115" to be "Error status code"
Jun 24 21:40:04.034: INFO: Pod "azuredisk-volume-tester-hgzp9": Phase="Pending", Reason="", readiness=false. Elapsed: 40.128032ms
Jun 24 21:40:06.069: INFO: Pod "azuredisk-volume-tester-hgzp9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074520605s
Jun 24 21:40:08.104: INFO: Pod "azuredisk-volume-tester-hgzp9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109516053s
Jun 24 21:40:10.138: INFO: Pod "azuredisk-volume-tester-hgzp9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.144167629s
Jun 24 21:40:12.174: INFO: Pod "azuredisk-volume-tester-hgzp9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.179528058s
Jun 24 21:40:14.215: INFO: Pod "azuredisk-volume-tester-hgzp9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.221229385s
Jun 24 21:40:16.251: INFO: Pod "azuredisk-volume-tester-hgzp9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.256441403s
Jun 24 21:40:18.287: INFO: Pod "azuredisk-volume-tester-hgzp9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.292495071s
Jun 24 21:40:20.323: INFO: Pod "azuredisk-volume-tester-hgzp9": Phase="Pending", Reason="", readiness=false. Elapsed: 16.328292545s
Jun 24 21:40:22.358: INFO: Pod "azuredisk-volume-tester-hgzp9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.363824153s
Jun 24 21:40:24.393: INFO: Pod "azuredisk-volume-tester-hgzp9": Phase="Pending", Reason="", readiness=false. Elapsed: 20.399061399s
Jun 24 21:40:26.428: INFO: Pod "azuredisk-volume-tester-hgzp9": Phase="Pending", Reason="", readiness=false. Elapsed: 22.433632511s
Jun 24 21:40:28.463: INFO: Pod "azuredisk-volume-tester-hgzp9": Phase="Pending", Reason="", readiness=false. Elapsed: 24.46910111s
Jun 24 21:40:30.499: INFO: Pod "azuredisk-volume-tester-hgzp9": Phase="Failed", Reason="", readiness=false. Elapsed: 26.50452169s
STEP: Saw pod failure
Jun 24 21:40:30.499: INFO: Pod "azuredisk-volume-tester-hgzp9" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Jun 24 21:40:30.537: INFO: deleting Pod "azuredisk-4115"/"azuredisk-volume-tester-hgzp9"
Jun 24 21:40:30.574: INFO: Pod azuredisk-volume-tester-hgzp9 has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azuredisk-volume-tester-hgzp9 in namespace azuredisk-4115
STEP: validating provisioned PV
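The read-only test above asserts both the Failed phase and the expected "Read-only file system" message in the pod logs. A minimal sketch of fetching a pod's logs and checking them for a substring; the helper name is an assumption:

    // podLogsContain fetches a pod's logs and checks them for an expected
    // substring (illustrative sketch, not the e2e suite's actual helper).
    package e2e

    import (
        "context"
        "strings"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
    )

    func podLogsContain(c kubernetes.Interface, ns, name, want string) (bool, error) {
        raw, err := c.CoreV1().Pods(ns).GetLogs(name, &corev1.PodLogOptions{}).Do(context.TODO()).Raw()
        if err != nil {
            return false, err
        }
        return strings.Contains(string(raw), want), nil
    }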
... skipping 385 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Jun 24 21:48:58.825: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-ccbz5" in namespace "azuredisk-2035" to be "Succeeded or Failed"
Jun 24 21:48:58.877: INFO: Pod "azuredisk-volume-tester-ccbz5": Phase="Pending", Reason="", readiness=false. Elapsed: 51.792889ms
Jun 24 21:49:00.913: INFO: Pod "azuredisk-volume-tester-ccbz5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088273837s
Jun 24 21:49:02.951: INFO: Pod "azuredisk-volume-tester-ccbz5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125933061s
Jun 24 21:49:04.987: INFO: Pod "azuredisk-volume-tester-ccbz5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.162522229s
Jun 24 21:49:07.022: INFO: Pod "azuredisk-volume-tester-ccbz5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.197491905s
Jun 24 21:49:09.060: INFO: Pod "azuredisk-volume-tester-ccbz5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.235285061s
... skipping 4 lines ...
Jun 24 21:49:19.245: INFO: Pod "azuredisk-volume-tester-ccbz5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.419733491s
Jun 24 21:49:21.280: INFO: Pod "azuredisk-volume-tester-ccbz5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.455177618s
Jun 24 21:49:23.316: INFO: Pod "azuredisk-volume-tester-ccbz5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.491124781s
Jun 24 21:49:25.351: INFO: Pod "azuredisk-volume-tester-ccbz5": Phase="Pending", Reason="", readiness=false. Elapsed: 26.525737881s
Jun 24 21:49:27.386: INFO: Pod "azuredisk-volume-tester-ccbz5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.561434142s
STEP: Saw pod success
Jun 24 21:49:27.387: INFO: Pod "azuredisk-volume-tester-ccbz5" satisfied condition "Succeeded or Failed"
Jun 24 21:49:27.387: INFO: deleting Pod "azuredisk-2035"/"azuredisk-volume-tester-ccbz5"
Jun 24 21:49:27.457: INFO: Pod azuredisk-volume-tester-ccbz5 has the following logs: hello world
hello world
hello world

STEP: Deleting pod azuredisk-volume-tester-ccbz5 in namespace azuredisk-2035
... skipping 70 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Jun 24 21:50:24.659: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-hzjzv" in namespace "azuredisk-5351" to be "Succeeded or Failed"
Jun 24 21:50:24.709: INFO: Pod "azuredisk-volume-tester-hzjzv": Phase="Pending", Reason="", readiness=false. Elapsed: 49.976242ms
Jun 24 21:50:26.743: INFO: Pod "azuredisk-volume-tester-hzjzv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084820675s
Jun 24 21:50:28.777: INFO: Pod "azuredisk-volume-tester-hzjzv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118530464s
Jun 24 21:50:30.812: INFO: Pod "azuredisk-volume-tester-hzjzv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.152952229s
Jun 24 21:50:32.846: INFO: Pod "azuredisk-volume-tester-hzjzv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.18757875s
Jun 24 21:50:34.882: INFO: Pod "azuredisk-volume-tester-hzjzv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.222988652s
... skipping 3 lines ...
Jun 24 21:50:43.020: INFO: Pod "azuredisk-volume-tester-hzjzv": Phase="Pending", Reason="", readiness=false. Elapsed: 18.361614302s
Jun 24 21:50:45.055: INFO: Pod "azuredisk-volume-tester-hzjzv": Phase="Pending", Reason="", readiness=false. Elapsed: 20.396198583s
Jun 24 21:50:47.090: INFO: Pod "azuredisk-volume-tester-hzjzv": Phase="Pending", Reason="", readiness=false. Elapsed: 22.431262171s
Jun 24 21:50:49.125: INFO: Pod "azuredisk-volume-tester-hzjzv": Phase="Pending", Reason="", readiness=false. Elapsed: 24.466037403s
Jun 24 21:50:51.160: INFO: Pod "azuredisk-volume-tester-hzjzv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.501094082s
STEP: Saw pod success
Jun 24 21:50:51.160: INFO: Pod "azuredisk-volume-tester-hzjzv" satisfied condition "Succeeded or Failed"
Jun 24 21:50:51.160: INFO: deleting Pod "azuredisk-5351"/"azuredisk-volume-tester-hzjzv"
Jun 24 21:50:51.197: INFO: Pod azuredisk-volume-tester-hzjzv has the following logs: 100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 0.065710 seconds, 1.5GB/s
hello world

... skipping 118 lines ...
STEP: creating a PVC
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Jun 24 21:51:44.480: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-8qsf4" in namespace "azuredisk-2681" to be "Succeeded or Failed"
Jun 24 21:51:44.516: INFO: Pod "azuredisk-volume-tester-8qsf4": Phase="Pending", Reason="", readiness=false. Elapsed: 36.19363ms
Jun 24 21:51:46.551: INFO: Pod "azuredisk-volume-tester-8qsf4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071245922s
Jun 24 21:51:48.587: INFO: Pod "azuredisk-volume-tester-8qsf4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106886259s
Jun 24 21:51:50.622: INFO: Pod "azuredisk-volume-tester-8qsf4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.142005724s
Jun 24 21:51:52.657: INFO: Pod "azuredisk-volume-tester-8qsf4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.177407321s
Jun 24 21:51:54.693: INFO: Pod "azuredisk-volume-tester-8qsf4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.213330041s
... skipping 3 lines ...
Jun 24 21:52:02.833: INFO: Pod "azuredisk-volume-tester-8qsf4": Phase="Pending", Reason="", readiness=false. Elapsed: 18.352859903s
Jun 24 21:52:04.867: INFO: Pod "azuredisk-volume-tester-8qsf4": Phase="Pending", Reason="", readiness=false. Elapsed: 20.387198708s
Jun 24 21:52:06.902: INFO: Pod "azuredisk-volume-tester-8qsf4": Phase="Pending", Reason="", readiness=false. Elapsed: 22.422075927s
Jun 24 21:52:08.937: INFO: Pod "azuredisk-volume-tester-8qsf4": Phase="Pending", Reason="", readiness=false. Elapsed: 24.456931473s
Jun 24 21:52:10.972: INFO: Pod "azuredisk-volume-tester-8qsf4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.492053747s
STEP: Saw pod success
Jun 24 21:52:10.972: INFO: Pod "azuredisk-volume-tester-8qsf4" satisfied condition "Succeeded or Failed"
Jun 24 21:52:10.972: INFO: deleting Pod "azuredisk-2681"/"azuredisk-volume-tester-8qsf4"
Jun 24 21:52:11.010: INFO: Pod azuredisk-volume-tester-8qsf4 has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-8qsf4 in namespace azuredisk-2681
STEP: validating provisioned PV
STEP: checking the PV
... skipping 691 lines ...

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/utils/testutil/env_utils.go:32
------------------------------
Dynamic Provisioning [single-az] 
  should check failed replica attachments are recreated after space is made from a volume detaching.
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:1764
STEP: Creating a kubernetes client
Jun 24 21:59:24.568: INFO: >>> kubeConfig: /root/tmp1826074777/kubeconfig/kubeconfig.canadacentral.json
STEP: Building a namespace api object, basename azuredisk
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
... skipping 7 lines ...

S [SKIPPING] [0.642 seconds]
Dynamic Provisioning
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:40
  [single-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:44
    should check failed replica attachments are recreated after space is made from a volume detaching. [It]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:1764

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/utils/testutil/env_utils.go:32
------------------------------
... skipping 177 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
I0624 21:59:29.095710   15074 azuredisk_driver.go:52] Using azure disk driver: kubernetes.io/azure-disk
STEP: Successfully provisioned AzureDisk volume: "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pre-provisioned-inline-volume"

STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Jun 24 21:59:33.264: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-mp5mn" in namespace "azuredisk-6252" to be "Succeeded or Failed"
Jun 24 21:59:33.302: INFO: Pod "azuredisk-volume-tester-mp5mn": Phase="Pending", Reason="", readiness=false. Elapsed: 37.693233ms
Jun 24 21:59:35.338: INFO: Pod "azuredisk-volume-tester-mp5mn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073007997s
Jun 24 21:59:37.374: INFO: Pod "azuredisk-volume-tester-mp5mn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108995284s
Jun 24 21:59:39.408: INFO: Pod "azuredisk-volume-tester-mp5mn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.143505794s
Jun 24 21:59:41.443: INFO: Pod "azuredisk-volume-tester-mp5mn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.17820309s
Jun 24 21:59:43.478: INFO: Pod "azuredisk-volume-tester-mp5mn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.213790204s
Jun 24 21:59:45.516: INFO: Pod "azuredisk-volume-tester-mp5mn": Phase="Pending", Reason="", readiness=false. Elapsed: 12.251589793s
Jun 24 21:59:47.554: INFO: Pod "azuredisk-volume-tester-mp5mn": Phase="Pending", Reason="", readiness=false. Elapsed: 14.289896319s
Jun 24 21:59:49.592: INFO: Pod "azuredisk-volume-tester-mp5mn": Phase="Pending", Reason="", readiness=false. Elapsed: 16.327391857s
Jun 24 21:59:51.627: INFO: Pod "azuredisk-volume-tester-mp5mn": Phase="Pending", Reason="", readiness=false. Elapsed: 18.362598064s
Jun 24 21:59:53.662: INFO: Pod "azuredisk-volume-tester-mp5mn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.397325422s
STEP: Saw pod success
Jun 24 21:59:53.662: INFO: Pod "azuredisk-volume-tester-mp5mn" satisfied condition "Succeeded or Failed"
Jun 24 21:59:53.662: INFO: deleting Pod "azuredisk-6252"/"azuredisk-volume-tester-mp5mn"
Jun 24 21:59:53.724: INFO: Pod azuredisk-volume-tester-mp5mn has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-mp5mn in namespace azuredisk-6252
Jun 24 21:59:53.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-6252" for this suite.
... skipping 218 lines ...
I0624 21:22:37.178815       1 azure_securitygroupclient.go:64] Azure SecurityGroupsClient (read ops) using rate limit config: QPS=6, bucket=20
I0624 21:22:37.178822       1 azure_securitygroupclient.go:67] Azure SecurityGroupsClient (write ops) using rate limit config: QPS=100, bucket=1000
I0624 21:22:37.178829       1 azure_publicipclient.go:64] Azure PublicIPAddressesClient (read ops) using rate limit config: QPS=6, bucket=20
I0624 21:22:37.178837       1 azure_publicipclient.go:67] Azure PublicIPAddressesClient (write ops) using rate limit config: QPS=100, bucket=1000
I0624 21:22:37.178871       1 azure.go:742] Setting up informers for Azure cloud provider
I0624 21:22:37.179793       1 shared_informer.go:240] Waiting for caches to sync for tokens
W0624 21:22:37.237622       1 azure_config.go:53] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0624 21:22:37.237657       1 controllermanager.go:576] Starting "ttl"
I0624 21:22:37.250553       1 controllermanager.go:605] Started "ttl"
I0624 21:22:37.250574       1 controllermanager.go:576] Starting "attachdetach"
I0624 21:22:37.250708       1 ttl_controller.go:121] Starting TTL controller
I0624 21:22:37.250721       1 shared_informer.go:240] Waiting for caches to sync for TTL
I0624 21:22:37.250731       1 shared_informer.go:247] Caches are synced for TTL 
... skipping 9 lines ...
I0624 21:22:37.285049       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0624 21:22:37.285059       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/fc"
I0624 21:22:37.285067       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I0624 21:22:37.285092       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0624 21:22:37.285233       1 controllermanager.go:605] Started "attachdetach"
I0624 21:22:37.285244       1 controllermanager.go:576] Starting "ttl-after-finished"
W0624 21:22:37.285408       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="k8s-agentpool1-11903559-0" does not exist
W0624 21:22:37.285438       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="k8s-master-11903559-0" does not exist
W0624 21:22:37.285443       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="k8s-agentpool1-11903559-1" does not exist
I0624 21:22:37.285465       1 attach_detach_controller.go:328] Starting attach detach controller
I0624 21:22:37.285470       1 shared_informer.go:240] Waiting for caches to sync for attach detach
I0624 21:22:37.296088       1 ttl_controller.go:276] "Changed ttl annotation" node="k8s-agentpool1-11903559-0" new_ttl="0s"
I0624 21:22:37.300236       1 ttl_controller.go:276] "Changed ttl annotation" node="k8s-agentpool1-11903559-1" new_ttl="0s"
I0624 21:22:37.302072       1 ttl_controller.go:276] "Changed ttl annotation" node="k8s-master-11903559-0" new_ttl="0s"
I0624 21:22:37.348578       1 controllermanager.go:605] Started "ttl-after-finished"
... skipping 312 lines ...
I0624 21:23:09.553476       1 azure_routes.go:444] CreateRoute: route created. clusterName="kubetest-ybmpahy2" instance="k8s-agentpool1-11903559-1" cidr="10.244.2.0/24"
I0624 21:23:09.553650       1 route_controller.go:214] Created route for node k8s-agentpool1-11903559-1 10.244.2.0/24 with hint 0ea54c65-b6e3-4e53-be94-43e15fd7dccd after 9.015994841s
I0624 21:23:09.553495       1 azure_routes.go:444] CreateRoute: route created. clusterName="kubetest-ybmpahy2" instance="k8s-agentpool1-11903559-0" cidr="10.244.0.0/24"
I0624 21:23:09.553699       1 route_controller.go:214] Created route for node k8s-agentpool1-11903559-0 10.244.0.0/24 with hint 1aedce60-fc65-4eba-a29d-82b8685ad171 after 9.016086841s
I0624 21:23:09.553507       1 azure_routes.go:444] CreateRoute: route created. clusterName="kubetest-ybmpahy2" instance="k8s-master-11903559-0" cidr="10.244.1.0/24"
I0624 21:23:09.553708       1 route_controller.go:214] Created route for node k8s-master-11903559-0 10.244.1.0/24 with hint f170ce15-66cc-4e57-b74f-9cf93cac944b after 9.016096441s
I0624 21:23:09.553729       1 route_controller.go:304] Patching node status k8s-agentpool1-11903559-0 with true previous condition was:&NodeCondition{Type:NetworkUnavailable,Status:True,LastHeartbeatTime:2022-06-24 21:22:40 +0000 UTC,LastTransitionTime:2022-06-24 21:22:40 +0000 UTC,Reason:NoRouteCreated,Message:RouteController failed to create a route,}
I0624 21:23:09.553951       1 route_controller.go:304] Patching node status k8s-master-11903559-0 with true previous condition was:&NodeCondition{Type:NetworkUnavailable,Status:True,LastHeartbeatTime:2022-06-24 21:22:40 +0000 UTC,LastTransitionTime:2022-06-24 21:22:40 +0000 UTC,Reason:NoRouteCreated,Message:RouteController failed to create a route,}
I0624 21:23:09.554294       1 route_controller.go:304] Patching node status k8s-agentpool1-11903559-1 with true previous condition was:&NodeCondition{Type:NetworkUnavailable,Status:True,LastHeartbeatTime:2022-06-24 21:22:40 +0000 UTC,LastTransitionTime:2022-06-24 21:22:40 +0000 UTC,Reason:NoRouteCreated,Message:RouteController failed to create a route,}
I0624 21:23:10.579701       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:23:10.579728       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:23:10.579735       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:23:20.538897       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:23:20.538901       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:23:20.538915       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:23:30.539300       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:23:30.539324       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:23:30.539330       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:23:33.098756       1 replica_set.go:563] "Too few replicas" replicaSet="kube-system/coredns-7cf78b68d8" need=1 creating=1
I0624 21:23:33.100733       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-7cf78b68d8 to 1"
I0624 21:23:33.129629       1 event.go:294] "Event occurred" object="kube-system/coredns-autoscaler" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-autoscaler-cc76d9bff to 1"
I0624 21:23:33.129679       1 replica_set.go:563] "Too few replicas" replicaSet="kube-system/coredns-autoscaler-cc76d9bff" need=1 creating=1
I0624 21:23:33.159734       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0624 21:23:33.226297       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0624 21:23:33.227619       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns-autoscaler" err="Operation cannot be fulfilled on deployments.apps \"coredns-autoscaler\": the object has been modified; please apply your changes to the latest version and try again"
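The "object has been modified; please apply your changes to the latest version and try again" messages above are routine optimistic-concurrency conflicts: the controller re-reads the object and retries. A generic client-go sketch of that retry pattern, not the deployment controller's actual code:

    // scaleDeployment updates a Deployment, retrying on resourceVersion
    // conflicts (generic illustration of the pattern behind the errors above).
    package e2e

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    func scaleDeployment(c kubernetes.Interface, ns, name string, replicas int32) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            // Re-read the latest object on every attempt so the update carries
            // a current resourceVersion.
            d, err := c.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            d.Spec.Replicas = &replicas
            _, err = c.AppsV1().Deployments(ns).Update(context.TODO(), d, metav1.UpdateOptions{})
            return err
        })
    }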
I0624 21:23:33.254674       1 event.go:294] "Event occurred" object="kube-system/coredns-autoscaler-cc76d9bff" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-autoscaler-cc76d9bff-w5tj9"
I0624 21:23:33.268244       1 event.go:294] "Event occurred" object="kube-system/coredns-7cf78b68d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-7cf78b68d8-pt8n4"
I0624 21:23:34.502734       1 event.go:294] "Event occurred" object="kube-system/azure-ip-masq-agent" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azure-ip-masq-agent-qz9s4"
I0624 21:23:34.503445       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-qdh7s"
I0624 21:23:34.533167       1 event.go:294] "Event occurred" object="kube-system/azure-ip-masq-agent" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azure-ip-masq-agent-nlsz4"
I0624 21:23:34.535063       1 event.go:294] "Event occurred" object="kube-system/azure-ip-masq-agent" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azure-ip-masq-agent-xskgz"
I0624 21:23:34.554880       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ggc42"
I0624 21:23:34.570583       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-np59h"
I0624 21:23:35.878611       1 event.go:294] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5b8986d847 to 1"
I0624 21:23:35.878790       1 replica_set.go:563] "Too few replicas" replicaSet="kube-system/metrics-server-5b8986d847" need=1 creating=1
I0624 21:23:35.892879       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5b8986d847" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5b8986d847-jqcjj"
I0624 21:23:35.921363       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
I0624 21:23:35.935140       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
I0624 21:23:40.539872       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:23:40.539904       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:23:40.539912       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:23:50.540662       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:23:50.540694       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:23:50.540702       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
E0624 21:23:52.245968       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0624 21:23:52.650608       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0624 21:24:00.540695       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:24:00.540695       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:24:00.540711       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:24:10.541518       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:24:10.541533       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:24:10.541545       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
... skipping 193 lines ...
I0624 21:34:36.362936       1 event.go:294] "Event occurred" object="kube-system/csi-snapshot-controller-544744589f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-snapshot-controller-544744589f-8xzwf"
I0624 21:34:36.365347       1 event.go:294] "Event occurred" object="kube-system/csi-azuredisk-scheduler-extender" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set csi-azuredisk-scheduler-extender-9bdb8968d to 2"
I0624 21:34:36.369062       1 event.go:294] "Event occurred" object="kube-system/csi-azuredisk-controller" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set csi-azuredisk-controller-6f554768d6 to 2"
I0624 21:34:36.378492       1 event.go:294] "Event occurred" object="kube-system/csi-azuredisk-scheduler-extender-9bdb8968d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azuredisk-scheduler-extender-9bdb8968d-j29wn"
I0624 21:34:36.390740       1 event.go:294] "Event occurred" object="kube-system/csi-snapshot-controller-544744589f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-snapshot-controller-544744589f-pr6h4"
I0624 21:34:36.426389       1 event.go:294] "Event occurred" object="kube-system/csi-azuredisk-controller-6f554768d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azuredisk-controller-6f554768d6-fq92d"
I0624 21:34:36.426766       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/csi-snapshot-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-snapshot-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0624 21:34:36.523968       1 event.go:294] "Event occurred" object="kube-system/csi-azuredisk-scheduler-extender-9bdb8968d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azuredisk-scheduler-extender-9bdb8968d-fn7w6"
I0624 21:34:36.524019       1 event.go:294] "Event occurred" object="kube-system/csi-azuredisk-controller-6f554768d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azuredisk-controller-6f554768d6-gt66f"
I0624 21:34:36.535210       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/csi-azuredisk-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-azuredisk-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0624 21:34:36.624555       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/csi-azuredisk-scheduler-extender" err="Operation cannot be fulfilled on deployments.apps \"csi-azuredisk-scheduler-extender\": the object has been modified; please apply your changes to the latest version and try again"
I0624 21:34:40.581582       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:34:40.582438       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:34:40.582456       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:34:50.582060       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:34:50.582088       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:34:50.582096       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
... skipping 16 lines ...
I0624 21:35:06.687716       1 event.go:294] "Event occurred" object="azuredisk-8655/pvc-h5nq7" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I0624 21:35:06.690349       1 event.go:294] "Event occurred" object="azuredisk-8655/pvc-h5nq7" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I0624 21:35:07.045019       1 event.go:294] "Event occurred" object="azuredisk-8655/pvc-h5nq7" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I0624 21:35:09.367930       1 pv_controller.go:887] volume "pvc-a0a33972-1201-4df9-913f-917493004642" entered phase "Bound"
I0624 21:35:09.367969       1 pv_controller.go:990] volume "pvc-a0a33972-1201-4df9-913f-917493004642" bound to claim "azuredisk-8655/pvc-h5nq7"
I0624 21:35:09.381347       1 pv_controller.go:831] claim "azuredisk-8655/pvc-h5nq7" entered phase "Bound"
E0624 21:35:09.457809       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8619/default: secrets "default-token-wrbgz" is forbidden: unable to create new content in namespace azuredisk-8619 because it is being terminated
I0624 21:35:09.748326       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-a0a33972-1201-4df9-913f-917493004642" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a0a33972-1201-4df9-913f-917493004642") from node "k8s-agentpool1-11903559-1" 
E0624 21:35:10.032242       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-4593/default: secrets "default-token-tgfqg" is forbidden: unable to create new content in namespace azuredisk-4593 because it is being terminated
I0624 21:35:10.583344       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:35:10.583385       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:35:10.583411       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
E0624 21:35:10.604795       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-9942/default: secrets "default-token-t9l8p" is forbidden: unable to create new content in namespace azuredisk-9942 because it is being terminated
I0624 21:35:11.402908       1 operation_generator.go:413] AttachVolume.Attach succeeded for volume "pvc-a0a33972-1201-4df9-913f-917493004642" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a0a33972-1201-4df9-913f-917493004642") from node "k8s-agentpool1-11903559-1" 
I0624 21:35:11.403102       1 event.go:294] "Event occurred" object="azuredisk-8655/azuredisk-volume-tester-p69vg" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-a0a33972-1201-4df9-913f-917493004642\" "
I0624 21:35:14.535057       1 namespace_controller.go:185] Namespace has been deleted azuredisk-8619
I0624 21:35:14.826452       1 namespace_controller.go:185] Namespace has been deleted azuredisk-2353
I0624 21:35:15.136286       1 namespace_controller.go:185] Namespace has been deleted azuredisk-4593
I0624 21:35:15.409690       1 namespace_controller.go:185] Namespace has been deleted azuredisk-7482
... skipping 17 lines ...
I0624 21:35:50.587108       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:35:50.587126       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:35:50.870531       1 operation_generator.go:528] DetachVolume.Detach succeeded for volume "pvc-a0a33972-1201-4df9-913f-917493004642" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a0a33972-1201-4df9-913f-917493004642") on node "k8s-agentpool1-11903559-1" 
I0624 21:36:00.587258       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:36:00.587258       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:36:00.587274       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
E0624 21:36:07.787845       1 pv_protection_controller.go:114] PV pvc-a0a33972-1201-4df9-913f-917493004642 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-a0a33972-1201-4df9-913f-917493004642": the object has been modified; please apply your changes to the latest version and try again
I0624 21:36:07.793782       1 pv_controller_base.go:533] deletion of claim "azuredisk-8655/pvc-h5nq7" was already processed
E0624 21:36:07.795817       1 pv_protection_controller.go:114] PV pvc-a0a33972-1201-4df9-913f-917493004642 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-a0a33972-1201-4df9-913f-917493004642": StorageError: invalid object, Code: 4, Key: /registry/persistentvolumes/pvc-a0a33972-1201-4df9-913f-917493004642, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: bfb532f0-9bfb-442e-b8ea-d80c9e0093eb, UID in object meta: 
I0624 21:36:10.746799       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:36:10.746928       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:36:10.747212       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:36:14.758520       1 event.go:294] "Event occurred" object="azuredisk-4268/pvc-zkvmv" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0624 21:36:14.805897       1 event.go:294] "Event occurred" object="azuredisk-4268/pvc-zkvmv" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I0624 21:36:17.328488       1 pv_controller.go:887] volume "pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" entered phase "Bound"
I0624 21:36:17.328529       1 pv_controller.go:990] volume "pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" bound to claim "azuredisk-4268/pvc-zkvmv"
I0624 21:36:17.338015       1 pv_controller.go:831] claim "azuredisk-4268/pvc-zkvmv" entered phase "Bound"
I0624 21:36:17.888058       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e") from node "k8s-agentpool1-11903559-1" 
E0624 21:36:18.399833       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2570/default: secrets "default-token-hzdpb" is forbidden: unable to create new content in namespace azuredisk-2570 because it is being terminated
E0624 21:36:19.189182       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-394/default: secrets "default-token-f6zf6" is forbidden: unable to create new content in namespace azuredisk-394 because it is being terminated
I0624 21:36:19.564055       1 operation_generator.go:413] AttachVolume.Attach succeeded for volume "pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e") from node "k8s-agentpool1-11903559-1" 
I0624 21:36:19.564164       1 event.go:294] "Event occurred" object="azuredisk-4268/azuredisk-volume-tester-9pn2d" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e\" "
I0624 21:36:20.588286       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:36:20.588977       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:36:20.588988       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:36:22.098835       1 namespace_controller.go:185] Namespace has been deleted azuredisk-8655
... skipping 52 lines ...
I0624 21:38:40.594312       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:38:40.594319       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:38:50.326115       1 operation_generator.go:528] DetachVolume.Detach succeeded for volume "pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e") on node "k8s-agentpool1-11903559-1" 
I0624 21:38:50.594327       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:38:50.594371       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:38:50.594457       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
E0624 21:38:58.665446       1 pv_protection_controller.go:114] PV pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e failed with : Operation cannot be fulfilled on persistentvolumes "pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e": the object has been modified; please apply your changes to the latest version and try again
I0624 21:38:58.673802       1 pv_controller_base.go:533] deletion of claim "azuredisk-4268/pvc-zkvmv" was already processed
I0624 21:39:00.022576       1 event.go:294] "Event occurred" object="azuredisk-198/pvc-hpmvt" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0624 21:39:00.077437       1 event.go:294] "Event occurred" object="azuredisk-198/pvc-hpmvt" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I0624 21:39:00.594630       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:39:00.594628       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:39:00.594642       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
... skipping 22 lines ...
I0624 21:39:40.596753       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:39:40.596760       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:39:45.735942       1 operation_generator.go:528] DetachVolume.Detach succeeded for volume "pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70") on node "k8s-agentpool1-11903559-1" 
I0624 21:39:50.597661       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:39:50.597668       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:39:50.597680       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
E0624 21:39:59.068124       1 pv_protection_controller.go:114] PV pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70": the object has been modified; please apply your changes to the latest version and try again
I0624 21:39:59.074182       1 pv_controller_base.go:533] deletion of claim "azuredisk-198/pvc-hpmvt" was already processed
I0624 21:40:00.598070       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:40:00.598075       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:40:00.598087       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:40:03.941698       1 event.go:294] "Event occurred" object="azuredisk-4115/pvc-zvnt4" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0624 21:40:03.993030       1 event.go:294] "Event occurred" object="azuredisk-4115/pvc-zvnt4" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
... skipping 25 lines ...
I0624 21:40:50.600686       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:40:50.600674       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:41:00.600873       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:41:00.600913       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:41:00.600880       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:41:01.195971       1 operation_generator.go:528] DetachVolume.Detach succeeded for volume "pvc-85e7ca04-47e3-4a07-a750-18643e916680" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-85e7ca04-47e3-4a07-a750-18643e916680") on node "k8s-agentpool1-11903559-1" 
E0624 21:41:07.361612       1 pv_protection_controller.go:114] PV pvc-85e7ca04-47e3-4a07-a750-18643e916680 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-85e7ca04-47e3-4a07-a750-18643e916680": the object has been modified; please apply your changes to the latest version and try again
I0624 21:41:07.367283       1 pv_controller_base.go:533] deletion of claim "azuredisk-4115/pvc-zvnt4" was already processed
I0624 21:41:10.601440       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:41:10.601458       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:41:10.601469       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:41:11.884827       1 event.go:294] "Event occurred" object="azuredisk-4577/pvc-v6fkn" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0624 21:41:11.936225       1 event.go:294] "Event occurred" object="azuredisk-4577/pvc-v6fkn" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
... skipping 70 lines ...
I0624 21:43:10.608935       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:43:10.608943       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:43:15.993987       1 operation_generator.go:528] DetachVolume.Detach succeeded for volume "pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31") on node "k8s-agentpool1-11903559-1" 
I0624 21:43:20.609393       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:43:20.609394       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:43:20.609410       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
E0624 21:43:21.891242       1 pv_protection_controller.go:114] PV pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31": the object has been modified; please apply your changes to the latest version and try again
I0624 21:43:21.898686       1 pv_controller_base.go:533] deletion of claim "azuredisk-4577/pvc-92k9j" was already processed
I0624 21:43:23.402213       1 pvc_protection_controller.go:281] "Pod uses PVC" pod="azuredisk-4577/azuredisk-volume-tester-fh66l" PVC="azuredisk-4577/pvc-sb7g7"
I0624 21:43:23.402243       1 pvc_protection_controller.go:174] "Keeping PVC because it is being used" PVC="azuredisk-4577/pvc-sb7g7"
I0624 21:43:30.609423       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:43:30.609414       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:43:30.609469       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
... skipping 22 lines ...
I0624 21:44:20.611916       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:44:20.611918       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:44:20.611969       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:44:30.612522       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:44:30.612558       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:44:30.612541       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
E0624 21:44:30.971315       1 pv_protection_controller.go:114] PV pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c failed with : Operation cannot be fulfilled on persistentvolumes "pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c": the object has been modified; please apply your changes to the latest version and try again
I0624 21:44:30.978029       1 pv_controller_base.go:533] deletion of claim "azuredisk-4577/pvc-sb7g7" was already processed
I0624 21:44:34.235180       1 pvc_protection_controller.go:281] "Pod uses PVC" pod="azuredisk-4577/azuredisk-volume-tester-gscdg" PVC="azuredisk-4577/pvc-v6fkn"
I0624 21:44:34.235210       1 pvc_protection_controller.go:174] "Keeping PVC because it is being used" PVC="azuredisk-4577/pvc-v6fkn"
I0624 21:44:40.612803       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:44:40.612805       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:44:40.612818       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
... skipping 22 lines ...
I0624 21:45:30.617196       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:45:30.617227       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:45:30.617206       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:45:40.617495       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:45:40.617498       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:45:40.617511       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
E0624 21:45:41.997471       1 pv_protection_controller.go:114] PV pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63": the object has been modified; please apply your changes to the latest version and try again
I0624 21:45:42.002836       1 pv_controller_base.go:533] deletion of claim "azuredisk-4577/pvc-v6fkn" was already processed
I0624 21:45:45.698652       1 event.go:294] "Event occurred" object="azuredisk-1089/pvc-hnplq" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0624 21:45:45.739300       1 event.go:294] "Event occurred" object="azuredisk-1089/azuredisk-volume-tester-g4cg5" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set azuredisk-volume-tester-g4cg5-7595f576f6 to 1"
I0624 21:45:45.741625       1 replica_set.go:563] "Too few replicas" replicaSet="azuredisk-1089/azuredisk-volume-tester-g4cg5-7595f576f6" need=1 creating=1
I0624 21:45:45.752633       1 event.go:294] "Event occurred" object="azuredisk-1089/azuredisk-volume-tester-g4cg5-7595f576f6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azuredisk-volume-tester-g4cg5-7595f576f6-8fkxm"
I0624 21:45:45.752997       1 deployment_controller.go:490] "Error syncing deployment" deployment="azuredisk-1089/azuredisk-volume-tester-g4cg5" err="Operation cannot be fulfilled on deployments.apps \"azuredisk-volume-tester-g4cg5\": the object has been modified; please apply your changes to the latest version and try again"
I0624 21:45:45.775740       1 event.go:294] "Event occurred" object="azuredisk-1089/pvc-hnplq" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I0624 21:45:48.705256       1 pv_controller.go:887] volume "pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" entered phase "Bound"
I0624 21:45:48.705295       1 pv_controller.go:990] volume "pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" bound to claim "azuredisk-1089/pvc-hnplq"
I0624 21:45:48.713453       1 pv_controller.go:831] claim "azuredisk-1089/pvc-hnplq" entered phase "Bound"
I0624 21:45:48.794187       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f") from node "k8s-agentpool1-11903559-1" 
E0624 21:45:50.015931       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-4577/default: secrets "default-token-gzz7l" is forbidden: unable to create new content in namespace azuredisk-4577 because it is being terminated
I0624 21:45:50.388853       1 operation_generator.go:413] AttachVolume.Attach succeeded for volume "pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f") from node "k8s-agentpool1-11903559-1" 
I0624 21:45:50.389142       1 event.go:294] "Event occurred" object="azuredisk-1089/azuredisk-volume-tester-g4cg5-7595f576f6-8fkxm" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f\" "
I0624 21:45:50.618492       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:45:50.618497       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:45:50.618509       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:45:55.266585       1 namespace_controller.go:185] Namespace has been deleted azuredisk-4577
I0624 21:46:00.619399       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:46:00.619415       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:46:00.619400       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:46:06.712954       1 replica_set.go:563] "Too few replicas" replicaSet="azuredisk-1089/azuredisk-volume-tester-g4cg5-7595f576f6" need=1 creating=1
I0624 21:46:06.719653       1 event.go:294] "Event occurred" object="azuredisk-1089/azuredisk-volume-tester-g4cg5-7595f576f6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azuredisk-volume-tester-g4cg5-7595f576f6-jhqql"
W0624 21:46:06.820010       1 reconciler.go:385] Multi-Attach error for volume "pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f") from node "k8s-agentpool1-11903559-0" Volume is already used by pods azuredisk-1089/azuredisk-volume-tester-g4cg5-7595f576f6-8fkxm on node k8s-agentpool1-11903559-1
I0624 21:46:06.820069       1 event.go:294] "Event occurred" object="azuredisk-1089/azuredisk-volume-tester-g4cg5-7595f576f6-jhqql" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f\" Volume is already used by pod(s) azuredisk-volume-tester-g4cg5-7595f576f6-8fkxm"
I0624 21:46:10.619502       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:46:10.619501       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:46:10.619516       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:46:20.620244       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:46:20.620258       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:46:20.620288       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
... skipping 56 lines ...
I0624 21:48:20.628181       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:48:20.628225       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:48:20.628241       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:48:30.628771       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:48:30.628779       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:48:30.628904       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
E0624 21:48:35.362880       1 pv_protection_controller.go:114] PV pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f failed with : Operation cannot be fulfilled on persistentvolumes "pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f": the object has been modified; please apply your changes to the latest version and try again
I0624 21:48:35.368147       1 pv_controller_base.go:533] deletion of claim "azuredisk-1089/pvc-hnplq" was already processed
I0624 21:48:40.000294       1 event.go:294] "Event occurred" object="azuredisk-2902/pvc-9z2k9" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I0624 21:48:40.629734       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:48:40.629735       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:48:40.629766       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:48:42.520517       1 pv_controller.go:887] volume "pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" entered phase "Bound"
I0624 21:48:42.520551       1 pv_controller.go:990] volume "pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" bound to claim "azuredisk-2902/pvc-9z2k9"
I0624 21:48:42.528732       1 pv_controller.go:831] claim "azuredisk-2902/pvc-9z2k9" entered phase "Bound"
E0624 21:48:44.026873       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1089/default: secrets "default-token-66n85" is forbidden: unable to create new content in namespace azuredisk-1089 because it is being terminated
I0624 21:48:44.222368       1 pvc_protection_controller.go:269] "PVC is unused" PVC="azuredisk-2902/pvc-9z2k9"
I0624 21:48:44.231725       1 pv_controller.go:648] volume "pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" is released and reclaim policy "Delete" will be executed
I0624 21:48:44.234972       1 pv_controller.go:887] volume "pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" entered phase "Released"
I0624 21:48:49.082475       1 namespace_controller.go:185] Namespace has been deleted azuredisk-1089
I0624 21:48:49.559157       1 pv_controller_base.go:533] deletion of claim "azuredisk-2902/pvc-9z2k9" was already processed
I0624 21:48:50.630321       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
... skipping 2 lines ...
I0624 21:48:58.627993       1 event.go:294] "Event occurred" object="azuredisk-2035/pvc-vc9pt" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0624 21:48:58.699631       1 event.go:294] "Event occurred" object="azuredisk-2035/pvc-9rmq5" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0624 21:48:58.771269       1 event.go:294] "Event occurred" object="azuredisk-2035/pvc-5ppfp" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0624 21:48:58.824715       1 event.go:294] "Event occurred" object="azuredisk-2035/pvc-vc9pt" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I0624 21:48:58.848162       1 event.go:294] "Event occurred" object="azuredisk-2035/pvc-9rmq5" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I0624 21:48:58.896114       1 event.go:294] "Event occurred" object="azuredisk-2035/pvc-5ppfp" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
E0624 21:48:59.643363       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2902/default: secrets "default-token-w7l4x" is forbidden: unable to create new content in namespace azuredisk-2902 because it is being terminated
I0624 21:49:00.631319       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:49:00.631341       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:49:00.631347       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
E0624 21:49:00.652543       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1755/default: secrets "default-token-fs67w" is forbidden: unable to create new content in namespace azuredisk-1755 because it is being terminated
I0624 21:49:01.368955       1 pv_controller.go:887] volume "pvc-702936c8-510b-416e-ae83-ef3b3dd48539" entered phase "Bound"
I0624 21:49:01.370494       1 pv_controller.go:990] volume "pvc-702936c8-510b-416e-ae83-ef3b3dd48539" bound to claim "azuredisk-2035/pvc-vc9pt"
I0624 21:49:01.380514       1 pv_controller.go:831] claim "azuredisk-2035/pvc-vc9pt" entered phase "Bound"
I0624 21:49:01.409421       1 pv_controller.go:887] volume "pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" entered phase "Bound"
I0624 21:49:01.409641       1 pv_controller.go:990] volume "pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" bound to claim "azuredisk-2035/pvc-9rmq5"
I0624 21:49:01.423265       1 pv_controller.go:831] claim "azuredisk-2035/pvc-9rmq5" entered phase "Bound"
E0624 21:49:01.638914       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3540/default: secrets "default-token-sbbmv" is forbidden: unable to create new content in namespace azuredisk-3540 because it is being terminated
I0624 21:49:02.019024       1 pv_controller.go:887] volume "pvc-a4f0efff-7524-436c-b648-c261c57da76f" entered phase "Bound"
I0624 21:49:02.020122       1 pv_controller.go:990] volume "pvc-a4f0efff-7524-436c-b648-c261c57da76f" bound to claim "azuredisk-2035/pvc-5ppfp"
I0624 21:49:02.034972       1 pv_controller.go:831] claim "azuredisk-2035/pvc-5ppfp" entered phase "Bound"
E0624 21:49:02.651443       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2127/default: secrets "default-token-dznf6" is forbidden: unable to create new content in namespace azuredisk-2127 because it is being terminated
I0624 21:49:02.945681       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-702936c8-510b-416e-ae83-ef3b3dd48539" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-702936c8-510b-416e-ae83-ef3b3dd48539") from node "k8s-agentpool1-11903559-1" 
I0624 21:49:02.945812       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41") from node "k8s-agentpool1-11903559-1" 
I0624 21:49:02.945872       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-a4f0efff-7524-436c-b648-c261c57da76f" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a4f0efff-7524-436c-b648-c261c57da76f") from node "k8s-agentpool1-11903559-1" 
I0624 21:49:04.558723       1 operation_generator.go:413] AttachVolume.Attach succeeded for volume "pvc-a4f0efff-7524-436c-b648-c261c57da76f" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a4f0efff-7524-436c-b648-c261c57da76f") from node "k8s-agentpool1-11903559-1" 
I0624 21:49:04.558818       1 event.go:294] "Event occurred" object="azuredisk-2035/azuredisk-volume-tester-ccbz5" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-a4f0efff-7524-436c-b648-c261c57da76f\" "
I0624 21:49:04.580374       1 operation_generator.go:413] AttachVolume.Attach succeeded for volume "pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41") from node "k8s-agentpool1-11903559-1" 
... skipping 35 lines ...
I0624 21:50:00.635150       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:50:00.635150       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:50:00.635163       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:50:03.083939       1 pvc_protection_controller.go:269] "PVC is unused" PVC="azuredisk-2035/pvc-9rmq5"
I0624 21:50:03.093978       1 pv_controller.go:648] volume "pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" is released and reclaim policy "Delete" will be executed
I0624 21:50:03.100483       1 pv_controller.go:887] volume "pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" entered phase "Released"
E0624 21:50:08.418393       1 pv_protection_controller.go:114] PV pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41": the object has been modified; please apply your changes to the latest version and try again
I0624 21:50:08.424729       1 pv_controller_base.go:533] deletion of claim "azuredisk-2035/pvc-9rmq5" was already processed
I0624 21:50:10.635330       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:50:10.635332       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:50:10.635345       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:50:13.389287       1 pvc_protection_controller.go:269] "PVC is unused" PVC="azuredisk-2035/pvc-vc9pt"
I0624 21:50:13.396839       1 pv_controller.go:648] volume "pvc-702936c8-510b-416e-ae83-ef3b3dd48539" is released and reclaim policy "Delete" will be executed
I0624 21:50:13.405487       1 pv_controller.go:887] volume "pvc-702936c8-510b-416e-ae83-ef3b3dd48539" entered phase "Released"
E0624 21:50:18.702341       1 pv_protection_controller.go:114] PV pvc-702936c8-510b-416e-ae83-ef3b3dd48539 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-702936c8-510b-416e-ae83-ef3b3dd48539": the object has been modified; please apply your changes to the latest version and try again
I0624 21:50:18.707926       1 pv_controller_base.go:533] deletion of claim "azuredisk-2035/pvc-vc9pt" was already processed
I0624 21:50:20.635758       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:50:20.635782       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:50:20.635789       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:50:24.532203       1 event.go:294] "Event occurred" object="azuredisk-5351/pvc-cvtm8" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0624 21:50:24.606244       1 event.go:294] "Event occurred" object="azuredisk-5351/pvc-4d5k5" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
... skipping 38 lines ...
I0624 21:51:10.639139       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:51:10.639142       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:51:10.639156       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:51:20.708105       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:51:20.708118       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:51:20.708129       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
E0624 21:51:27.688482       1 pv_protection_controller.go:114] PV pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b failed with : Operation cannot be fulfilled on persistentvolumes "pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b": the object has been modified; please apply your changes to the latest version and try again
I0624 21:51:27.699585       1 pv_controller_base.go:533] deletion of claim "azuredisk-5351/pvc-4d5k5" was already processed
I0624 21:51:30.639517       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:51:30.639534       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:51:30.639545       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:51:31.849510       1 pvc_protection_controller.go:269] "PVC is unused" PVC="azuredisk-5351/pvc-cvtm8"
I0624 21:51:31.880289       1 pv_controller.go:648] volume "pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28" is released and reclaim policy "Delete" will be executed
I0624 21:51:31.894145       1 pv_controller.go:887] volume "pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28" entered phase "Released"
E0624 21:51:37.315168       1 pv_protection_controller.go:114] PV pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28": the object has been modified; please apply your changes to the latest version and try again
I0624 21:51:37.321633       1 pv_controller_base.go:533] deletion of claim "azuredisk-5351/pvc-cvtm8" was already processed
I0624 21:51:40.640032       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:51:40.640032       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:51:40.640045       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:51:44.288541       1 event.go:294] "Event occurred" object="azuredisk-2681/pvc-jhsks" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0624 21:51:44.363380       1 event.go:294] "Event occurred" object="azuredisk-2681/pvc-mzd5l" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
... skipping 9 lines ...
I0624 21:51:47.154458       1 pv_controller.go:887] volume "pvc-d6cd746f-cec4-47fc-96f8-6a057f916592" entered phase "Bound"
I0624 21:51:47.154505       1 pv_controller.go:990] volume "pvc-d6cd746f-cec4-47fc-96f8-6a057f916592" bound to claim "azuredisk-2681/pvc-mzd5l"
I0624 21:51:47.166266       1 pv_controller.go:831] claim "azuredisk-2681/pvc-mzd5l" entered phase "Bound"
I0624 21:51:47.174388       1 pv_controller.go:887] volume "pvc-0a5e4fec-a46c-4716-aaba-5787074aec68" entered phase "Bound"
I0624 21:51:47.174740       1 pv_controller.go:990] volume "pvc-0a5e4fec-a46c-4716-aaba-5787074aec68" bound to claim "azuredisk-2681/pvc-qxv5b"
I0624 21:51:47.219747       1 pv_controller.go:831] claim "azuredisk-2681/pvc-qxv5b" entered phase "Bound"
E0624 21:51:47.277328       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5351/default: secrets "default-token-f5pj7" is forbidden: unable to create new content in namespace azuredisk-5351 because it is being terminated
I0624 21:51:47.579300       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-5aff9e58-d93e-40a1-9d0e-595b6c8b4dda" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-5aff9e58-d93e-40a1-9d0e-595b6c8b4dda") from node "k8s-agentpool1-11903559-1" 
I0624 21:51:47.579330       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-d6cd746f-cec4-47fc-96f8-6a057f916592" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-d6cd746f-cec4-47fc-96f8-6a057f916592") from node "k8s-agentpool1-11903559-1" 
I0624 21:51:47.579345       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-0a5e4fec-a46c-4716-aaba-5787074aec68" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-0a5e4fec-a46c-4716-aaba-5787074aec68") from node "k8s-agentpool1-11903559-1" 
E0624 21:51:47.959014       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1841/default: secrets "default-token-8b8tg" is forbidden: unable to create new content in namespace azuredisk-1841 because it is being terminated
I0624 21:51:49.272472       1 operation_generator.go:413] AttachVolume.Attach succeeded for volume "pvc-d6cd746f-cec4-47fc-96f8-6a057f916592" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-d6cd746f-cec4-47fc-96f8-6a057f916592") from node "k8s-agentpool1-11903559-1" 
I0624 21:51:49.272508       1 operation_generator.go:413] AttachVolume.Attach succeeded for volume "pvc-5aff9e58-d93e-40a1-9d0e-595b6c8b4dda" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-5aff9e58-d93e-40a1-9d0e-595b6c8b4dda") from node "k8s-agentpool1-11903559-1" 
I0624 21:51:49.272568       1 event.go:294] "Event occurred" object="azuredisk-2681/azuredisk-volume-tester-8qsf4" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-d6cd746f-cec4-47fc-96f8-6a057f916592\" "
I0624 21:51:49.272584       1 event.go:294] "Event occurred" object="azuredisk-2681/azuredisk-volume-tester-8qsf4" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-5aff9e58-d93e-40a1-9d0e-595b6c8b4dda\" "
I0624 21:51:49.275876       1 operation_generator.go:413] AttachVolume.Attach succeeded for volume "pvc-0a5e4fec-a46c-4716-aaba-5787074aec68" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-0a5e4fec-a46c-4716-aaba-5787074aec68") from node "k8s-agentpool1-11903559-1" 
I0624 21:51:49.276202       1 event.go:294] "Event occurred" object="azuredisk-2681/azuredisk-volume-tester-8qsf4" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-0a5e4fec-a46c-4716-aaba-5787074aec68\" "
... skipping 34 lines ...
I0624 21:52:50.645476       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:52:50.645493       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:52:50.645481       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:52:51.651030       1 pvc_protection_controller.go:269] "PVC is unused" PVC="azuredisk-2681/pvc-mzd5l"
I0624 21:52:51.660729       1 pv_controller.go:648] volume "pvc-d6cd746f-cec4-47fc-96f8-6a057f916592" is released and reclaim policy "Delete" will be executed
I0624 21:52:51.668501       1 pv_controller.go:887] volume "pvc-d6cd746f-cec4-47fc-96f8-6a057f916592" entered phase "Released"
E0624 21:52:57.020963       1 pv_protection_controller.go:114] PV pvc-d6cd746f-cec4-47fc-96f8-6a057f916592 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-d6cd746f-cec4-47fc-96f8-6a057f916592": the object has been modified; please apply your changes to the latest version and try again
I0624 21:52:57.027440       1 pv_controller_base.go:533] deletion of claim "azuredisk-2681/pvc-mzd5l" was already processed
E0624 21:52:57.029132       1 pv_protection_controller.go:114] PV pvc-d6cd746f-cec4-47fc-96f8-6a057f916592 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-d6cd746f-cec4-47fc-96f8-6a057f916592": StorageError: invalid object, Code: 4, Key: /registry/persistentvolumes/pvc-d6cd746f-cec4-47fc-96f8-6a057f916592, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 32a91e1b-723a-4b42-a093-4e856a10e437, UID in object meta: 
I0624 21:53:00.646187       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:53:00.646187       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:53:00.646222       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:53:01.935431       1 pvc_protection_controller.go:269] "PVC is unused" PVC="azuredisk-2681/pvc-jhsks"
I0624 21:53:01.943212       1 pv_controller.go:648] volume "pvc-5aff9e58-d93e-40a1-9d0e-595b6c8b4dda" is released and reclaim policy "Delete" will be executed
I0624 21:53:01.950259       1 pv_controller.go:887] volume "pvc-5aff9e58-d93e-40a1-9d0e-595b6c8b4dda" entered phase "Released"
E0624 21:53:07.346784       1 pv_protection_controller.go:114] PV pvc-5aff9e58-d93e-40a1-9d0e-595b6c8b4dda failed with : Operation cannot be fulfilled on persistentvolumes "pvc-5aff9e58-d93e-40a1-9d0e-595b6c8b4dda": the object has been modified; please apply your changes to the latest version and try again
I0624 21:53:07.352761       1 pv_controller_base.go:533] deletion of claim "azuredisk-2681/pvc-jhsks" was already processed
I0624 21:53:10.646792       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:53:10.646792       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:53:10.646815       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:53:15.754334       1 event.go:294] "Event occurred" object="azuredisk-8009/pvc-dwv2f" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0624 21:53:15.806577       1 event.go:294] "Event occurred" object="azuredisk-8009/pvc-dwv2f" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
E0624 21:53:17.421103       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2681/default: secrets "default-token-dtkmt" is forbidden: unable to create new content in namespace azuredisk-2681 because it is being terminated
I0624 21:53:18.335438       1 pv_controller.go:887] volume "pvc-ae793509-ca61-4c49-b32e-53d792922060" entered phase "Bound"
I0624 21:53:18.335478       1 pv_controller.go:990] volume "pvc-ae793509-ca61-4c49-b32e-53d792922060" bound to claim "azuredisk-8009/pvc-dwv2f"
I0624 21:53:18.344336       1 pv_controller.go:831] claim "azuredisk-8009/pvc-dwv2f" entered phase "Bound"
E0624 21:53:18.771111       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8398/default: secrets "default-token-flfl5" is forbidden: unable to create new content in namespace azuredisk-8398 because it is being terminated
I0624 21:53:18.849005       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-ae793509-ca61-4c49-b32e-53d792922060" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ae793509-ca61-4c49-b32e-53d792922060") from node "k8s-agentpool1-11903559-1" 
E0624 21:53:19.496277       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-296/default: secrets "default-token-zc2ws" is forbidden: unable to create new content in namespace azuredisk-296 because it is being terminated
E0624 21:53:20.209408       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8115/default: secrets "default-token-cnfbk" is forbidden: unable to create new content in namespace azuredisk-8115 because it is being terminated
I0624 21:53:20.462446       1 operation_generator.go:413] AttachVolume.Attach succeeded for volume "pvc-ae793509-ca61-4c49-b32e-53d792922060" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ae793509-ca61-4c49-b32e-53d792922060") from node "k8s-agentpool1-11903559-1" 
I0624 21:53:20.462718       1 event.go:294] "Event occurred" object="azuredisk-8009/azuredisk-volume-tester-z67z6" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-ae793509-ca61-4c49-b32e-53d792922060\" "
I0624 21:53:20.647874       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:53:20.647917       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:53:20.647933       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:53:22.509118       1 namespace_controller.go:185] Namespace has been deleted azuredisk-2681
... skipping 28 lines ...
I0624 21:54:36.958147       1 pvc_protection_controller.go:269] "PVC is unused" PVC="azuredisk-8009/pvc-dwv2f"
I0624 21:54:36.977729       1 pv_controller.go:648] volume "pvc-ae793509-ca61-4c49-b32e-53d792922060" is released and reclaim policy "Delete" will be executed
I0624 21:54:36.991526       1 pv_controller.go:887] volume "pvc-ae793509-ca61-4c49-b32e-53d792922060" entered phase "Released"
I0624 21:54:40.650958       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:54:40.650962       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:54:40.650975       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
E0624 21:54:42.323744       1 pv_protection_controller.go:114] PV pvc-ae793509-ca61-4c49-b32e-53d792922060 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-ae793509-ca61-4c49-b32e-53d792922060": the object has been modified; please apply your changes to the latest version and try again
I0624 21:54:42.327062       1 pv_controller_base.go:533] deletion of claim "azuredisk-8009/pvc-dwv2f" was already processed
I0624 21:54:48.033784       1 event.go:294] "Event occurred" object="azuredisk-5756/pvc-x742l" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0624 21:54:48.099542       1 event.go:294] "Event occurred" object="azuredisk-5756/pvc-x742l" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"disk.csi.azure.com\" or manually created by system administrator"
I0624 21:54:50.651746       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:54:50.651816       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:54:50.651831       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:54:50.661733       1 pv_controller.go:887] volume "pvc-ca4a09fd-21ee-4b4f-971d-e41a2cf38b45" entered phase "Bound"
I0624 21:54:50.661906       1 pv_controller.go:990] volume "pvc-ca4a09fd-21ee-4b4f-971d-e41a2cf38b45" bound to claim "azuredisk-5756/pvc-x742l"
I0624 21:54:50.674727       1 pv_controller.go:831] claim "azuredisk-5756/pvc-x742l" entered phase "Bound"
I0624 21:54:51.106757       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-ca4a09fd-21ee-4b4f-971d-e41a2cf38b45" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ca4a09fd-21ee-4b4f-971d-e41a2cf38b45") from node "k8s-agentpool1-11903559-1" 
E0624 21:54:52.528222       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8009/default: secrets "default-token-dn86s" is forbidden: unable to create new content in namespace azuredisk-8009 because it is being terminated
I0624 21:54:52.817972       1 operation_generator.go:413] AttachVolume.Attach succeeded for volume "pvc-ca4a09fd-21ee-4b4f-971d-e41a2cf38b45" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ca4a09fd-21ee-4b4f-971d-e41a2cf38b45") from node "k8s-agentpool1-11903559-1" 
I0624 21:54:52.818112       1 event.go:294] "Event occurred" object="azuredisk-5756/azuredisk-volume-tester-fn9vx" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-ca4a09fd-21ee-4b4f-971d-e41a2cf38b45\" "
I0624 21:54:57.537306       1 namespace_controller.go:185] Namespace has been deleted azuredisk-8009
I0624 21:55:00.651720       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:55:00.651731       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:55:00.651746       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
... skipping 62 lines ...
I0624 21:57:10.658399       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:57:10.658414       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:57:20.659138       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:57:20.659172       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:57:20.659185       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:57:23.438594       1 event.go:294] "Event occurred" object="azuredisk-7600/azuredisk-volume-tester-mh8jj" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod azuredisk-volume-tester-mh8jj-0 in StatefulSet azuredisk-volume-tester-mh8jj successful"
I0624 21:57:23.465289       1 event.go:294] "Event occurred" object="azuredisk-7600/azuredisk-volume-tester-mh8jj-0" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-590e5cd2-2818-4170-9f48-86afd40c0d3a\" Volume is already exclusively attached to one node and can't be attached to another"
W0624 21:57:23.465096       1 reconciler.go:344] Multi-Attach error for volume "pvc-590e5cd2-2818-4170-9f48-86afd40c0d3a" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-590e5cd2-2818-4170-9f48-86afd40c0d3a") from node "k8s-agentpool1-11903559-0" Volume is already exclusively attached to node k8s-agentpool1-11903559-1 and can't be attached to another
I0624 21:57:30.659990       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:57:30.660018       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:57:30.660025       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:57:30.837070       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-590e5cd2-2818-4170-9f48-86afd40c0d3a" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-590e5cd2-2818-4170-9f48-86afd40c0d3a") on node "k8s-agentpool1-11903559-1" 
I0624 21:57:30.839794       1 operation_generator.go:1641] Verified volume is safe to detach for volume "pvc-590e5cd2-2818-4170-9f48-86afd40c0d3a" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-590e5cd2-2818-4170-9f48-86afd40c0d3a") on node "k8s-agentpool1-11903559-1" 
I0624 21:57:40.661079       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
... skipping 44 lines ...
I0624 21:59:00.667063       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:59:00.667077       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:59:03.517802       1 operation_generator.go:528] DetachVolume.Detach succeeded for volume "pvc-590e5cd2-2818-4170-9f48-86afd40c0d3a" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-590e5cd2-2818-4170-9f48-86afd40c0d3a") on node "k8s-agentpool1-11903559-0" 
I0624 21:59:10.667317       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:59:10.667320       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:59:10.667332       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
E0624 21:59:14.584764       1 pv_protection_controller.go:114] PV pvc-590e5cd2-2818-4170-9f48-86afd40c0d3a failed with : Operation cannot be fulfilled on persistentvolumes "pvc-590e5cd2-2818-4170-9f48-86afd40c0d3a": the object has been modified; please apply your changes to the latest version and try again
I0624 21:59:14.590050       1 pv_controller_base.go:533] deletion of claim "azuredisk-7600/pvc1-azuredisk-volume-tester-mh8jj-0" was already processed
I0624 21:59:20.667645       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:59:20.667708       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:59:20.667719       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
E0624 21:59:22.055806       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7600/default: secrets "default-token-56glz" is forbidden: unable to create new content in namespace azuredisk-7600 because it is being terminated
E0624 21:59:24.853207       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-6103/default: secrets "default-token-ph7h9" is forbidden: unable to create new content in namespace azuredisk-6103 because it is being terminated
E0624 21:59:25.577059       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3988/default: secrets "default-token-hwmzl" is forbidden: unable to create new content in namespace azuredisk-3988 because it is being terminated
E0624 21:59:26.300996       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2166/default: secrets "default-token-88mgr" is forbidden: unable to create new content in namespace azuredisk-2166 because it is being terminated
E0624 21:59:26.938880       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-244/default: secrets "default-token-c9vst" is forbidden: unable to create new content in namespace azuredisk-244 because it is being terminated
I0624 21:59:27.231696       1 namespace_controller.go:185] Namespace has been deleted azuredisk-7600
E0624 21:59:27.543276       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5372/default: secrets "default-token-n7rtk" is forbidden: unable to create new content in namespace azuredisk-5372 because it is being terminated
I0624 21:59:27.795877       1 namespace_controller.go:185] Namespace has been deleted azuredisk-8188
E0624 21:59:28.260969       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8583/default: secrets "default-token-t6hp8" is forbidden: unable to create new content in namespace azuredisk-8583 because it is being terminated
I0624 21:59:28.510280       1 namespace_controller.go:185] Namespace has been deleted azuredisk-5312
E0624 21:59:28.976297       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2877/default: secrets "default-token-zwfsl" is forbidden: unable to create new content in namespace azuredisk-2877 because it is being terminated
I0624 21:59:29.223372       1 namespace_controller.go:185] Namespace has been deleted azuredisk-5398
I0624 21:59:29.931339       1 namespace_controller.go:185] Namespace has been deleted azuredisk-6103
E0624 21:59:30.605171       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7893/default: secrets "default-token-mgg5r" is forbidden: unable to create new content in namespace azuredisk-7893 because it is being terminated
I0624 21:59:30.668072       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:59:30.668092       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:59:30.668099       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:59:30.684403       1 namespace_controller.go:185] Namespace has been deleted azuredisk-3988
I0624 21:59:31.387554       1 namespace_controller.go:185] Namespace has been deleted azuredisk-2166
E0624 21:59:31.675149       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1374/default: secrets "default-token-nrcpn" is forbidden: unable to create new content in namespace azuredisk-1374 because it is being terminated
I0624 21:59:32.071344       1 namespace_controller.go:185] Namespace has been deleted azuredisk-244
E0624 21:59:32.206527       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-4257/default: secrets "default-token-2vlwb" is forbidden: unable to create new content in namespace azuredisk-4257 because it is being terminated
I0624 21:59:32.660351       1 namespace_controller.go:185] Namespace has been deleted azuredisk-5372
E0624 21:59:33.261110       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5086/default: secrets "default-token-btnq9" is forbidden: unable to create new content in namespace azuredisk-5086 because it is being terminated
I0624 21:59:33.314183       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pre-provisioned-inline-volume" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pre-provisioned-inline-volume") from node "k8s-agentpool1-11903559-1" 
I0624 21:59:33.360072       1 namespace_controller.go:185] Namespace has been deleted azuredisk-8583
E0624 21:59:33.789729       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-6594/default: secrets "default-token-jm4ln" is forbidden: unable to create new content in namespace azuredisk-6594 because it is being terminated
I0624 21:59:34.056554       1 namespace_controller.go:185] Namespace has been deleted azuredisk-2877
I0624 21:59:34.688156       1 namespace_controller.go:185] Namespace has been deleted azuredisk-8074
I0624 21:59:34.972120       1 operation_generator.go:413] AttachVolume.Attach succeeded for volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pre-provisioned-inline-volume" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pre-provisioned-inline-volume") from node "k8s-agentpool1-11903559-1" 
I0624 21:59:34.972438       1 event.go:294] "Event occurred" object="azuredisk-6252/azuredisk-volume-tester-mp5mn" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pre-provisioned-inline-volume\" "
I0624 21:59:35.341928       1 namespace_controller.go:185] Namespace has been deleted azuredisk-8316
I0624 21:59:35.666217       1 namespace_controller.go:185] Namespace has been deleted azuredisk-7893
... skipping 8 lines ...
I0624 21:59:40.669068       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:59:50.669969       1 route_controller.go:295] set node k8s-agentpool1-11903559-1 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:59:50.669973       1 route_controller.go:295] set node k8s-agentpool1-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:59:50.669986       1 route_controller.go:295] set node k8s-master-11903559-0 with NodeNetworkUnavailable=false was canceled because it is already set
I0624 21:59:53.786397       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pre-provisioned-inline-volume" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pre-provisioned-inline-volume") on node "k8s-agentpool1-11903559-1" 
I0624 21:59:53.789971       1 operation_generator.go:1641] Verified volume is safe to detach for volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pre-provisioned-inline-volume" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pre-provisioned-inline-volume") on node "k8s-agentpool1-11903559-1" 
E0624 21:59:54.306210       1 csi_attacher.go:727] kubernetes.io/csi: detachment for VolumeAttachment for volume [/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pre-provisioned-inline-volume] failed: rpc error: code = Unknown desc = azvolume.disk.csi.azure.com "pre-provisioned-inline-volume" not found
E0624 21:59:54.306458       1 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pre-provisioned-inline-volume podName: nodeName:}" failed. No retries permitted until 2022-06-24 21:59:54.806431718 +0000 UTC m=+2239.263897105 (durationBeforeRetry 500ms). Error: DetachVolume.Detach failed for volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pre-provisioned-inline-volume" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pre-provisioned-inline-volume") on node "k8s-agentpool1-11903559-1" : rpc error: code = Unknown desc = azvolume.disk.csi.azure.com "pre-provisioned-inline-volume" not found
I0624 21:59:54.818039       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pre-provisioned-inline-volume" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pre-provisioned-inline-volume") on node "k8s-agentpool1-11903559-1" 
I0624 21:59:54.822849       1 operation_generator.go:1641] Verified volume is safe to detach for volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pre-provisioned-inline-volume" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pre-provisioned-inline-volume") on node "k8s-agentpool1-11903559-1" 
E0624 21:59:55.364098       1 csi_attacher.go:727] kubernetes.io/csi: detachment for VolumeAttachment for volume [/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pre-provisioned-inline-volume] failed: rpc error: code = Unknown desc = azvolume.disk.csi.azure.com "pre-provisioned-inline-volume" not found
E0624 21:59:55.364205       1 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pre-provisioned-inline-volume podName: nodeName:}" failed. No retries permitted until 2022-06-24 21:59:56.364180605 +0000 UTC m=+2240.821645992 (durationBeforeRetry 1s). Error: DetachVolume.Detach failed for volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pre-provisioned-inline-volume" (UniqueName: "kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pre-provisioned-inline-volume") on node "k8s-agentpool1-11903559-1" : rpc error: code = Unknown desc = azvolume.disk.csi.azure.com "pre-provisioned-inline-volume" not found
2022/06/24 21:59:55 ===================================================
2022/06/24 21:59:55 Check driver pods if restarts ...
check the driver pods if restarts ...
======================================================================================
2022/06/24 21:59:55 Check successfully
2022/06/24 21:59:55 create example deployments
... skipping 98 lines ...
I0624 21:34:59.333759       1 reflector.go:255] Listing and watching *v1beta2.AzDriverNode from sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117
I0624 21:34:59.333768       1 reflector.go:219] Starting reflector *v1beta2.AzVolumeAttachment (30s) from sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117
I0624 21:34:59.333776       1 reflector.go:255] Listing and watching *v1beta2.AzVolumeAttachment from sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117
I0624 21:34:59.433341       1 shared_informer.go:285] caches populated
I0624 21:34:59.433381       1 azuredisk_v2.go:188] driver userAgent: disk.csi.azure.com/latest-v2-5f5939f86db107e671b4778e00fd0672597e49a8 gc/go1.18.3 (amd64-linux) e2e-test
I0624 21:34:59.433403       1 azure_disk_utils.go:474] reading cloud config from secret kube-system/azure-cloud-provider
I0624 21:34:59.438486       1 azure_disk_utils.go:481] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found
I0624 21:34:59.438514       1 azure_disk_utils.go:486] could not read cloud config from secret kube-system/azure-cloud-provider
I0624 21:34:59.438523       1 azure_disk_utils.go:496] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json
I0624 21:34:59.438557       1 azure_disk_utils.go:504] read cloud config from file: /etc/kubernetes/azure.json successfully
I0624 21:34:59.439523       1 azure_auth.go:245] Using AzurePublicCloud environment
I0624 21:34:59.439570       1 azure_auth.go:130] azure: using client_id+client_secret to retrieve access token
I0624 21:34:59.439593       1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000
... skipping 26 lines ...
I0624 21:34:59.439916       1 azure_vmasclient.go:73] Azure AvailabilitySetsClient  (write ops) using rate limit config: QPS=100, bucket=1000
I0624 21:34:59.439971       1 azure.go:1003] attach/detach disk operation rate limit QPS: 1.333333, Bucket: 80
I0624 21:34:59.440009       1 azuredisk_v2.go:214] disable UseInstanceMetadata for controller
I0624 21:34:59.440023       1 azuredisk_v2.go:230] cloud: AzurePublicCloud, location: canadacentral, rg: kubetest-ybmpahy2, VMType: standard, PrimaryScaleSetName: , PrimaryAvailabilitySetName: agentpool1-availabilitySet-11903559, DisableAvailabilitySetNodes: false
I0624 21:34:59.440030       1 skus.go:121] NewNodeInfo: Starting to populate node and disk sku information.
I0624 21:34:59.673596       1 azure_armclient.go:153] Send.sendRequest original response: {
  "error": {
    "code": "BadRequest",
    "message": "The request URL is not valid."
  }
}
I0624 21:34:59.673619       1 azure_armclient.go:158] Send.sendRequest: response body does not contain ResourceGroupNotFound error code. Skip retrying regional host
I0624 21:34:59.673631       1 azure_vmclient.go:133] Received error in vm.get.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/virtualMachines/, error: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 400, RawError: {
  "error": {
    "code": "BadRequest",
    "message": "The request URL is not valid."
  }
}
E0624 21:34:59.673785       1 azure_standard.go:588] as.GetInstanceTypeByNodeName() failed: as.getVirtualMachine() err=Retriable: false, RetryAfter: 0s, HTTPStatusCode: 400, RawError: {
  "error": {
    "code": "BadRequest",
    "message": "The request URL is not valid."
  }
}
E0624 21:34:59.673811       1 azuredisk_v2.go:238] Failed to get node info. Error: NewNodeInfo: Failed to get instance type from Azure cloud provider, nodeName: , error: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 400, RawError: {
  "error": {
    "code": "BadRequest",
    "message": "The request URL is not valid."
  }
}
I0624 21:34:59.673917       1 mount_linux.go:208] Detected OS without systemd
I0624 21:34:59.673931       1 driver.go:81] Enabling controller service capability: CREATE_DELETE_VOLUME
... skipping 72 lines ...
I0624 21:35:00.187894       1 node_availability.go:133] AzDiskControllerManager "msg"="Controller set-up successful." "controller"="nodeavailability" "namespace"="azure-disk-csi" "partition"="csi-azuredisk-controller" 
I0624 21:35:00.188007       1 azuredisk_v2.go:425] Starting controller manager
I0624 21:35:00.188203       1 internal.go:362] AzDiskControllerManager "msg"="Starting server" "addr"={"IP":"::","Port":8090,"Zone":""} "kind"="metrics" "namespace"="azure-disk-csi" "partition"="csi-azuredisk-controller" "path"="/metrics" 
I0624 21:35:00.188335       1 shared_informer.go:285] caches populated
I0624 21:35:00.188399       1 leaderelection.go:248] attempting to acquire leader lease kube-system/csi-azuredisk-controller...
I0624 21:35:00.191068       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:35:00.191089       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:35:00.449804       1 utils.go:78] GRPC call: /csi.v1.Identity/GetPluginInfo
I0624 21:35:00.449826       1 utils.go:79] GRPC request: {}
I0624 21:35:00.449902       1 utils.go:85] GRPC response: {"name":"disk.csi.azure.com","vendor_version":"latest-v2-5f5939f86db107e671b4778e00fd0672597e49a8"}
I0624 21:35:00.450904       1 utils.go:78] GRPC call: /csi.v1.Identity/GetPluginCapabilities
I0624 21:35:00.450920       1 utils.go:79] GRPC request: {}
I0624 21:35:00.450953       1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"Service":{"type":2}}},{"Type":{"VolumeExpansion":{"type":2}}},{"Type":{"VolumeExpansion":{"type":1}}}]}
... skipping 7 lines ...
I0624 21:35:00.646239       1 utils.go:79] GRPC request: {}
I0624 21:35:00.646282       1 utils.go:85] GRPC response: {"name":"disk.csi.azure.com","vendor_version":"latest-v2-5f5939f86db107e671b4778e00fd0672597e49a8"}
I0624 21:35:00.647375       1 utils.go:78] GRPC call: /csi.v1.Controller/ControllerGetCapabilities
I0624 21:35:00.647390       1 utils.go:79] GRPC request: {}
I0624 21:35:00.647461       1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":9}}},{"Type":{"Rpc":{"type":13}}}]}
I0624 21:35:03.566363       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:35:03.566388       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:35:06.186631       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:35:06.186657       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:35:09.801950       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:35:09.802366       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:35:14.129457       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:35:14.129479       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:35:16.699688       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:35:16.699712       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:35:19.919910       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:35:19.919930       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:35:24.169247       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:35:24.169274       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:35:27.801472       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:35:27.801502       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:35:29.364378       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:35:29.364388       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:35:29.364452       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:35:30.088275       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:35:30.088295       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:35:30.088325       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:35:30.357780       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:35:30.357804       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:35:32.413215       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:35:32.413250       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:35:34.886297       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:35:34.886322       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:35:37.607423       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:35:37.607450       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:35:39.787957       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:35:39.787985       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:35:42.951240       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:35:42.951264       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:35:46.248630       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:35:46.248654       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:35:49.528018       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:35:49.528047       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:35:52.247966       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:35:52.247992       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:35:54.303437       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:35:54.303460       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:35:57.261841       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:35:57.261871       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:35:59.365283       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:35:59.365343       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:35:59.365295       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:35:59.437572       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:35:59.437599       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:36:00.089111       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:36:00.089211       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:36:00.089119       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:36:01.478776       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:36:01.478801       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:36:05.203520       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:36:05.203545       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:36:07.499413       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:36:07.499439       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:36:11.435844       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:36:11.435866       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:36:14.024075       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:36:14.024102       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:36:17.626520       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:36:17.626545       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:36:21.998371       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:36:21.998394       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:36:25.851552       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:36:25.851637       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:36:29.365589       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:36:29.365609       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:36:29.365599       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:36:30.089850       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:36:30.089874       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:36:30.089860       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:36:30.129949       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:36:30.129973       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:36:32.600828       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:36:32.600848       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:36:36.138623       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:36:36.138645       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:36:39.167371       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:36:39.167393       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:36:42.664387       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:36:42.664408       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:36:46.923142       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:36:46.923169       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:36:49.087755       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:36:49.087854       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:36:51.625492       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:36:51.625521       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:36:54.046624       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:36:54.046650       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:36:58.233294       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:36:58.233318       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:36:59.365785       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:36:59.365795       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:36:59.365806       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:37:00.090361       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:37:00.090727       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:37:00.090807       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:37:02.607506       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:37:02.607530       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:37:05.511042       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:37:05.511067       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:37:08.237383       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:37:08.237407       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:37:12.541238       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:37:12.541264       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:37:15.074484       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:37:15.074511       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:37:17.613354       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:37:17.613382       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:37:20.852440       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:37:20.852534       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:37:23.454284       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:37:23.454311       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:37:27.761836       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:37:27.761864       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:37:29.365987       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:37:29.366018       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:37:29.366054       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:37:30.090821       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:37:30.091955       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:37:30.091969       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:37:32.044803       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:37:32.044829       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:37:36.193867       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:37:36.193942       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:37:39.512362       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:37:39.512385       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:37:42.479266       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:37:42.479289       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:37:45.488240       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:37:45.488266       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:37:47.801605       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:37:47.801629       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:37:50.584303       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:37:50.584328       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:37:54.617922       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:37:54.617947       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:37:58.497462       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:37:58.497488       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:37:59.366188       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:37:59.366293       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:37:59.366347       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:38:00.091212       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:38:00.092301       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:38:00.092430       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:38:02.513808       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:38:02.513831       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:38:06.816802       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:38:06.816828       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:38:09.974668       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:38:09.974692       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:38:14.364103       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:38:14.364129       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:38:18.468203       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:38:18.468229       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:38:20.494828       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:38:20.494854       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:38:24.069162       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:38:24.069187       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:38:28.468286       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:38:28.468316       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:38:29.367103       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:38:29.367128       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:38:29.367117       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:38:30.091458       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:38:30.092527       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:38:30.092592       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:38:31.208390       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:38:31.208415       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:38:34.175650       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:38:34.175675       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:38:36.922513       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:38:36.922538       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:38:41.151420       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:38:41.151450       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:38:43.798696       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:38:43.798723       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:38:46.521295       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:38:46.521321       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:38:49.187457       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:38:49.187483       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:38:51.844315       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:38:51.844340       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:38:55.797817       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:38:55.797842       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:38:59.367455       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:38:59.367476       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:38:59.367465       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:38:59.662793       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:38:59.662814       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:39:00.091632       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:39:00.092929       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:39:00.092947       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:39:01.811698       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:39:01.811722       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:39:05.033688       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:39:05.033766       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:39:09.342488       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:39:09.342511       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:39:11.563998       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:39:11.564021       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:39:15.482803       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:39:15.482826       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:39:18.827196       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:39:18.827219       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:39:22.371147       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:39:22.371176       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:39:25.615351       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:39:25.615379       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:39:29.367650       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:39:29.367704       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:39:29.367667       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:39:29.516656       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:39:29.516683       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:39:30.092184       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:39:30.093312       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:39:30.093464       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:39:31.783969       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:39:31.783994       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:39:36.044758       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:39:36.044784       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:39:38.303419       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:39:38.303444       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:39:40.528320       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:39:40.528346       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:39:44.538700       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:39:44.538729       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:39:47.946470       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:39:47.946494       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:39:50.615843       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:39:50.615869       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:39:54.294381       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:39:54.294451       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:39:56.534657       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:39:56.534685       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:39:59.369850       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:39:59.369875       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:39:59.369866       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:40:00.093359       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:40:00.093391       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:40:00.094712       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:40:00.283252       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:40:00.283273       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:40:03.680756       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:40:03.680781       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:40:06.290866       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:40:06.290894       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:40:10.041314       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:40:10.041339       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:40:13.390939       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:40:13.390965       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:40:15.566557       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:40:15.566582       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:40:18.913060       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:40:18.913092       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:40:22.858562       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:40:22.858591       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:40:26.744984       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:40:26.745009       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:40:29.370159       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:40:29.370188       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:40:29.370169       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:40:30.093652       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:40:30.093664       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:40:30.094813       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:40:31.082402       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:40:31.082426       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:40:35.486104       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:40:35.486130       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:40:39.212153       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:40:39.212179       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:40:42.341594       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:40:42.341618       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:40:45.594116       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:40:45.594143       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:40:48.538257       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:40:48.538409       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:40:50.732164       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:40:50.732191       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:40:52.753096       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:40:52.753122       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:40:55.939012       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:40:55.939036       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:40:59.370847       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:40:59.370868       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:40:59.370910       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:40:59.682011       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:40:59.682038       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:41:00.094277       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:41:00.094355       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:41:00.095392       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:41:01.950808       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:41:01.950868       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:41:04.366435       1 reflector.go:536] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: Watch close - *v1beta2.AzDriverNode total 43 items received
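
A "Watch close - <type> total N items received" line means the API server ended a long-running watch after delivering N events; the reflector simply re-lists and re-watches, so, like the resyncs above, this is routine. For illustration only, the same reflector machinery can be driven directly; the controller actually uses the generated informers, and the pods ListWatch below is a stand-in for the AzDriverNode/AzVolume watches in this log.

package reflectorexample

import (
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// runReflector list/watches one resource type into a local store. When the
// API server closes the watch, the reflector logs the "Watch close" line and
// re-establishes the watch on its own.
func runReflector(client kubernetes.Interface, stopCh <-chan struct{}) {
	lw := cache.NewListWatchFromClient(
		client.CoreV1().RESTClient(), "pods", "kube-system", fields.Everything())
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	r := cache.NewReflector(lw, &v1.Pod{}, store, 30*time.Second)
	go r.Run(stopCh)
}
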
I0624 21:41:05.645743       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:41:05.645766       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:41:08.143087       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:41:08.143112       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:41:11.319897       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:41:11.319921       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:41:14.747023       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:41:14.747051       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:41:18.911743       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:41:18.911766       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:41:22.224896       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:41:22.224918       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:41:25.822523       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:41:25.822547       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:41:28.274763       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:41:28.274787       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:41:29.371827       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:41:29.371961       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:41:29.372023       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:41:30.095536       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:41:30.095542       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:41:30.095553       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:41:31.990057       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:41:31.990083       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:41:34.560404       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:41:34.560427       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:41:37.815534       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:41:37.815561       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:41:40.911039       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:41:40.911077       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:41:43.965387       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:41:43.965419       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:41:47.562426       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:41:47.562452       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:41:51.366135       1 reflector.go:536] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: Watch close - *v1beta2.AzVolume total 41 items received
I0624 21:41:51.968227       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:41:51.968252       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:41:56.287681       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:41:56.287704       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:41:58.526721       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:41:58.526743       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:41:59.372029       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:41:59.372051       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:41:59.372039       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:42:00.095824       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:42:00.095848       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:42:00.095901       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:42:02.535123       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:42:02.535148       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:42:06.447519       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:42:06.447541       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:42:08.823934       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:42:08.823966       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:42:13.216665       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:42:13.216688       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:42:17.026280       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:42:17.026303       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:42:20.921721       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:42:20.921750       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:42:23.612990       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:42:23.613014       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:42:26.345479       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:42:26.345504       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:42:29.136239       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:42:29.136261       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:42:29.372577       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:42:29.372613       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:42:29.372666       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:42:30.096508       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:42:30.096546       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:42:30.096600       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:42:31.945219       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:42:31.945246       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:42:34.771622       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:42:34.771648       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:42:37.302690       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:42:37.302715       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:42:41.535110       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:42:41.535137       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:42:45.398783       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:42:45.398806       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:42:46.091182       1 reflector.go:536] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: Watch close - *v1beta2.AzDriverNode total 54 items received
I0624 21:42:48.801659       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:42:48.801686       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:42:52.916203       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:42:52.916228       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:42:55.796377       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:42:55.796408       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:42:58.762435       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:42:58.762466       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:42:59.372988       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:42:59.373008       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:42:59.373057       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:43:00.096772       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:43:00.096789       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:43:00.096804       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:43:01.194130       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:43:01.194152       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:43:05.327864       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:43:05.327897       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:43:09.466250       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:43:09.466273       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:43:13.703984       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:43:13.704008       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:43:15.930241       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:43:15.930265       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:43:18.090923       1 reflector.go:536] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: Watch close - *v1beta2.AzVolumeAttachment total 65 items received
I0624 21:43:18.351256       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:43:18.351277       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:43:21.412829       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:43:21.412855       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:43:25.068831       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:43:25.068854       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:43:28.902313       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:43:28.902332       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:43:29.374002       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:43:29.374036       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:43:29.374090       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:43:30.098148       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:43:30.098179       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:43:30.098156       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:43:31.176402       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:43:31.176426       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:43:33.470946       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:43:33.470973       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:43:35.876871       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:43:35.876898       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:43:39.187500       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:43:39.187528       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:43:41.982322       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:43:41.982348       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:43:45.633773       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:43:45.633800       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:43:49.251939       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:43:49.251963       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:43:53.073889       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:43:53.073916       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:43:56.849790       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:43:56.849816       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:43:59.374307       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:43:59.374437       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:43:59.374463       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:44:00.098839       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:44:00.098995       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:44:00.099080       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:44:00.156000       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:44:00.156028       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:44:03.542343       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:44:03.542373       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:44:07.664233       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:44:07.664257       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:44:10.516789       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:44:10.516812       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:44:12.880159       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:44:12.880183       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:44:15.898108       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:44:15.898135       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:44:19.060641       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:44:19.060665       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:44:21.909633       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:44:21.909657       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:44:25.096514       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:44:25.096535       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:44:27.267236       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:44:27.267259       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:44:29.374631       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:44:29.374673       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:44:29.374702       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:44:29.840083       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:44:29.840107       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:44:30.099353       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:44:30.099379       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:44:30.099368       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:44:33.084258       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:44:33.084288       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:44:34.090203       1 reflector.go:536] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: Watch close - *v1beta2.AzVolume total 56 items received
I0624 21:44:35.907429       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:44:35.907455       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:44:38.571926       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:44:38.571951       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:44:41.355375       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:44:41.355400       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:44:41.366885       1 reflector.go:536] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: Watch close - *v1beta2.AzVolumeAttachment total 69 items received
I0624 21:44:45.154990       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:44:45.155014       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:44:48.807081       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:44:48.807104       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:44:51.346318       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:44:51.346344       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:44:54.271072       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:44:54.271102       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:44:57.496632       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:44:57.496661       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:44:59.375230       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:44:59.375253       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:44:59.375288       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:45:00.100055       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:45:00.100081       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:45:00.100066       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:45:01.224351       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:45:01.224383       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:45:04.429492       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:45:04.429583       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:45:07.533438       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:45:07.533466       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:45:09.633270       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:45:09.633295       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:45:12.310894       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:45:12.310921       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:45:14.913286       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:45:14.913309       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:45:16.965324       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:45:16.965350       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:45:19.698742       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:45:19.698771       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:45:23.232428       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:45:23.232454       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:45:27.095637       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:45:27.095660       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:45:29.375465       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:45:29.375495       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:45:29.375546       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:45:30.100318       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:45:30.100345       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:45:30.100335       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:45:31.014230       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:45:31.014255       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:45:33.604899       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:45:33.604928       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:45:36.146115       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:45:36.146142       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:45:39.744842       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:45:39.744874       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:45:43.878568       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:45:43.878596       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:45:46.986966       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:45:46.986994       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:45:50.292991       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:45:50.293013       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:45:54.613960       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:45:54.613984       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:45:57.675603       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:45:57.675630       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:45:59.377247       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:45:59.377267       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:45:59.377310       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:46:00.100517       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:46:00.100637       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:46:00.100694       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:46:00.993335       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:46:00.993361       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:46:04.676851       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:46:04.676875       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:46:08.157921       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:46:08.157943       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:46:10.351863       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:46:10.351888       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:46:12.662028       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:46:12.662053       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:46:14.963264       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:46:14.963290       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:46:19.075999       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:46:19.076028       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:46:21.414911       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:46:21.414935       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:46:23.654654       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:46:23.654681       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:46:27.997523       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:46:27.997547       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:46:29.377853       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:46:29.377927       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:46:29.377927       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:46:30.100762       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:46:30.100792       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:46:30.100823       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:46:32.023830       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:46:32.023852       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:46:32.374984       1 reflector.go:536] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: Watch close - *v1beta2.AzDriverNode total 39 items received
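"Watch close - *v1beta2.AzDriverNode total 39 items received" is the reflector noting that the API server ended one watch stream after delivering 39 events; this is routine (servers close watches periodically), and the reflector immediately re-establishes the watch from its last-seen resourceVersion rather than re-listing everything. A hand-rolled sketch of the same list-then-watch pattern with the dynamic client is below; the GroupVersionResource is the same illustrative assumption as in the earlier resync sketch, and the kube-system namespace is likewise assumed.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := dynamic.NewForConfigOrDie(cfg)
	gvr := schema.GroupVersionResource{Group: "disk.csi.azure.com", Version: "v1beta2", Resource: "azdrivernodes"}

	// List once to obtain a resourceVersion to start watching from, as a reflector does.
	list, err := client.Resource(gvr).Namespace("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	// Watch from that resourceVersion; when the server closes the stream the result
	// channel is closed, which is the moment a reflector logs "Watch close".
	w, err := client.Resource(gvr).Namespace("kube-system").Watch(context.Background(),
		metav1.ListOptions{ResourceVersion: list.GetResourceVersion()})
	if err != nil {
		panic(err)
	}
	total := 0
	for range w.ResultChan() {
		total++ // each delivered event counts toward the "total N items received" figure
	}
	fmt.Printf("watch closed after %d events\n", total)
}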
I0624 21:46:34.983378       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:46:34.983401       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:46:38.774986       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:46:38.775018       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:46:41.330528       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:46:41.330557       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:46:44.847497       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:46:44.847524       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:46:47.981532       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:46:47.981558       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:46:50.992741       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:46:50.992768       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:46:54.448717       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:46:54.448743       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:46:56.855567       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:46:56.855587       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:46:59.378139       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:46:59.378175       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:46:59.378128       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:46:59.793509       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:46:59.793535       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:47:00.101882       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:47:00.101892       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:47:00.101905       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:47:03.117531       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:47:03.117554       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:47:05.222136       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:47:05.222162       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:47:08.153557       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:47:08.153582       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:47:11.791283       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:47:11.791312       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:47:13.938276       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:47:13.938301       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:47:16.839050       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:47:16.839076       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:47:18.942368       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:47:18.942397       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:47:23.195531       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:47:23.195559       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:47:26.415630       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:47:26.415653       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:47:29.169692       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:47:29.169721       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:47:29.378938       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:47:29.379004       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:47:29.378945       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:47:30.102927       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:47:30.102952       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:47:30.102989       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:47:32.930775       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:47:32.930801       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:47:35.673397       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:47:35.673419       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:47:39.823389       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:47:39.823415       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:47:43.373058       1 reflector.go:536] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: Watch close - *v1beta2.AzVolume total 24 items received
I0624 21:47:43.557406       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:47:43.557437       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
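When a standby replica logs these acquisition retries for minutes on end, the quickest check is to see who actually holds the kube-system/csi-azuredisk-controller lock and when it last renewed. Assuming the election is backed by a coordination.k8s.io Lease (the lock type is configurable, so this is an assumption), the holder identity can be read directly and compared against the "lock is held by ..." identity in the log; a small sketch:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Read the Lease object assumed to back the election lock.
	lease, err := client.CoordinationV1().Leases("kube-system").Get(context.Background(),
		"csi-azuredisk-controller", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	holder := "<none>"
	if lease.Spec.HolderIdentity != nil {
		holder = *lease.Spec.HolderIdentity
	}
	// The holder should match the identity in the "lock is held by ..." log lines.
	fmt.Printf("holder=%s renewTime=%v\n", holder, lease.Spec.RenewTime)
}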
I0624 21:47:46.751154       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:47:46.751189       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:47:49.964774       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:47:49.964802       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:47:53.112631       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:47:53.112656       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:47:56.476074       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:47:56.476106       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:47:59.167294       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:47:59.167320       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:47:59.379553       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:47:59.379576       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:47:59.379563       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:48:00.103305       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:48:00.103368       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:48:00.103318       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:48:02.483681       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:48:02.483707       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:48:06.794931       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:48:06.794955       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:48:08.941258       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:48:08.941282       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:48:11.034510       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:48:11.034532       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:48:14.999215       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:48:14.999242       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:48:17.265442       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:48:17.265465       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:48:21.484313       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:48:21.484337       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:48:25.208815       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:48:25.208841       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:48:27.522379       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:48:27.522400       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:48:29.380399       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:48:29.380441       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:48:29.380415       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:48:30.104280       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:48:30.104366       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:48:30.104380       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:48:30.211843       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:48:30.211868       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:48:34.405675       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:48:34.405698       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:48:36.553663       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:48:36.553686       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:48:39.634570       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:48:39.634596       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:48:42.312040       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:48:42.312067       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:48:45.390569       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:48:45.390593       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:48:47.824887       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:48:47.824912       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:48:50.766817       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:48:50.766847       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:48:54.968548       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:48:54.968574       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:48:57.864093       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:48:57.864116       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:48:59.380595       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:48:59.380702       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:48:59.380716       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:49:00.105006       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:49:00.105086       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:49:00.105443       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:49:00.850113       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:49:00.850144       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:49:03.897778       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:49:03.897800       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:49:06.624052       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:49:06.624072       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:49:10.865834       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:49:10.865853       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:49:14.168100       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:49:14.168125       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:49:16.094465       1 reflector.go:536] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: Watch close - *v1beta2.AzDriverNode total 47 items received
I0624 21:49:17.288152       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:49:17.288176       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:49:21.645567       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:49:21.645590       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:49:25.751618       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:49:25.751654       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:49:27.957765       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:49:27.957790       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:49:29.382133       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:49:29.382160       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:49:29.382149       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:49:30.105684       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:49:30.105708       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:49:30.105764       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:49:31.155804       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:49:31.155829       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:49:35.126860       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:49:35.126884       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:49:37.396849       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:49:37.396875       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:49:40.465682       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:49:40.465709       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:49:44.789406       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:49:44.789433       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:49:48.818329       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:49:48.818366       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:49:52.578930       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:49:52.578956       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:49:54.938152       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:49:54.938173       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:49:58.358495       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:49:58.358526       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:49:59.382355       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:49:59.382388       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:49:59.382424       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:50:00.106226       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:50:00.106245       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:50:00.106302       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:50:01.064721       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:50:01.064812       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:50:04.309942       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:50:04.309972       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:50:06.862914       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:50:06.862936       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:50:09.032934       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:50:09.032959       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:50:11.530398       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:50:11.530425       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:50:14.405592       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:50:14.405620       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:50:17.043067       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:50:17.043094       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:50:20.550896       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:50:20.550920       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:50:24.147085       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:50:24.147114       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:50:27.838880       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:50:27.838904       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:50:29.383303       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:50:29.383377       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:50:29.383433       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:50:30.107413       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:50:30.107428       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:50:30.107438       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:50:31.537619       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:50:31.537643       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:50:34.603575       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:50:34.603601       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:50:36.945239       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:50:36.945262       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:50:41.075268       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:50:41.075296       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:50:43.559762       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:50:43.559794       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:50:45.669113       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:50:45.669157       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:50:49.748827       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:50:49.748851       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:50:52.707960       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:50:52.707988       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:50:54.721935       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:50:54.721959       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:50:57.168128       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:50:57.168151       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:50:59.383513       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:50:59.383521       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:50:59.383546       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:51:00.108534       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:51:00.108648       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:51:00.108705       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:51:00.470513       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:51:00.470541       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:51:02.849267       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:51:02.849290       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:51:05.321697       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:51:05.321722       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:51:06.093939       1 reflector.go:536] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: Watch close - *v1beta2.AzVolumeAttachment total 78 items received
I0624 21:51:09.189307       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:51:09.189333       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:51:12.754397       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:51:12.754421       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:51:15.389266       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:51:15.389288       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:51:19.082363       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:51:19.082393       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:51:23.479857       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:51:23.479882       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:51:26.871686       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:51:26.871718       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:51:29.384016       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:51:29.384045       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:51:29.384062       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:51:30.108926       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:51:30.109037       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:51:30.109057       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:51:30.431799       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:51:30.431957       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:51:34.278198       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:51:34.278228       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:51:36.459912       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:51:36.459939       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:51:38.725291       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:51:38.725320       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:51:42.779833       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:51:42.779859       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:51:47.018404       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:51:47.018430       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:51:49.857163       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:51:49.857188       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:51:52.132577       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:51:52.132607       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:51:55.349583       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:51:55.349616       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:51:59.384203       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:51:59.384235       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:51:59.384275       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:51:59.436440       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:51:59.436466       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:52:00.109102       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:52:00.109161       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:52:00.109201       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:52:02.004448       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:52:02.004479       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:52:04.589061       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:52:04.589087       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:52:08.403451       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:52:08.403484       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:52:11.752352       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:52:11.752618       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:52:12.375214       1 reflector.go:536] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: Watch close - *v1beta2.AzVolumeAttachment total 90 items received
I0624 21:52:16.123894       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:52:16.123918       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:52:18.602949       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:52:18.602976       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:52:22.940804       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:52:22.940830       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:52:25.518082       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:52:25.518107       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:52:27.775819       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:52:27.775843       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:52:29.384437       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:52:29.384463       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:52:29.384504       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:52:30.109445       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:52:30.109475       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:52:30.109525       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:52:31.347626       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:52:31.347659       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:52:33.553250       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:52:33.553333       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:52:36.215795       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:52:36.215815       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:52:38.608388       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:52:38.608410       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:52:41.820272       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:52:41.820297       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:52:46.222962       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:52:46.222989       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:52:48.979686       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:52:48.979715       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:52:51.390533       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:52:51.390557       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:52:55.540495       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:52:55.540524       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:52:59.253694       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:52:59.253720       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:52:59.385127       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:52:59.385165       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:52:59.385183       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:53:00.109789       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:53:00.109820       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:53:00.109828       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:53:01.572100       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:53:01.572130       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:53:04.488391       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:53:04.488415       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:53:07.043956       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:53:07.044148       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:53:10.996171       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:53:10.996196       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:53:13.685166       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:53:13.685189       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:53:15.770264       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:53:15.770289       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:53:19.808200       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:53:19.808232       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:53:23.987577       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:53:23.987607       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:53:26.376477       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:53:26.376504       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:53:29.287501       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:53:29.287522       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:53:29.385756       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:53:29.385997       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:53:29.386036       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:53:30.109897       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:53:30.109943       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:53:30.110008       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:53:31.414735       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:53:31.414759       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:53:35.448959       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:53:35.448984       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:53:37.625466       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:53:37.625491       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:53:40.336458       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:53:40.336484       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:53:43.673570       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:53:43.673591       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:53:44.098743       1 reflector.go:536] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: Watch close - *v1beta2.AzVolume total 86 items received
I0624 21:53:46.757561       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:53:46.757599       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:53:49.635970       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:53:49.635995       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:53:52.542691       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:53:52.542727       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:53:55.542395       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:53:55.542419       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:53:57.385004       1 reflector.go:536] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: Watch close - *v1beta2.AzDriverNode total 53 items received
I0624 21:53:59.386197       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:53:59.386205       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:53:59.386216       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:53:59.408411       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:53:59.408438       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:54:00.110078       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:54:00.110180       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:54:00.110159       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:54:01.422772       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:54:01.422800       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:54:04.063444       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:54:04.063472       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:54:06.233418       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:54:06.233453       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:54:09.822065       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:54:09.822090       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:54:12.771346       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:54:12.771369       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:54:17.174304       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:54:17.174340       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:54:21.377354       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:54:21.377382       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:54:23.487792       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:54:23.487983       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:54:26.210339       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:54:26.210375       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:54:29.389266       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:54:29.389288       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:54:29.389325       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:54:29.567945       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:54:29.567978       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:54:30.110395       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:54:30.110423       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:54:30.110413       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:54:31.977358       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:54:31.977381       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:54:34.642827       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:54:34.642851       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:54:37.698297       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:54:37.698328       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:54:40.886959       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:54:40.886985       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:54:45.159958       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:54:45.159993       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:54:47.546085       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:54:47.546108       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:54:51.941920       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:54:51.941954       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:54:55.152452       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:54:55.152475       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:54:58.991615       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:54:58.991641       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:54:59.390040       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:54:59.390049       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:54:59.390065       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:55:00.110997       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:55:00.111010       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:55:00.111023       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:55:02.269381       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:55:02.269407       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:55:05.419226       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:55:05.419308       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:55:09.467760       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:55:09.467789       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:55:11.600158       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:55:11.600184       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:55:14.307248       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:55:14.307273       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:55:17.257740       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:55:17.257775       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:55:21.575720       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:55:21.575745       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:55:24.694122       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:55:24.694148       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:55:26.890357       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:55:26.890388       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:55:29.391827       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:55:29.391851       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:55:29.391836       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:55:30.111263       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:55:30.111295       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:55:30.111282       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:55:30.484435       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:55:30.484465       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:55:32.767977       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:55:32.768008       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:55:35.457392       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:55:35.457415       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:55:38.325657       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:55:38.325916       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:55:40.386795       1 reflector.go:536] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: Watch close - *v1beta2.AzVolume total 85 items received
I0624 21:55:42.073392       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:55:42.073417       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:55:44.787914       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:55:44.788127       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:55:46.909635       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:55:46.909674       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:55:51.251529       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:55:51.251554       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:55:54.927833       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:55:54.927856       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:55:58.920852       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:55:58.920872       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:55:59.392671       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:55:59.392693       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:55:59.392732       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:56:00.111492       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:56:00.111521       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:56:00.111501       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:56:01.358243       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:56:01.358267       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:56:05.491681       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:56:05.491705       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:56:08.503676       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:56:08.503717       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:56:10.976058       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:56:10.976084       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:56:15.011349       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:56:15.011386       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:56:19.309574       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:56:19.309607       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:56:21.522851       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:56:21.522874       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:56:25.267675       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:56:25.267698       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:56:27.670713       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:56:27.670740       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:56:29.393243       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:56:29.393324       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:56:29.393350       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:56:30.073696       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:56:30.073721       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:56:30.111667       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:56:30.111733       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:56:30.111825       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:56:32.757464       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:56:32.757486       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:56:35.778461       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:56:35.778488       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:56:38.753390       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:56:38.753477       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:56:41.336957       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:56:41.336984       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:56:44.670509       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:56:44.670540       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:56:47.138681       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:56:47.138713       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:56:49.401762       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:56:49.401793       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:56:52.868301       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:56:52.868325       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:56:56.486033       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:56:56.486057       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:56:59.393446       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:56:59.393465       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:56:59.393455       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:57:00.111908       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:57:00.111948       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:57:00.113042       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:57:00.747516       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:57:00.747538       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:57:04.424756       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:57:04.424784       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:57:07.273331       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:57:07.273357       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:57:10.108918       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:57:10.108944       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:57:13.069865       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:57:13.069891       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:57:16.482096       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:57:16.482127       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:57:20.171861       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:57:20.171885       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:57:21.096864       1 reflector.go:536] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: Watch close - *v1beta2.AzVolumeAttachment total 60 items received
I0624 21:57:23.415652       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:57:23.415676       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:57:26.976613       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:57:26.976643       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:57:29.177297       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:57:29.177321       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:57:29.394073       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:57:29.394107       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:57:29.394093       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:57:30.112008       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:57:30.112064       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:57:30.113210       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:57:32.878089       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:57:32.878116       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:57:35.596351       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:57:35.596377       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:57:39.666363       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:57:39.666390       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:57:43.614103       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:57:43.614129       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:57:47.798918       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:57:47.798943       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:57:50.362454       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:57:50.362479       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:57:53.925480       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:57:53.925503       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:57:58.114953       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:57:58.114977       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:57:59.394248       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:57:59.394258       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:57:59.394268       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:58:00.113120       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:58:00.113167       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:58:00.113288       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:58:02.360202       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:58:02.360219       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:58:06.376948       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:58:06.376972       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:58:08.848726       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:58:08.848752       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:58:10.989462       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:58:10.989482       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:58:13.485851       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:58:13.485874       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:58:15.978602       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:58:15.978628       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:58:18.600151       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:58:18.600175       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:58:21.070954       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:58:21.070982       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:58:23.353313       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:58:23.353338       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:58:25.771029       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:58:25.771054       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:58:27.963842       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:58:27.963868       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:58:29.395207       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:58:29.395240       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:58:29.395228       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:58:30.114409       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:58:30.118016       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:58:30.118062       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:58:30.251518       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:58:30.251543       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:58:32.519490       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:58:32.519513       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:58:36.264995       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:58:36.265028       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:58:39.305175       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:58:39.305202       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:58:42.291523       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:58:42.291544       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:58:44.935458       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:58:44.935485       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:58:47.793428       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:58:47.793450       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:58:50.367568       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:58:50.367592       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:58:52.967217       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:58:52.967242       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:58:57.026453       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:58:57.026485       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:58:59.395462       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:58:59.395594       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:58:59.395615       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:59:00.114511       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:59:00.118805       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:59:00.118823       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:59:01.104781       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:59:01.104807       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:59:04.142306       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:59:04.142332       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:59:07.480738       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:59:07.480764       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:59:10.495814       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:59:10.495851       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:59:12.097253       1 reflector.go:536] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: Watch close - *v1beta2.AzDriverNode total 71 items received
I0624 21:59:13.298140       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:59:13.298163       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:59:17.337046       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:59:17.337075       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:59:20.380146       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:59:20.380238       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:59:22.869544       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:59:22.869568       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:59:27.271151       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:59:27.271366       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:59:29.395686       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:59:29.395768       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:59:29.395764       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:59:30.114654       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:59:30.119940       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:59:30.120027       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:59:30.862852       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:59:30.862873       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:59:33.962922       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:59:33.962945       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:59:36.410259       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:59:36.410287       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:59:39.945306       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:59:39.945331       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:59:42.835549       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:59:42.835574       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:59:46.924705       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:59:46.924731       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:59:50.796536       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:59:50.796565       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:59:52.964482       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:59:52.964505       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:59:57.230590       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:59:57.230614       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 21:59:57.377361       1 reflector.go:536] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: Watch close - *v1beta2.AzVolumeAttachment total 62 items received
I0624 21:59:59.395912       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:59:59.395975       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:59:59.395999       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:59:59.953064       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 21:59:59.953084       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:00:00.115429       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 22:00:00.120768       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 22:00:00.121556       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 22:00:03.138945       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:00:03.138968       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:00:07.214671       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:00:07.214732       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:00:09.623448       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:00:09.623477       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:00:13.594405       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:00:13.594430       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:00:17.764119       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:00:17.764147       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:00:21.390277       1 reflector.go:536] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: Watch close - *v1beta2.AzDriverNode total 44 items received
I0624 22:00:21.898097       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:00:21.898123       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:00:25.154849       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:00:25.154877       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:00:28.933613       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:00:28.933635       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:00:29.396065       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 22:00:29.396101       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 22:00:29.396115       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 22:00:30.115604       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 22:00:30.121904       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 22:00:30.121993       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 22:00:31.908202       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:00:31.908231       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:00:34.661806       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:00:34.661833       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:00:35.102499       1 reflector.go:536] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: Watch close - *v1beta2.AzVolume total 36 items received
I0624 22:00:38.546537       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:00:38.546562       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:00:42.581774       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:00:42.581801       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:00:44.656954       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:00:44.656980       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:00:46.860583       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:00:46.860608       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:00:49.570771       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:00:49.570797       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:00:51.838853       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:00:51.838875       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:00:55.170976       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:00:55.171001       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:00:58.887826       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:00:58.887852       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:00:59.396368       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 22:00:59.396417       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 22:00:59.396447       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 22:01:00.115841       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 22:01:00.123082       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 22:01:00.123157       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 22:01:01.304286       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:01:01.304488       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:01:05.595923       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:01:05.595950       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:01:08.528678       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:01:08.528702       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:01:11.432721       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:01:11.432746       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:01:15.800602       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:01:15.800626       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:01:19.830085       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:01:19.830110       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:01:23.653277       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:01:23.653307       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:01:26.732703       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:01:26.732723       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:01:29.396585       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 22:01:29.396622       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 22:01:29.396663       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 22:01:29.844599       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:01:29.844626       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:01:30.116946       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 22:01:30.124140       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 22:01:30.124155       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 22:01:32.358825       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:01:32.358860       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:01:34.698734       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:01:34.698768       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:01:37.701855       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:01:37.701879       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:01:41.800849       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:01:41.800874       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:01:45.511110       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:01:45.511134       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:01:49.284689       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:01:49.284712       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:01:53.209589       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:01:53.209611       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:01:55.485723       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:01:55.485749       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:01:58.738939       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:01:58.738962       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:01:59.396809       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 22:01:59.396849       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 22:01:59.396891       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 22:02:00.117224       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 22:02:00.124442       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 22:02:00.124456       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 22:02:01.370858       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:02:01.370882       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:02:04.575590       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:02:04.575615       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
I0624 22:02:07.559294       1 leaderelection.go:352] lock is held by k8s-master-11903559-0_a5bb1fb9-cb9a-43a2-a6a0-5e7d7e36e452 and has not yet expired
I0624 22:02:07.559318       1 leaderelection.go:253] failed to acquire lease kube-system/csi-azuredisk-controller
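Editor's note: the long run of "lock is held by k8s-master-11903559-0_... and has not yet expired" / "failed to acquire lease kube-system/csi-azuredisk-controller" pairs above comes from client-go's leader election in a standby controller replica: it keeps retrying the Lease while another replica holds it, and the interleaved "forcing resync" lines are the shared informers' periodic resync. This is expected behaviour for an HA controller deployment, not itself a failure. The following is a minimal Go sketch of the leader-election loop that produces messages like these; the lock name and namespace are taken from the log, while the timing values and callback bodies are illustrative and not the driver's actual configuration.

package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog/v2"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Fatalf("building in-cluster config: %v", err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname()

	// Lock name and namespace mirror the lease shown in the log:
	// kube-system/csi-azuredisk-controller.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "csi-azuredisk-controller", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // illustrative timings, not the driver's values
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				klog.Info("acquired lease, starting controllers")
			},
			OnStoppedLeading: func() {
				klog.Info("lost lease, exiting")
			},
		},
	})
}

With this kind of configuration a standby simply retries on every RetryPeriod until the holder's lease expires or is released, which matches the steady retry cadence in the lines above.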
dumping logs for kube-system/csi-azuredisk-controller-6f554768d6-gt66f/azuredisk
W0624 21:34:54.602356       1 main.go:80] nodeid is empty
I0624 21:34:54.604673       1 main.go:130] set up prometheus server on [::]:29604
W0624 21:34:54.604697       1 azuredisk_v2.go:117] Using DriverV2
I0624 21:34:54.604845       1 azuredisk_v2.go:163] 
DRIVER INFORMATION:
... skipping 14 lines ...
I0624 21:34:54.605944       1 reflector.go:255] Listing and watching *v1beta2.AzVolume from sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117
I0624 21:34:54.606166       1 reflector.go:219] Starting reflector *v1beta2.AzDriverNode (30s) from sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117
I0624 21:34:54.606177       1 reflector.go:255] Listing and watching *v1beta2.AzDriverNode from sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117
I0624 21:34:54.706232       1 shared_informer.go:285] caches populated
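Editor's note: the "(30s)" printed when each reflector starts is the resync period of the shared informer factory for the driver's CRDs (AzVolume, AzVolumeAttachment, AzDriverNode); it is what drives the "forcing resync" lines that recur every 30 seconds in the earlier part of this log, and "caches populated" marks the initial cache sync completing. A minimal sketch of building such a factory with the generated clients follows; the clientset import path is assumed from the standard code-generator layout, only the informers path actually appears in the log.

package main

import (
	"time"

	"k8s.io/client-go/rest"
	"k8s.io/klog/v2"

	azdisk "sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/clientset/versioned"
	azdiskinformers "sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions"
)

// newAzDiskInformerFactory builds a shared informer factory for the azuredisk CRDs.
// The 30*time.Second resync period matches the "(30s)" printed at reflector start-up.
func newAzDiskInformerFactory(cfg *rest.Config) azdiskinformers.SharedInformerFactory {
	client, err := azdisk.NewForConfig(cfg)
	if err != nil {
		klog.Fatalf("building azuredisk clientset: %v", err)
	}
	return azdiskinformers.NewSharedInformerFactory(client, 30*time.Second)
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Fatalf("building in-cluster config: %v", err)
	}
	factory := newAzDiskInformerFactory(cfg)

	stopCh := make(chan struct{})
	defer close(stopCh)
	factory.Start(stopCh)            // starts the reflectors ("Listing and watching ...")
	factory.WaitForCacheSync(stopCh) // corresponds to "caches populated"
	klog.Info("informer caches populated")
}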
I0624 21:34:54.706282       1 azuredisk_v2.go:188] driver userAgent: disk.csi.azure.com/latest-v2-5f5939f86db107e671b4778e00fd0672597e49a8 gc/go1.18.3 (amd64-linux) e2e-test
I0624 21:34:54.706302       1 azure_disk_utils.go:474] reading cloud config from secret kube-system/azure-cloud-provider
I0624 21:34:54.709745       1 azure_disk_utils.go:481] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found
I0624 21:34:54.709767       1 azure_disk_utils.go:486] could not read cloud config from secret kube-system/azure-cloud-provider
I0624 21:34:54.709776       1 azure_disk_utils.go:496] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json
I0624 21:34:54.709928       1 azure_disk_utils.go:504] read cloud config from file: /etc/kubernetes/azure.json successfully
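Editor's note: the four lines above show the cloud-config fallback order: the driver first tries the kube-system/azure-cloud-provider Secret, and when that Secret is absent it falls back to the file named by AZURE_CREDENTIAL_FILE, defaulting to /etc/kubernetes/azure.json. A rough Go sketch of that order follows; the helper name and the "cloud-config" Secret key are assumptions used for illustration, not the driver's exact code.

package cloudconfig

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// loadCloudConfig mirrors the fallback order shown in the log: Secret first, file second.
func loadCloudConfig(ctx context.Context, client kubernetes.Interface) ([]byte, error) {
	secret, err := client.CoreV1().Secrets("kube-system").Get(ctx, "azure-cloud-provider", metav1.GetOptions{})
	if err == nil {
		if cfg, ok := secret.Data["cloud-config"]; ok { // key name is an assumption
			return cfg, nil
		}
		return nil, fmt.Errorf("secret azure-cloud-provider has no cloud-config key")
	}

	// Secret missing: fall back to the credential file path from the environment.
	path := os.Getenv("AZURE_CREDENTIAL_FILE")
	if path == "" {
		path = "/etc/kubernetes/azure.json"
	}
	return os.ReadFile(path)
}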
I0624 21:34:54.710664       1 azure_auth.go:245] Using AzurePublicCloud environment
I0624 21:34:54.710790       1 azure_auth.go:130] azure: using client_id+client_secret to retrieve access token
I0624 21:34:54.710828       1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000
... skipping 26 lines ...
I0624 21:34:54.711653       1 azure_vmasclient.go:73] Azure AvailabilitySetsClient  (write ops) using rate limit config: QPS=100, bucket=1000
I0624 21:34:54.711738       1 azure.go:1003] attach/detach disk operation rate limit QPS: 1.333333, Bucket: 80
I0624 21:34:54.711898       1 azuredisk_v2.go:214] disable UseInstanceMetadata for controller
I0624 21:34:54.711919       1 azuredisk_v2.go:230] cloud: AzurePublicCloud, location: canadacentral, rg: kubetest-ybmpahy2, VMType: standard, PrimaryScaleSetName: , PrimaryAvailabilitySetName: agentpool1-availabilitySet-11903559, DisableAvailabilitySetNodes: false
I0624 21:34:54.711926       1 skus.go:121] NewNodeInfo: Starting to populate node and disk sku information.
I0624 21:34:54.945390       1 azure_armclient.go:153] Send.sendRequest original response: {
  "error": {
    "code": "BadRequest",
    "message": "The request URL is not valid."
  }
}
I0624 21:34:54.945412       1 azure_armclient.go:158] Send.sendRequest: response body does not contain ResourceGroupNotFound error code. Skip retrying regional host
I0624 21:34:54.945438       1 azure_vmclient.go:133] Received error in vm.get.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/virtualMachines/, error: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 400, RawError: {
  "error": {
    "code": "BadRequest",
    "message": "The request URL is not valid."
  }
}
E0624 21:34:54.945820       1 azure_standard.go:588] as.GetInstanceTypeByNodeName() failed: as.getVirtualMachine() err=Retriable: false, RetryAfter: 0s, HTTPStatusCode: 400, RawError: {
  "error": {
    "code": "BadRequest",
    "message": "The request URL is not valid."
  }
}
E0624 21:34:54.945852       1 azuredisk_v2.go:238] Failed to get node info. Error: NewNodeInfo: Failed to get instance type from Azure cloud provider, nodeName: , error: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 400, RawError: {
  "error": {
    "code": "BadRequest",
    "message": "The request URL is not valid."
  }
}
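[editor's note] The three 400 responses above all trace back to the empty node ID noted at startup ("nodeid is empty"): the VM lookup resource ID ends in ".../virtualMachines/" with no name, which ARM rejects as an invalid request URL; the controller plugin logs the failure and continues. A small illustrative guard (vmResourceID is a hypothetical helper, not the driver's function) that would skip building such a malformed request:

package main

import (
	"fmt"
	"strings"
)

// vmResourceID builds the ARM ID shown in the log; with an empty VM name the URL would end
// in ".../virtualMachines/" and ARM would reject it with 400 "The request URL is not valid".
func vmResourceID(subscriptionID, resourceGroup, vmName string) (string, error) {
	if strings.TrimSpace(vmName) == "" {
		return "", fmt.Errorf("vmName is empty; refusing to build an invalid resource ID")
	}
	return fmt.Sprintf(
		"/subscriptions/%s/resourceGroups/%s/providers/Microsoft.Compute/virtualMachines/%s",
		subscriptionID, resourceGroup, vmName), nil
}

func main() {
	// An empty name reproduces the shape of the failed request in the log above.
	if _, err := vmResourceID("0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e", "kubetest-ybmpahy2", ""); err != nil {
		fmt.Println("skip VM lookup:", err)
	}
}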
I0624 21:34:54.945986       1 mount_linux.go:208] Detected OS without systemd
I0624 21:34:54.945997       1 driver.go:81] Enabling controller service capability: CREATE_DELETE_VOLUME
... skipping 168 lines ...
I0624 21:34:55.595001       1 common.go:603]  "msg"="Storing pod kube-scheduler-k8s-master-11903559-0 and claim [] to podToClaimsMap map." "disk.csi.azure.com/request-id"="7d3af6b7-f405-11ec-88aa-0022483e7c98" 
I0624 21:34:55.595029       1 pod.go:91]  "msg"="Creating replicas for pod kube-system/kube-scheduler-k8s-master-11903559-0." "disk.csi.azure.com/request-id"="7d3af6b7-f405-11ec-88aa-0022483e7c98" "disk.csi/azure.com/pod-name"="kube-system/kube-scheduler-k8s-master-11903559-0" 
I0624 21:34:55.595042       1 common.go:439]  "msg"="Getting requested volumes for pod (kube-system/kube-scheduler-k8s-master-11903559-0)." "disk.csi.azure.com/request-id"="7d3af6b7-f405-11ec-88aa-0022483e7c98" "disk.csi/azure.com/pod-name"="kube-system/kube-scheduler-k8s-master-11903559-0" 
I0624 21:34:55.595055       1 pod.go:99]  "msg"="Pod kube-system/kube-scheduler-k8s-master-11903559-0 has 0 volumes. Volumes: []" "disk.csi.azure.com/request-id"="7d3af6b7-f405-11ec-88aa-0022483e7c98" "disk.csi/azure.com/pod-name"="kube-system/kube-scheduler-k8s-master-11903559-0" 
I0624 21:34:55.595074       1 pod.go:89]  "msg"="Workflow completed with success." "disk.csi.azure.com/request-id"="7d3af6b7-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcilePod).createReplicas" "disk.csi/azure.com/pod-name"="kube-system/kube-scheduler-k8s-master-11903559-0" "latency"=52001 
I0624 21:34:55.595087       1 common.go:544]  "msg"="Adding pod csi-azuredisk-controller-6f554768d6-gt66f to shared map with keyName kube-system/csi-azuredisk-controller-6f554768d6-gt66f." "disk.csi.azure.com/request-id"="7d3af6b7-f405-11ec-88aa-0022483e7c98" 
I0624 21:34:55.595505       1 common.go:550]  "msg"="Pod spec of pod csi-azuredisk-controller-6f554768d6-gt66f is: {Volumes:[{Name:socket-dir VolumeSource:{HostPath:nil EmptyDir:&EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,} GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:azure-cred VolumeSource:{HostPath:&HostPathVolumeSource{Path:/etc/kubernetes/,Type:*DirectoryOrCreate,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:kube-api-access-6zgnp VolumeSource:{HostPath:nil EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,} PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}}] InitContainers:[] Containers:[{Name:csi-provisioner-disk Image:mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.1.0 Command:[] Args:[--feature-gates=Topology=true --csi-address=$(ADDRESS) --v=2 --timeout=15s --leader-election --leader-election-namespace=kube-system --worker-threads=40 --extra-create-metadata=true --strict-topology=true] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ADDRESS Value:/csi/csi.sock ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-6zgnp ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:csi-attacher 
Image:mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.4.0 Command:[] Args:[-v=2 -csi-address=$(ADDRESS) -timeout=600s -leader-election --leader-election-namespace=kube-system -worker-threads=500] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ADDRESS Value:/csi/csi.sock ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-6zgnp ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:csi-snapshotter Image:mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1 Command:[] Args:[-csi-address=$(ADDRESS) -leader-election --leader-election-namespace=kube-system -v=2] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ADDRESS Value:/csi/csi.sock ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:104857600 scale:0} d:{Dec:<nil>} s:100Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-6zgnp ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:csi-resizer Image:mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.4.0 Command:[] Args:[-csi-address=$(ADDRESS) -v=2 -leader-election --leader-election-namespace=kube-system -handle-volume-inuse-error=false -feature-gates=RecoverVolumeExpansionFailure=true -timeout=240s] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ADDRESS Value:/csi/csi.sock ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-6zgnp ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:liveness-probe Image:mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.5.0 Command:[] Args:[--csi-address=/csi/csi.sock --probe-timeout=3s --health-port=29602 --v=2] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[memory:{i:{value:104857600 scale:0} d:{Dec:<nil>} s:100Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m 
Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-6zgnp ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:azuredisk Image:k8sprow.azurecr.io/azuredisk-csi:latest-v2-5f5939f86db107e671b4778e00fd0672597e49a8 Command:[] Args:[--v=5 --endpoint=$(CSI_ENDPOINT) --metrics-address=0.0.0.0:29604 --is-controller-plugin=true --enable-perf-optimization=true --disable-avset-nodes=false --drivername=disk.csi.azure.com --driver-object-namespace=azure-disk-csi --leader-election-namespace=kube-system --cloud-config-secret-name=azure-cloud-provider --cloud-config-secret-namespace=kube-system --custom-user-agent= --user-agent-suffix=e2e-test --allow-empty-cloud-config=false] WorkingDir: Ports:[{Name:healthz HostPort:29602 ContainerPort:29602 Protocol:TCP HostIP:} {Name:metrics HostPort:29604 ContainerPort:29604 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:AZURE_CREDENTIAL_FILE Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:azure-cred-file,},Key:path,Optional:*true,},SecretKeyRef:nil,}} {Name:CSI_ENDPOINT Value:unix:///csi/csi.sock ValueFrom:nil} {Name:AZURE_GO_SDK_LOG_LEVEL Value: ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:azure-cred ReadOnly:false MountPath:/etc/kubernetes/ SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-6zgnp ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{1 0 healthz},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,} ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false}] EphemeralContainers:[] RestartPolicy:Always TerminationGracePeriodSeconds:0xc000853ba8 ActiveDeadlineSeconds:<nil> DNSPolicy:ClusterFirst NodeSelector:map[kubernetes.io/os:linux] ServiceAccountName:csi-azuredisk-controller-sa DeprecatedServiceAccount:csi-azuredisk-controller-sa AutomountServiceAccountToken:<nil> NodeName:k8s-master-11903559-0 HostNetwork:true HostPID:false HostIPC:false ShareProcessNamespace:<nil> SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} ImagePullSecrets:[] Hostname: Subdomain: Affinity:nil SchedulerName:default-scheduler 
Tolerations:[{Key:node-role.kubernetes.io/master Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:node-role.kubernetes.io/controlplane Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:node-role.kubernetes.io/control-plane Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:node.kubernetes.io/not-ready Operator:Exists Value: Effect:NoExecute TolerationSeconds:0xc000853bb0} {Key:node.kubernetes.io/unreachable Operator:Exists Value: Effect:NoExecute TolerationSeconds:0xc000853bb8}] HostAliases:[] PriorityClassName:system-cluster-critical Priority:0xc000853bc0 DNSConfig:nil ReadinessGates:[] RuntimeClassName:<nil> EnableServiceLinks:0xc000853bc4 PreemptionPolicy:0xc00010d570 Overhead:map[] TopologySpreadConstraints:[] SetHostnameAsFQDN:<nil> OS:nil}. With volumes: [{Name:socket-dir VolumeSource:{HostPath:nil EmptyDir:&EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,} GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:azure-cred VolumeSource:{HostPath:&HostPathVolumeSource{Path:/etc/kubernetes/,Type:*DirectoryOrCreate,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:kube-api-access-6zgnp VolumeSource:{HostPath:nil EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,} PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}}]" "disk.csi.azure.com/request-id"="7d3af6b7-f405-11ec-88aa-0022483e7c98" 
I0624 21:34:55.595571       1 common.go:580]  "msg"="Pod csi-azuredisk-controller-6f554768d6-gt66f: Skipping Volume {socket-dir {nil &EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}. No persistent volume exists." "disk.csi.azure.com/request-id"="7d3af6b7-f405-11ec-88aa-0022483e7c98" 
I0624 21:34:55.595610       1 common.go:580]  "msg"="Pod csi-azuredisk-controller-6f554768d6-gt66f: Skipping Volume {azure-cred {&HostPathVolumeSource{Path:/etc/kubernetes/,Type:*DirectoryOrCreate,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}. No persistent volume exists." "disk.csi.azure.com/request-id"="7d3af6b7-f405-11ec-88aa-0022483e7c98" 
I0624 21:34:55.595689       1 common.go:580]  "msg"="Pod csi-azuredisk-controller-6f554768d6-gt66f: Skipping Volume {kube-api-access-6zgnp {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil &ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,} nil nil nil nil nil}}. No persistent volume exists." "disk.csi.azure.com/request-id"="7d3af6b7-f405-11ec-88aa-0022483e7c98" 
I0624 21:34:55.595703       1 common.go:603]  "msg"="Storing pod csi-azuredisk-controller-6f554768d6-gt66f and claim [] to podToClaimsMap map." "disk.csi.azure.com/request-id"="7d3af6b7-f405-11ec-88aa-0022483e7c98" 
I0624 21:34:55.595720       1 pod.go:91]  "msg"="Creating replicas for pod kube-system/csi-azuredisk-controller-6f554768d6-gt66f." "disk.csi.azure.com/request-id"="7d3af6b7-f405-11ec-88aa-0022483e7c98" "disk.csi/azure.com/pod-name"="kube-system/csi-azuredisk-controller-6f554768d6-gt66f" 
I0624 21:34:55.595732       1 common.go:439]  "msg"="Getting requested volumes for pod (kube-system/csi-azuredisk-controller-6f554768d6-gt66f)." "disk.csi.azure.com/request-id"="7d3af6b7-f405-11ec-88aa-0022483e7c98" "disk.csi/azure.com/pod-name"="kube-system/csi-azuredisk-controller-6f554768d6-gt66f" 
... skipping 37 lines ...
I0624 21:34:55.600700       1 common.go:603]  "msg"="Storing pod csi-azuredisk-scheduler-extender-9bdb8968d-j29wn and claim [] to podToClaimsMap map." "disk.csi.azure.com/request-id"="7d3af6b7-f405-11ec-88aa-0022483e7c98" 
I0624 21:34:55.600741       1 pod.go:91]  "msg"="Creating replicas for pod kube-system/csi-azuredisk-scheduler-extender-9bdb8968d-j29wn." "disk.csi.azure.com/request-id"="7d3af6b7-f405-11ec-88aa-0022483e7c98" "disk.csi/azure.com/pod-name"="kube-system/csi-azuredisk-scheduler-extender-9bdb8968d-j29wn" 
I0624 21:34:55.600755       1 common.go:439]  "msg"="Getting requested volumes for pod (kube-system/csi-azuredisk-scheduler-extender-9bdb8968d-j29wn)." "disk.csi.azure.com/request-id"="7d3af6b7-f405-11ec-88aa-0022483e7c98" "disk.csi/azure.com/pod-name"="kube-system/csi-azuredisk-scheduler-extender-9bdb8968d-j29wn" 
I0624 21:34:55.600833       1 pod.go:99]  "msg"="Pod kube-system/csi-azuredisk-scheduler-extender-9bdb8968d-j29wn has 0 volumes. Volumes: []" "disk.csi.azure.com/request-id"="7d3af6b7-f405-11ec-88aa-0022483e7c98" "disk.csi/azure.com/pod-name"="kube-system/csi-azuredisk-scheduler-extender-9bdb8968d-j29wn" 
I0624 21:34:55.600893       1 pod.go:89]  "msg"="Workflow completed with success." "disk.csi.azure.com/request-id"="7d3af6b7-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcilePod).createReplicas" "disk.csi/azure.com/pod-name"="kube-system/csi-azuredisk-scheduler-extender-9bdb8968d-j29wn" "latency"=131202 
I0624 21:34:55.600908       1 common.go:544]  "msg"="Adding pod csi-azuredisk-controller-6f554768d6-fq92d to shared map with keyName kube-system/csi-azuredisk-controller-6f554768d6-fq92d." "disk.csi.azure.com/request-id"="7d3af6b7-f405-11ec-88aa-0022483e7c98" 
I0624 21:34:55.601636       1 common.go:550]  "msg"="Pod spec of pod csi-azuredisk-controller-6f554768d6-fq92d is: {Volumes:[{Name:socket-dir VolumeSource:{HostPath:nil EmptyDir:&EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,} GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:azure-cred VolumeSource:{HostPath:&HostPathVolumeSource{Path:/etc/kubernetes/,Type:*DirectoryOrCreate,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:kube-api-access-zmzt2 VolumeSource:{HostPath:nil EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,} PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}}] InitContainers:[] Containers:[{Name:csi-provisioner-disk Image:mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.1.0 Command:[] Args:[--feature-gates=Topology=true --csi-address=$(ADDRESS) --v=2 --timeout=15s --leader-election --leader-election-namespace=kube-system --worker-threads=40 --extra-create-metadata=true --strict-topology=true] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ADDRESS Value:/csi/csi.sock ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-zmzt2 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:csi-attacher 
Image:mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.4.0 Command:[] Args:[-v=2 -csi-address=$(ADDRESS) -timeout=600s -leader-election --leader-election-namespace=kube-system -worker-threads=500] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ADDRESS Value:/csi/csi.sock ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-zmzt2 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:csi-snapshotter Image:mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1 Command:[] Args:[-csi-address=$(ADDRESS) -leader-election --leader-election-namespace=kube-system -v=2] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ADDRESS Value:/csi/csi.sock ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:104857600 scale:0} d:{Dec:<nil>} s:100Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-zmzt2 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:csi-resizer Image:mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.4.0 Command:[] Args:[-csi-address=$(ADDRESS) -v=2 -leader-election --leader-election-namespace=kube-system -handle-volume-inuse-error=false -feature-gates=RecoverVolumeExpansionFailure=true -timeout=240s] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ADDRESS Value:/csi/csi.sock ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-zmzt2 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:liveness-probe Image:mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.5.0 Command:[] Args:[--csi-address=/csi/csi.sock --probe-timeout=3s --health-port=29602 --v=2] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[memory:{i:{value:104857600 scale:0} d:{Dec:<nil>} s:100Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m 
Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-zmzt2 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:azuredisk Image:k8sprow.azurecr.io/azuredisk-csi:latest-v2-5f5939f86db107e671b4778e00fd0672597e49a8 Command:[] Args:[--v=5 --endpoint=$(CSI_ENDPOINT) --metrics-address=0.0.0.0:29604 --is-controller-plugin=true --enable-perf-optimization=true --disable-avset-nodes=false --drivername=disk.csi.azure.com --driver-object-namespace=azure-disk-csi --leader-election-namespace=kube-system --cloud-config-secret-name=azure-cloud-provider --cloud-config-secret-namespace=kube-system --custom-user-agent= --user-agent-suffix=e2e-test --allow-empty-cloud-config=false] WorkingDir: Ports:[{Name:healthz HostPort:29602 ContainerPort:29602 Protocol:TCP HostIP:} {Name:metrics HostPort:29604 ContainerPort:29604 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:AZURE_CREDENTIAL_FILE Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:azure-cred-file,},Key:path,Optional:*true,},SecretKeyRef:nil,}} {Name:CSI_ENDPOINT Value:unix:///csi/csi.sock ValueFrom:nil} {Name:AZURE_GO_SDK_LOG_LEVEL Value: ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:azure-cred ReadOnly:false MountPath:/etc/kubernetes/ SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-zmzt2 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{1 0 healthz},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,} ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false}] EphemeralContainers:[] RestartPolicy:Always TerminationGracePeriodSeconds:0xc000853c98 ActiveDeadlineSeconds:<nil> DNSPolicy:ClusterFirst NodeSelector:map[kubernetes.io/os:linux] ServiceAccountName:csi-azuredisk-controller-sa DeprecatedServiceAccount:csi-azuredisk-controller-sa AutomountServiceAccountToken:<nil> NodeName:k8s-agentpool1-11903559-0 HostNetwork:true HostPID:false HostIPC:false ShareProcessNamespace:<nil> SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} ImagePullSecrets:[] Hostname: Subdomain: Affinity:nil 
SchedulerName:default-scheduler Tolerations:[{Key:node-role.kubernetes.io/master Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:node-role.kubernetes.io/controlplane Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:node-role.kubernetes.io/control-plane Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:node.kubernetes.io/not-ready Operator:Exists Value: Effect:NoExecute TolerationSeconds:0xc000853ca0} {Key:node.kubernetes.io/unreachable Operator:Exists Value: Effect:NoExecute TolerationSeconds:0xc000853ca8}] HostAliases:[] PriorityClassName:system-cluster-critical Priority:0xc000853cb0 DNSConfig:nil ReadinessGates:[] RuntimeClassName:<nil> EnableServiceLinks:0xc000853cb4 PreemptionPolicy:0xc00010d9b0 Overhead:map[] TopologySpreadConstraints:[] SetHostnameAsFQDN:<nil> OS:nil}. With volumes: [{Name:socket-dir VolumeSource:{HostPath:nil EmptyDir:&EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,} GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:azure-cred VolumeSource:{HostPath:&HostPathVolumeSource{Path:/etc/kubernetes/,Type:*DirectoryOrCreate,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:kube-api-access-zmzt2 VolumeSource:{HostPath:nil EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,} PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}}]" "disk.csi.azure.com/request-id"="7d3af6b7-f405-11ec-88aa-0022483e7c98" 
I0624 21:34:55.601837       1 common.go:580]  "msg"="Pod csi-azuredisk-controller-6f554768d6-fq92d: Skipping Volume {socket-dir {nil &EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}. No persistent volume exists." "disk.csi.azure.com/request-id"="7d3af6b7-f405-11ec-88aa-0022483e7c98" 
I0624 21:34:55.601876       1 common.go:580]  "msg"="Pod csi-azuredisk-controller-6f554768d6-fq92d: Skipping Volume {azure-cred {&HostPathVolumeSource{Path:/etc/kubernetes/,Type:*DirectoryOrCreate,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}. No persistent volume exists." "disk.csi.azure.com/request-id"="7d3af6b7-f405-11ec-88aa-0022483e7c98" 
I0624 21:34:55.601920       1 common.go:580]  "msg"="Pod csi-azuredisk-controller-6f554768d6-fq92d: Skipping Volume {kube-api-access-zmzt2 {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil &ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,} nil nil nil nil nil}}. No persistent volume exists." "disk.csi.azure.com/request-id"="7d3af6b7-f405-11ec-88aa-0022483e7c98" 
I0624 21:34:55.602065       1 common.go:603]  "msg"="Storing pod csi-azuredisk-controller-6f554768d6-fq92d and claim [] to podToClaimsMap map." "disk.csi.azure.com/request-id"="7d3af6b7-f405-11ec-88aa-0022483e7c98" 
I0624 21:34:55.602088       1 pod.go:91]  "msg"="Creating replicas for pod kube-system/csi-azuredisk-controller-6f554768d6-fq92d." "disk.csi.azure.com/request-id"="7d3af6b7-f405-11ec-88aa-0022483e7c98" "disk.csi/azure.com/pod-name"="kube-system/csi-azuredisk-controller-6f554768d6-fq92d" 
I0624 21:34:55.602104       1 common.go:439]  "msg"="Getting requested volumes for pod (kube-system/csi-azuredisk-controller-6f554768d6-fq92d)." "disk.csi.azure.com/request-id"="7d3af6b7-f405-11ec-88aa-0022483e7c98" "disk.csi/azure.com/pod-name"="kube-system/csi-azuredisk-controller-6f554768d6-fq92d" 
... skipping 155 lines ...
I0624 21:34:55.616673       1 pod.go:91]  "msg"="Creating replicas for pod kube-system/kube-proxy-np59h." "disk.csi.azure.com/request-id"="7d3af6b7-f405-11ec-88aa-0022483e7c98" "disk.csi/azure.com/pod-name"="kube-system/kube-proxy-np59h" 
I0624 21:34:55.616685       1 common.go:439]  "msg"="Getting requested volumes for pod (kube-system/kube-proxy-np59h)." "disk.csi.azure.com/request-id"="7d3af6b7-f405-11ec-88aa-0022483e7c98" "disk.csi/azure.com/pod-name"="kube-system/kube-proxy-np59h" 
I0624 21:34:55.616699       1 pod.go:99]  "msg"="Pod kube-system/kube-proxy-np59h has 0 volumes. Volumes: []" "disk.csi.azure.com/request-id"="7d3af6b7-f405-11ec-88aa-0022483e7c98" "disk.csi/azure.com/pod-name"="kube-system/kube-proxy-np59h" 
I0624 21:34:55.616736       1 pod.go:89]  "msg"="Workflow completed with success." "disk.csi.azure.com/request-id"="7d3af6b7-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcilePod).createReplicas" "disk.csi/azure.com/pod-name"="kube-system/kube-proxy-np59h" "latency"=44401 
I0624 21:34:55.616759       1 pod.go:150]  "msg"="Workflow completed with success." "disk.csi.azure.com/request-id"="7d3af6b7-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcilePod).Recover" "latency"=124769794 
I0624 21:34:55.616777       1 azuredisk_v2.go:407]  "msg"="Workflow completed with success." "disk.csi.azure.com/request-id"="7d3af6b7-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).StartControllersAndDieOnExit.func1" "latency"=149483849 
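[editor's note] The recovery pass above walks every pod, records its claims in podToClaimsMap, and skips volumes that are not backed by a PersistentVolumeClaim ("No persistent volume exists"); only pods with claims get replica attachments created. A minimal sketch of that pod-to-claims extraction (claimsForPod is an illustrative helper, not the driver's implementation):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// claimsForPod returns the PVC names a pod references. Volumes without a
// PersistentVolumeClaim source (hostPath, emptyDir, projected, ...) are skipped,
// matching the "Skipping Volume ... No persistent volume exists" lines above.
func claimsForPod(pod *corev1.Pod) []string {
	var claims []string
	for _, vol := range pod.Spec.Volumes {
		if vol.PersistentVolumeClaim != nil {
			claims = append(claims, vol.PersistentVolumeClaim.ClaimName)
		}
	}
	return claims
}

func main() {
	pod := &corev1.Pod{}
	pod.Name = "example"
	pod.Spec.Volumes = []corev1.Volume{
		{Name: "socket-dir", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		{Name: "data", VolumeSource: corev1.VolumeSource{
			PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ClaimName: "pvc-data"},
		}},
	}
	fmt.Printf("pod %s claims: %v\n", pod.Name, claimsForPod(pod)) // -> [pvc-data]
}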
I0624 21:34:55.651307       1 node_availability.go:59] AzDiskControllerManager "msg"="Node is now available. Will requeue failed replica creation requests." "controller"="nodeavailability" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "namespace"="azure-disk-csi" "partition"="csi-azuredisk-controller" 
I0624 21:34:55.651571       1 common.go:544]  "msg"="Adding pod kube-apiserver-k8s-master-11903559-0 to shared map with keyName kube-system/kube-apiserver-k8s-master-11903559-0."  
I0624 21:34:55.651585       1 common.go:2018]  "msg"="Workflow completed with success." "disk.csi.azure.com/request-id"="7d5714be-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*SharedState).tryCreateFailedReplicas" "latency"=11400 
I0624 21:34:55.651325       1 node_availability.go:59] AzDiskControllerManager "msg"="Node is now available. Will requeue failed replica creation requests." "controller"="nodeavailability" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "namespace"="azure-disk-csi" "partition"="csi-azuredisk-controller" 
I0624 21:34:55.652005       1 common.go:550]  "msg"="Pod spec of pod kube-apiserver-k8s-master-11903559-0 is: {Volumes:[{Name:etc-kubernetes VolumeSource:{HostPath:&HostPathVolumeSource{Path:/etc/kubernetes,Type:*,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:etc-ssl-certs VolumeSource:{HostPath:&HostPathVolumeSource{Path:/etc/ssl/certs,Type:*,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:var-lib-kubelet VolumeSource:{HostPath:&HostPathVolumeSource{Path:/var/lib/kubelet,Type:*,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:msi VolumeSource:{HostPath:&HostPathVolumeSource{Path:/var/lib/waagent/ManagedIdentity-Settings,Type:*,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:sock VolumeSource:{HostPath:&HostPathVolumeSource{Path:/opt,Type:*,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:auditlog VolumeSource:{HostPath:&HostPathVolumeSource{Path:/var/log/kubeaudit,Type:*,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}}] InitContainers:[] Containers:[{Name:kube-apiserver Image:mcr.microsoft.com/oss/kubernetes/kube-apiserver:v1.23.8 Command:[kube-apiserver] Args:[--advertise-address=10.240.255.5 --allow-privileged=true --anonymous-auth=false --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/log/kubeaudit/audit.log --audit-policy-file=/etc/kubernetes/addons/audit-policy.yaml --authorization-mode=Node,RBAC --bind-address=0.0.0.0 
--client-ca-file=/etc/kubernetes/certs/ca.crt --cloud-config=/etc/kubernetes/azure.json --cloud-provider=azure --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,AlwaysPullImages --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/certs/ca.crt --etcd-certfile=/etc/kubernetes/certs/etcdclient.crt --etcd-keyfile=/etc/kubernetes/certs/etcdclient.key --etcd-servers=https://127.0.0.1:2379 --feature-gates= --kubelet-client-certificate=/etc/kubernetes/certs/client.crt --kubelet-client-key=/etc/kubernetes/certs/client.key --profiling=false --proxy-client-cert-file=/etc/kubernetes/certs/proxy.crt --proxy-client-key-file=/etc/kubernetes/certs/proxy.key --requestheader-allowed-names= --requestheader-client-ca-file=/etc/kubernetes/certs/proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/certs/apiserver.key --service-account-lookup=true --service-account-signing-key-file=/etc/kubernetes/certs/apiserver.key --service-cluster-ip-range=10.0.0.0/16 --storage-backend=etcd3 --tls-cert-file=/etc/kubernetes/certs/apiserver.crt --tls-cipher-suites=TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA --tls-private-key-file=/etc/kubernetes/certs/apiserver.key --v=2] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:etc-kubernetes ReadOnly:false MountPath:/etc/kubernetes SubPath: MountPropagation:<nil> SubPathExpr:} {Name:etc-ssl-certs ReadOnly:false MountPath:/etc/ssl/certs SubPath: MountPropagation:<nil> SubPathExpr:} {Name:var-lib-kubelet ReadOnly:false MountPath:/var/lib/kubelet SubPath: MountPropagation:<nil> SubPathExpr:} {Name:msi ReadOnly:true MountPath:/var/lib/waagent/ManagedIdentity-Settings SubPath: MountPropagation:<nil> SubPathExpr:} {Name:sock ReadOnly:false MountPath:/opt SubPath: MountPropagation:<nil> SubPathExpr:} {Name:auditlog ReadOnly:false MountPath:/var/log/kubeaudit SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false}] EphemeralContainers:[] RestartPolicy:Always TerminationGracePeriodSeconds:0xc000b77210 ActiveDeadlineSeconds:<nil> DNSPolicy:ClusterFirst NodeSelector:map[] ServiceAccountName: DeprecatedServiceAccount: AutomountServiceAccountToken:<nil> NodeName:k8s-master-11903559-0 HostNetwork:true HostPID:false HostIPC:false ShareProcessNamespace:<nil> SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} ImagePullSecrets:[] Hostname: Subdomain: Affinity:nil SchedulerName:default-scheduler Tolerations:[{Key: Operator:Exists Value: Effect:NoExecute TolerationSeconds:<nil>}] HostAliases:[] PriorityClassName:system-node-critical 
Priority:0xc000b77218 DNSConfig:nil ReadinessGates:[] RuntimeClassName:<nil> EnableServiceLinks:0xc000b7721c PreemptionPolicy:0xc000295370 Overhead:map[] TopologySpreadConstraints:[] SetHostnameAsFQDN:<nil> OS:nil}. With volumes: [{Name:etc-kubernetes VolumeSource:{HostPath:&HostPathVolumeSource{Path:/etc/kubernetes,Type:*,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:etc-ssl-certs VolumeSource:{HostPath:&HostPathVolumeSource{Path:/etc/ssl/certs,Type:*,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:var-lib-kubelet VolumeSource:{HostPath:&HostPathVolumeSource{Path:/var/lib/kubelet,Type:*,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:msi VolumeSource:{HostPath:&HostPathVolumeSource{Path:/var/lib/waagent/ManagedIdentity-Settings,Type:*,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:sock VolumeSource:{HostPath:&HostPathVolumeSource{Path:/opt,Type:*,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:auditlog VolumeSource:{HostPath:&HostPathVolumeSource{Path:/var/log/kubeaudit,Type:*,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}}]"  
I0624 21:34:55.652071       1 common.go:580]  "msg"="Pod kube-apiserver-k8s-master-11903559-0: Skipping Volume {etc-kubernetes {&HostPathVolumeSource{Path:/etc/kubernetes,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}. No persistent volume exists."  
I0624 21:34:55.652106       1 common.go:580]  "msg"="Pod kube-apiserver-k8s-master-11903559-0: Skipping Volume {etc-ssl-certs {&HostPathVolumeSource{Path:/etc/ssl/certs,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}. No persistent volume exists."  
I0624 21:34:55.652147       1 common.go:580]  "msg"="Pod kube-apiserver-k8s-master-11903559-0: Skipping Volume {var-lib-kubelet {&HostPathVolumeSource{Path:/var/lib/kubelet,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}. No persistent volume exists."  
I0624 21:34:55.652198       1 common.go:580]  "msg"="Pod kube-apiserver-k8s-master-11903559-0: Skipping Volume {msi {&HostPathVolumeSource{Path:/var/lib/waagent/ManagedIdentity-Settings,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}. No persistent volume exists."  
I0624 21:34:55.652229       1 common.go:580]  "msg"="Pod kube-apiserver-k8s-master-11903559-0: Skipping Volume {sock {&HostPathVolumeSource{Path:/opt,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}. No persistent volume exists."  
... skipping 2 lines ...
I0624 21:34:55.652344       1 pod.go:91]  "msg"="Creating replicas for pod kube-system/kube-apiserver-k8s-master-11903559-0." "disk.csi.azure.com/request-id"="7d5732af-f405-11ec-88aa-0022483e7c98" "disk.csi/azure.com/pod-name"="kube-system/kube-apiserver-k8s-master-11903559-0" 
I0624 21:34:55.652385       1 common.go:439]  "msg"="Getting requested volumes for pod (kube-system/kube-apiserver-k8s-master-11903559-0)." "disk.csi.azure.com/request-id"="7d5732af-f405-11ec-88aa-0022483e7c98" "disk.csi/azure.com/pod-name"="kube-system/kube-apiserver-k8s-master-11903559-0" 
I0624 21:34:55.652425       1 pod.go:99]  "msg"="Pod kube-system/kube-apiserver-k8s-master-11903559-0 has 0 volumes. Volumes: []" "disk.csi.azure.com/request-id"="7d5732af-f405-11ec-88aa-0022483e7c98" "disk.csi/azure.com/pod-name"="kube-system/kube-apiserver-k8s-master-11903559-0" 
I0624 21:34:55.652456       1 pod.go:89]  "msg"="Workflow completed with success." "disk.csi.azure.com/request-id"="7d5732af-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcilePod).createReplicas" "disk.csi/azure.com/pod-name"="kube-system/kube-apiserver-k8s-master-11903559-0" "latency"=121002 
I0624 21:34:55.652546       1 common.go:544]  "msg"="Adding pod azure-ip-masq-agent-qz9s4 to shared map with keyName kube-system/azure-ip-masq-agent-qz9s4."  
I0624 21:34:55.652702       1 common.go:2018]  "msg"="Workflow completed with success." "disk.csi.azure.com/request-id"="7d572637-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*SharedState).tryCreateFailedReplicas" "latency"=6400 
I0624 21:34:55.651415       1 node_availability.go:59] AzDiskControllerManager "msg"="Node is now available. Will requeue failed replica creation requests." "controller"="nodeavailability" "disk.csi.azure.com/node-name"="k8s-master-11903559-0" "namespace"="azure-disk-csi" "partition"="csi-azuredisk-controller" 
I0624 21:34:55.652985       1 common.go:544]  "msg"="Adding pod azure-ip-masq-agent-xskgz to shared map with keyName kube-system/azure-ip-masq-agent-xskgz."  
I0624 21:34:55.653144       1 common.go:2018]  "msg"="Workflow completed with success." "disk.csi.azure.com/request-id"="7d574b33-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*SharedState).tryCreateFailedReplicas" "latency"=6400 
I0624 21:34:55.652872       1 common.go:550]  "msg"="Pod spec of pod azure-ip-masq-agent-qz9s4 is: {Volumes:[{Name:azure-ip-masq-agent-config-volume VolumeSource:{HostPath:nil EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:azure-ip-masq-agent-config,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,} VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:kube-api-access-p9snt VolumeSource:{HostPath:nil EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,} PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}}] InitContainers:[] Containers:[{Name:azure-ip-masq-agent Image:mcr.microsoft.com/oss/kubernetes/ip-masq-agent:v2.5.0 Command:[] Args:[--enable-ipv6=false] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[cpu:{i:{value:50 scale:-3} d:{Dec:<nil>} s:50m Format:DecimalSI} memory:{i:{value:262144000 scale:0} d:{Dec:<nil>} s:250Mi Format:BinarySI}] Requests:map[cpu:{i:{value:50 scale:-3} d:{Dec:<nil>} s:50m Format:DecimalSI} memory:{i:{value:52428800 scale:0} d:{Dec:<nil>} s:50Mi Format:BinarySI}]} VolumeMounts:[{Name:azure-ip-masq-agent-config-volume ReadOnly:false MountPath:/etc/config SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-p9snt ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} Stdin:false StdinOnce:false TTY:false}] EphemeralContainers:[] RestartPolicy:Always TerminationGracePeriodSeconds:0xc0006bc408 ActiveDeadlineSeconds:<nil> DNSPolicy:ClusterFirst NodeSelector:map[kubernetes.io/os:linux] ServiceAccountName:default DeprecatedServiceAccount:default AutomountServiceAccountToken:<nil> NodeName:k8s-agentpool1-11903559-0 HostNetwork:true HostPID:false HostIPC:false 
ShareProcessNamespace:<nil> SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} ImagePullSecrets:[] Hostname: Subdomain: Affinity:&Affinity{NodeAffinity:&NodeAffinity{RequiredDuringSchedulingIgnoredDuringExecution:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{},MatchFields:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:metadata.name,Operator:In,Values:[k8s-agentpool1-11903559-0],},},},},},PreferredDuringSchedulingIgnoredDuringExecution:[]PreferredSchedulingTerm{},},PodAffinity:nil,PodAntiAffinity:nil,} SchedulerName:default-scheduler Tolerations:[{Key:CriticalAddonsOnly Operator:Exists Value: Effect: TolerationSeconds:<nil>} {Key:node-role.kubernetes.io/master Operator:Equal Value:true Effect:NoSchedule TolerationSeconds:<nil>} {Key: Operator:Exists Value: Effect:NoExecute TolerationSeconds:<nil>} {Key: Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:node.kubernetes.io/not-ready Operator:Exists Value: Effect:NoExecute TolerationSeconds:<nil>} {Key:node.kubernetes.io/unreachable Operator:Exists Value: Effect:NoExecute TolerationSeconds:<nil>} {Key:node.kubernetes.io/disk-pressure Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:node.kubernetes.io/memory-pressure Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:node.kubernetes.io/pid-pressure Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:node.kubernetes.io/unschedulable Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:node.kubernetes.io/network-unavailable Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>}] HostAliases:[] PriorityClassName:system-node-critical Priority:0xc0006bc410 DNSConfig:nil ReadinessGates:[] RuntimeClassName:<nil> EnableServiceLinks:0xc0006bc414 PreemptionPolicy:0xc000810a50 Overhead:map[] TopologySpreadConstraints:[] SetHostnameAsFQDN:<nil> OS:nil}. 
With volumes: [{Name:azure-ip-masq-agent-config-volume VolumeSource:{HostPath:nil EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:azure-ip-masq-agent-config,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,} VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:kube-api-access-p9snt VolumeSource:{HostPath:nil EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,} PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}}]"  
I0624 21:34:55.653458       1 common.go:544]  "msg"="Adding pod csi-azuredisk-node-xq9j7 to shared map with keyName kube-system/csi-azuredisk-node-xq9j7."  
I0624 21:34:55.653607       1 common.go:580]  "msg"="Pod azure-ip-masq-agent-qz9s4: Skipping Volume {azure-ip-masq-agent-config-volume {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil &ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:azure-ip-masq-agent-config,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil}}. No persistent volume exists."  
I0624 21:34:55.653806       1 common.go:580]  "msg"="Pod azure-ip-masq-agent-qz9s4: Skipping Volume {kube-api-access-p9snt {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil &ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,} nil nil nil nil nil}}. No persistent volume exists."  
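
Editor's note: the two "Skipping Volume ... No persistent volume exists." entries above show the controller filtering a pod's volumes down to the ones backed by a PersistentVolumeClaim before it considers replica attachments; ConfigMap, Projected, HostPath and similar sources are ignored. A minimal sketch of that filtering step against the standard k8s.io/api types (the helper name getPersistentClaimNames is hypothetical, not the driver's actual function):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// getPersistentClaimNames is a hypothetical helper mirroring the behaviour in the
// log: volumes whose source is not a PersistentVolumeClaim are skipped, because
// only PVC-backed volumes can resolve to an Azure disk that needs replicas.
func getPersistentClaimNames(pod *v1.Pod) []string {
	var claims []string
	for _, vol := range pod.Spec.Volumes {
		if vol.PersistentVolumeClaim == nil {
			// Corresponds to the "Skipping Volume ... No persistent volume exists." entries.
			continue
		}
		claims = append(claims, vol.PersistentVolumeClaim.ClaimName)
	}
	return claims
}

func main() {
	pod := &v1.Pod{Spec: v1.PodSpec{Volumes: []v1.Volume{
		{Name: "config", VolumeSource: v1.VolumeSource{ConfigMap: &v1.ConfigMapVolumeSource{}}},
		{Name: "data", VolumeSource: v1.VolumeSource{PersistentVolumeClaim: &v1.PersistentVolumeClaimVolumeSource{ClaimName: "pvc-azuredisk"}}},
	}}}
	fmt.Println(getPersistentClaimNames(pod)) // [pvc-azuredisk]
}
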
... skipping 92 lines ...
I0624 21:34:55.660152       1 common.go:603]  "msg"="Storing pod kube-controller-manager-k8s-master-11903559-0 and claim [] to podToClaimsMap map."  
I0624 21:34:55.660175       1 pod.go:91]  "msg"="Creating replicas for pod kube-system/kube-controller-manager-k8s-master-11903559-0." "disk.csi.azure.com/request-id"="7d586500-f405-11ec-88aa-0022483e7c98" "disk.csi/azure.com/pod-name"="kube-system/kube-controller-manager-k8s-master-11903559-0" 
I0624 21:34:55.660195       1 common.go:439]  "msg"="Getting requested volumes for pod (kube-system/kube-controller-manager-k8s-master-11903559-0)." "disk.csi.azure.com/request-id"="7d586500-f405-11ec-88aa-0022483e7c98" "disk.csi/azure.com/pod-name"="kube-system/kube-controller-manager-k8s-master-11903559-0" 
I0624 21:34:55.660216       1 pod.go:99]  "msg"="Pod kube-system/kube-controller-manager-k8s-master-11903559-0 has 0 volumes. Volumes: []" "disk.csi.azure.com/request-id"="7d586500-f405-11ec-88aa-0022483e7c98" "disk.csi/azure.com/pod-name"="kube-system/kube-controller-manager-k8s-master-11903559-0" 
I0624 21:34:55.660241       1 pod.go:89]  "msg"="Workflow completed with success." "disk.csi.azure.com/request-id"="7d586500-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcilePod).createReplicas" "disk.csi/azure.com/pod-name"="kube-system/kube-controller-manager-k8s-master-11903559-0" "latency"=66702 
I0624 21:34:55.660323       1 common.go:544]  "msg"="Adding pod csi-azuredisk-controller-6f554768d6-fq92d to shared map with keyName kube-system/csi-azuredisk-controller-6f554768d6-fq92d."  
I0624 21:34:55.660686       1 common.go:550]  "msg"="Pod spec of pod csi-azuredisk-controller-6f554768d6-fq92d is: {Volumes:[{Name:socket-dir VolumeSource:{HostPath:nil EmptyDir:&EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,} GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:azure-cred VolumeSource:{HostPath:&HostPathVolumeSource{Path:/etc/kubernetes/,Type:*DirectoryOrCreate,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:kube-api-access-zmzt2 VolumeSource:{HostPath:nil EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,} PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}}] InitContainers:[] Containers:[{Name:csi-provisioner-disk Image:mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.1.0 Command:[] Args:[--feature-gates=Topology=true --csi-address=$(ADDRESS) --v=2 --timeout=15s --leader-election --leader-election-namespace=kube-system --worker-threads=40 --extra-create-metadata=true --strict-topology=true] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ADDRESS Value:/csi/csi.sock ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-zmzt2 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:csi-attacher 
Image:mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.4.0 Command:[] Args:[-v=2 -csi-address=$(ADDRESS) -timeout=600s -leader-election --leader-election-namespace=kube-system -worker-threads=500] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ADDRESS Value:/csi/csi.sock ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-zmzt2 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:csi-snapshotter Image:mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1 Command:[] Args:[-csi-address=$(ADDRESS) -leader-election --leader-election-namespace=kube-system -v=2] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ADDRESS Value:/csi/csi.sock ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:104857600 scale:0} d:{Dec:<nil>} s:100Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-zmzt2 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:csi-resizer Image:mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.4.0 Command:[] Args:[-csi-address=$(ADDRESS) -v=2 -leader-election --leader-election-namespace=kube-system -handle-volume-inuse-error=false -feature-gates=RecoverVolumeExpansionFailure=true -timeout=240s] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ADDRESS Value:/csi/csi.sock ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-zmzt2 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:liveness-probe Image:mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.5.0 Command:[] Args:[--csi-address=/csi/csi.sock --probe-timeout=3s --health-port=29602 --v=2] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[memory:{i:{value:104857600 scale:0} d:{Dec:<nil>} s:100Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m 
Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-zmzt2 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:azuredisk Image:k8sprow.azurecr.io/azuredisk-csi:latest-v2-5f5939f86db107e671b4778e00fd0672597e49a8 Command:[] Args:[--v=5 --endpoint=$(CSI_ENDPOINT) --metrics-address=0.0.0.0:29604 --is-controller-plugin=true --enable-perf-optimization=true --disable-avset-nodes=false --drivername=disk.csi.azure.com --driver-object-namespace=azure-disk-csi --leader-election-namespace=kube-system --cloud-config-secret-name=azure-cloud-provider --cloud-config-secret-namespace=kube-system --custom-user-agent= --user-agent-suffix=e2e-test --allow-empty-cloud-config=false] WorkingDir: Ports:[{Name:healthz HostPort:29602 ContainerPort:29602 Protocol:TCP HostIP:} {Name:metrics HostPort:29604 ContainerPort:29604 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:AZURE_CREDENTIAL_FILE Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:azure-cred-file,},Key:path,Optional:*true,},SecretKeyRef:nil,}} {Name:CSI_ENDPOINT Value:unix:///csi/csi.sock ValueFrom:nil} {Name:AZURE_GO_SDK_LOG_LEVEL Value: ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:azure-cred ReadOnly:false MountPath:/etc/kubernetes/ SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-zmzt2 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{1 0 healthz},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,} ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false}] EphemeralContainers:[] RestartPolicy:Always TerminationGracePeriodSeconds:0xc000bdfda8 ActiveDeadlineSeconds:<nil> DNSPolicy:ClusterFirst NodeSelector:map[kubernetes.io/os:linux] ServiceAccountName:csi-azuredisk-controller-sa DeprecatedServiceAccount:csi-azuredisk-controller-sa AutomountServiceAccountToken:<nil> NodeName:k8s-agentpool1-11903559-0 HostNetwork:true HostPID:false HostIPC:false ShareProcessNamespace:<nil> SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} ImagePullSecrets:[] Hostname: Subdomain: Affinity:nil 
SchedulerName:default-scheduler Tolerations:[{Key:node-role.kubernetes.io/master Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:node-role.kubernetes.io/controlplane Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:node-role.kubernetes.io/control-plane Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:node.kubernetes.io/not-ready Operator:Exists Value: Effect:NoExecute TolerationSeconds:0xc000bdfdb0} {Key:node.kubernetes.io/unreachable Operator:Exists Value: Effect:NoExecute TolerationSeconds:0xc000bdfdb8}] HostAliases:[] PriorityClassName:system-cluster-critical Priority:0xc000bdfdc0 DNSConfig:nil ReadinessGates:[] RuntimeClassName:<nil> EnableServiceLinks:0xc000bdfdc4 PreemptionPolicy:0xc000dbc210 Overhead:map[] TopologySpreadConstraints:[] SetHostnameAsFQDN:<nil> OS:nil}. With volumes: [{Name:socket-dir VolumeSource:{HostPath:nil EmptyDir:&EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,} GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:azure-cred VolumeSource:{HostPath:&HostPathVolumeSource{Path:/etc/kubernetes/,Type:*DirectoryOrCreate,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:kube-api-access-zmzt2 VolumeSource:{HostPath:nil EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,} PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}}]"  
I0624 21:34:55.660754       1 common.go:580]  "msg"="Pod csi-azuredisk-controller-6f554768d6-fq92d: Skipping Volume {socket-dir {nil &EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}. No persistent volume exists."  
I0624 21:34:55.660773       1 common.go:580]  "msg"="Pod csi-azuredisk-controller-6f554768d6-fq92d: Skipping Volume {azure-cred {&HostPathVolumeSource{Path:/etc/kubernetes/,Type:*DirectoryOrCreate,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}. No persistent volume exists."  
I0624 21:34:55.661294       1 common.go:550]  "msg"="Pod spec of pod kube-addon-manager-k8s-master-11903559-0 is: {Volumes:[{Name:addons VolumeSource:{HostPath:&HostPathVolumeSource{Path:/etc/kubernetes/addons,Type:*,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:msi VolumeSource:{HostPath:&HostPathVolumeSource{Path:/var/lib/waagent/ManagedIdentity-Settings,Type:*,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:var-lib-kubelet VolumeSource:{HostPath:&HostPathVolumeSource{Path:/var/lib/kubelet,Type:*,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:etc-kubernetes VolumeSource:{HostPath:&HostPathVolumeSource{Path:/etc/kubernetes,Type:*,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}}] InitContainers:[] Containers:[{Name:kube-addon-manager Image:mcr.microsoft.com/oss/kubernetes/kube-addon-manager:v9.1.5 Command:[] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:KUBECONFIG Value:/var/lib/kubelet/kubeconfig ValueFrom:nil} {Name:ADDON_PATH Value:/etc/kubernetes/addons ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:5 scale:-3} d:{Dec:<nil>} s:5m Format:DecimalSI} memory:{i:{value:52428800 scale:0} d:{Dec:<nil>} s:50Mi Format:BinarySI}]} VolumeMounts:[{Name:addons ReadOnly:true MountPath:/etc/kubernetes/addons SubPath: MountPropagation:<nil> SubPathExpr:} {Name:msi ReadOnly:true MountPath:/var/lib/waagent/ManagedIdentity-Settings SubPath: MountPropagation:<nil> SubPathExpr:} {Name:var-lib-kubelet ReadOnly:true MountPath:/var/lib/kubelet SubPath: MountPropagation:<nil> SubPathExpr:} {Name:etc-kubernetes ReadOnly:true MountPath:/etc/kubernetes SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false}] EphemeralContainers:[] RestartPolicy:Always TerminationGracePeriodSeconds:0xc00096c628 ActiveDeadlineSeconds:<nil> DNSPolicy:ClusterFirst NodeSelector:map[] ServiceAccountName: DeprecatedServiceAccount: AutomountServiceAccountToken:<nil> 
NodeName:k8s-master-11903559-0 HostNetwork:true HostPID:false HostIPC:false ShareProcessNamespace:<nil> SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} ImagePullSecrets:[] Hostname: Subdomain: Affinity:nil SchedulerName:default-scheduler Tolerations:[{Key: Operator:Exists Value: Effect:NoExecute TolerationSeconds:<nil>}] HostAliases:[] PriorityClassName:system-node-critical Priority:0xc00096c630 DNSConfig:nil ReadinessGates:[] RuntimeClassName:<nil> EnableServiceLinks:0xc00096c634 PreemptionPolicy:0xc000946ae0 Overhead:map[] TopologySpreadConstraints:[] SetHostnameAsFQDN:<nil> OS:nil}. With volumes: [{Name:addons VolumeSource:{HostPath:&HostPathVolumeSource{Path:/etc/kubernetes/addons,Type:*,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:msi VolumeSource:{HostPath:&HostPathVolumeSource{Path:/var/lib/waagent/ManagedIdentity-Settings,Type:*,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:var-lib-kubelet VolumeSource:{HostPath:&HostPathVolumeSource{Path:/var/lib/kubelet,Type:*,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:etc-kubernetes VolumeSource:{HostPath:&HostPathVolumeSource{Path:/etc/kubernetes,Type:*,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}}]"  
I0624 21:34:55.661369       1 common.go:580]  "msg"="Pod kube-addon-manager-k8s-master-11903559-0: Skipping Volume {addons {&HostPathVolumeSource{Path:/etc/kubernetes/addons,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}. No persistent volume exists."  
I0624 21:34:55.661430       1 common.go:580]  "msg"="Pod kube-addon-manager-k8s-master-11903559-0: Skipping Volume {msi {&HostPathVolumeSource{Path:/var/lib/waagent/ManagedIdentity-Settings,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}. No persistent volume exists."  
I0624 21:34:55.661470       1 common.go:580]  "msg"="Pod csi-azuredisk-controller-6f554768d6-fq92d: Skipping Volume {kube-api-access-zmzt2 {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil &ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,} nil nil nil nil nil}}. No persistent volume exists."  
... skipping 14 lines ...
I0624 21:34:55.662340       1 common.go:550]  "msg"="Pod spec of pod coredns-autoscaler-cc76d9bff-w5tj9 is: {Volumes:[{Name:kube-api-access-b4mpz VolumeSource:{HostPath:nil EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,} PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}}] InitContainers:[] Containers:[{Name:autoscaler Image:mcr.microsoft.com/oss/kubernetes/autoscaler/cluster-proportional-autoscaler:1.8.5 Command:[/cluster-proportional-autoscaler --namespace=kube-system --configmap=coredns-autoscaler --target=Deployment/coredns --default-params={\"linear\":{\"coresPerReplica\":512,\"nodesPerReplica\":32,\"min\":1}} --logtostderr=true --v=2] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:20 scale:-3} d:{Dec:<nil>} s:20m Format:DecimalSI} memory:{i:{value:10485760 scale:0} d:{Dec:<nil>} s:10Mi Format:BinarySI}]} VolumeMounts:[{Name:kube-api-access-b4mpz ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false}] EphemeralContainers:[] RestartPolicy:Always TerminationGracePeriodSeconds:0xc000b1b6b8 ActiveDeadlineSeconds:<nil> DNSPolicy:ClusterFirst NodeSelector:map[kubernetes.io/os:linux] ServiceAccountName:coredns-autoscaler DeprecatedServiceAccount:coredns-autoscaler AutomountServiceAccountToken:<nil> NodeName:k8s-agentpool1-11903559-0 HostNetwork:false HostPID:false HostIPC:false ShareProcessNamespace:<nil> SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} ImagePullSecrets:[] Hostname: Subdomain: Affinity:nil SchedulerName:default-scheduler Tolerations:[{Key:node-role.kubernetes.io/master Operator: Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:CriticalAddonsOnly Operator:Exists Value: Effect: TolerationSeconds:<nil>} {Key: Operator:Exists Value: Effect:NoExecute TolerationSeconds:<nil>} {Key: Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>}] HostAliases:[] PriorityClassName:system-cluster-critical Priority:0xc000b1b6c0 DNSConfig:nil ReadinessGates:[] RuntimeClassName:<nil> EnableServiceLinks:0xc000b1b6c4 
PreemptionPolicy:0xc000b19150 Overhead:map[] TopologySpreadConstraints:[] SetHostnameAsFQDN:<nil> OS:nil}. With volumes: [{Name:kube-api-access-b4mpz VolumeSource:{HostPath:nil EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,} PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}}]"  
I0624 21:34:55.662509       1 common.go:580]  "msg"="Pod coredns-autoscaler-cc76d9bff-w5tj9: Skipping Volume {kube-api-access-b4mpz {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil &ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,} nil nil nil nil nil}}. No persistent volume exists."  
I0624 21:34:55.662610       1 common.go:603]  "msg"="Storing pod coredns-autoscaler-cc76d9bff-w5tj9 and claim [] to podToClaimsMap map."  
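
Editor's note: the "Storing pod ... and claim [] to podToClaimsMap map." entries record shared controller state keyed by pod, holding the PVC names each pod requests. A minimal, hypothetical sketch of such a lookup map safe for concurrent reconcilers (sync.Map is an assumption here; the driver's SharedState implementation may differ):

package main

import (
	"fmt"
	"sync"
)

// podToClaimsMap is a hypothetical stand-in for the controller's shared map:
// keyed by "namespace/name" of a pod, it holds the PVC names that pod uses so
// replica placement can look up claims without re-listing pods.
var podToClaimsMap sync.Map

func storePodClaims(podKey string, claims []string) {
	podToClaimsMap.Store(podKey, claims)
}

func claimsForPod(podKey string) []string {
	if v, ok := podToClaimsMap.Load(podKey); ok {
		return v.([]string)
	}
	return nil
}

func main() {
	// Mirrors the log: a pod with no PVC-backed volumes stores an empty claim list.
	storePodClaims("kube-system/coredns-autoscaler-cc76d9bff-w5tj9", []string{})
	fmt.Println(claimsForPod("kube-system/coredns-autoscaler-cc76d9bff-w5tj9")) // []
}
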
I0624 21:34:55.662693       1 pod.go:91]  "msg"="Creating replicas for pod kube-system/coredns-autoscaler-cc76d9bff-w5tj9." "disk.csi.azure.com/request-id"="7d58c602-f405-11ec-88aa-0022483e7c98" "disk.csi/azure.com/pod-name"="kube-system/coredns-autoscaler-cc76d9bff-w5tj9" 
I0624 21:34:55.662803       1 common.go:439]  "msg"="Getting requested volumes for pod (kube-system/coredns-autoscaler-cc76d9bff-w5tj9)." "disk.csi.azure.com/request-id"="7d58c602-f405-11ec-88aa-0022483e7c98" "disk.csi/azure.com/pod-name"="kube-system/coredns-autoscaler-cc76d9bff-w5tj9" 
I0624 21:34:55.661763       1 common.go:603]  "msg"="Storing pod kube-addon-manager-k8s-master-11903559-0 and claim [] to podToClaimsMap map."  
I0624 21:34:55.662091       1 common.go:550]  "msg"="Pod spec of pod csi-azuredisk-controller-6f554768d6-gt66f is: {Volumes:[{Name:socket-dir VolumeSource:{HostPath:nil EmptyDir:&EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,} GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:azure-cred VolumeSource:{HostPath:&HostPathVolumeSource{Path:/etc/kubernetes/,Type:*DirectoryOrCreate,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:kube-api-access-6zgnp VolumeSource:{HostPath:nil EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,} PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}}] InitContainers:[] Containers:[{Name:csi-provisioner-disk Image:mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.1.0 Command:[] Args:[--feature-gates=Topology=true --csi-address=$(ADDRESS) --v=2 --timeout=15s --leader-election --leader-election-namespace=kube-system --worker-threads=40 --extra-create-metadata=true --strict-topology=true] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ADDRESS Value:/csi/csi.sock ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-6zgnp ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:csi-attacher 
Image:mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.4.0 Command:[] Args:[-v=2 -csi-address=$(ADDRESS) -timeout=600s -leader-election --leader-election-namespace=kube-system -worker-threads=500] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ADDRESS Value:/csi/csi.sock ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-6zgnp ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:csi-snapshotter Image:mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1 Command:[] Args:[-csi-address=$(ADDRESS) -leader-election --leader-election-namespace=kube-system -v=2] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ADDRESS Value:/csi/csi.sock ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:104857600 scale:0} d:{Dec:<nil>} s:100Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-6zgnp ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:csi-resizer Image:mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.4.0 Command:[] Args:[-csi-address=$(ADDRESS) -v=2 -leader-election --leader-election-namespace=kube-system -handle-volume-inuse-error=false -feature-gates=RecoverVolumeExpansionFailure=true -timeout=240s] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ADDRESS Value:/csi/csi.sock ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-6zgnp ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:liveness-probe Image:mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.5.0 Command:[] Args:[--csi-address=/csi/csi.sock --probe-timeout=3s --health-port=29602 --v=2] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[memory:{i:{value:104857600 scale:0} d:{Dec:<nil>} s:100Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m 
Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-6zgnp ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:azuredisk Image:k8sprow.azurecr.io/azuredisk-csi:latest-v2-5f5939f86db107e671b4778e00fd0672597e49a8 Command:[] Args:[--v=5 --endpoint=$(CSI_ENDPOINT) --metrics-address=0.0.0.0:29604 --is-controller-plugin=true --enable-perf-optimization=true --disable-avset-nodes=false --drivername=disk.csi.azure.com --driver-object-namespace=azure-disk-csi --leader-election-namespace=kube-system --cloud-config-secret-name=azure-cloud-provider --cloud-config-secret-namespace=kube-system --custom-user-agent= --user-agent-suffix=e2e-test --allow-empty-cloud-config=false] WorkingDir: Ports:[{Name:healthz HostPort:29602 ContainerPort:29602 Protocol:TCP HostIP:} {Name:metrics HostPort:29604 ContainerPort:29604 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:AZURE_CREDENTIAL_FILE Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:azure-cred-file,},Key:path,Optional:*true,},SecretKeyRef:nil,}} {Name:CSI_ENDPOINT Value:unix:///csi/csi.sock ValueFrom:nil} {Name:AZURE_GO_SDK_LOG_LEVEL Value: ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:azure-cred ReadOnly:false MountPath:/etc/kubernetes/ SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-6zgnp ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{1 0 healthz},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,} ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false}] EphemeralContainers:[] RestartPolicy:Always TerminationGracePeriodSeconds:0xc000dcba88 ActiveDeadlineSeconds:<nil> DNSPolicy:ClusterFirst NodeSelector:map[kubernetes.io/os:linux] ServiceAccountName:csi-azuredisk-controller-sa DeprecatedServiceAccount:csi-azuredisk-controller-sa AutomountServiceAccountToken:<nil> NodeName:k8s-master-11903559-0 HostNetwork:true HostPID:false HostIPC:false ShareProcessNamespace:<nil> SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} ImagePullSecrets:[] Hostname: Subdomain: Affinity:nil SchedulerName:default-scheduler 
Tolerations:[{Key:node-role.kubernetes.io/master Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:node-role.kubernetes.io/controlplane Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:node-role.kubernetes.io/control-plane Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:node.kubernetes.io/not-ready Operator:Exists Value: Effect:NoExecute TolerationSeconds:0xc000dcba90} {Key:node.kubernetes.io/unreachable Operator:Exists Value: Effect:NoExecute TolerationSeconds:0xc000dcba98}] HostAliases:[] PriorityClassName:system-cluster-critical Priority:0xc000dcbaa0 DNSConfig:nil ReadinessGates:[] RuntimeClassName:<nil> EnableServiceLinks:0xc000dcbaa4 PreemptionPolicy:0xc000dbd800 Overhead:map[] TopologySpreadConstraints:[] SetHostnameAsFQDN:<nil> OS:nil}. With volumes: [{Name:socket-dir VolumeSource:{HostPath:nil EmptyDir:&EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,} GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:azure-cred VolumeSource:{HostPath:&HostPathVolumeSource{Path:/etc/kubernetes/,Type:*DirectoryOrCreate,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:kube-api-access-6zgnp VolumeSource:{HostPath:nil EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,} PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}}]"  
I0624 21:34:55.663032       1 common.go:580]  "msg"="Pod csi-azuredisk-controller-6f554768d6-gt66f: Skipping Volume {socket-dir {nil &EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}. No persistent volume exists."  
I0624 21:34:55.663120       1 common.go:580]  "msg"="Pod csi-azuredisk-controller-6f554768d6-gt66f: Skipping Volume {azure-cred {&HostPathVolumeSource{Path:/etc/kubernetes/,Type:*DirectoryOrCreate,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}. No persistent volume exists."  
I0624 21:34:55.663208       1 pod.go:99]  "msg"="Pod kube-system/coredns-autoscaler-cc76d9bff-w5tj9 has 0 volumes. Volumes: []" "disk.csi.azure.com/request-id"="7d58c602-f405-11ec-88aa-0022483e7c98" "disk.csi/azure.com/pod-name"="kube-system/coredns-autoscaler-cc76d9bff-w5tj9" 
I0624 21:34:55.663218       1 common.go:580]  "msg"="Pod csi-azuredisk-controller-6f554768d6-gt66f: Skipping Volume {kube-api-access-6zgnp {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil &ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,} nil nil nil nil nil}}. No persistent volume exists."  
I0624 21:34:55.663329       1 common.go:603]  "msg"="Storing pod csi-azuredisk-controller-6f554768d6-gt66f and claim [] to podToClaimsMap map."  
I0624 21:34:55.663395       1 pod.go:89]  "msg"="Workflow completed with success." "disk.csi.azure.com/request-id"="7d58c602-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcilePod).createReplicas" "disk.csi/azure.com/pod-name"="kube-system/coredns-autoscaler-cc76d9bff-w5tj9" "latency"=584111 
... skipping 92 lines ...
I0624 21:34:55.825524       1 utils.go:78] GRPC call: /csi.v1.Controller/ControllerGetCapabilities
I0624 21:34:55.825687       1 utils.go:79] GRPC request: {}
I0624 21:34:55.825874       1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":9}}},{"Type":{"Rpc":{"type":13}}}]}
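
Editor's note: the ControllerGetCapabilities response above reports capability types only by number. Per the CSI spec's ControllerServiceCapability.RPC.Type enum (values reproduced here from the spec protobuf; consult the spec for the authoritative mapping), a small Go sketch decoding them:

package main

import "fmt"

// rpcTypeNames maps the numeric ControllerServiceCapability.RPC.Type values to
// their names as defined in the CSI spec's protobuf enum.
var rpcTypeNames = map[int]string{
	1:  "CREATE_DELETE_VOLUME",
	2:  "PUBLISH_UNPUBLISH_VOLUME",
	5:  "CREATE_DELETE_SNAPSHOT",
	7:  "CLONE_VOLUME",
	9:  "EXPAND_VOLUME",
	13: "SINGLE_NODE_MULTI_WRITER",
}

func main() {
	// The types advertised in the GRPC response logged above.
	for _, t := range []int{1, 2, 5, 7, 9, 13} {
		fmt.Printf("type %d => %s\n", t, rpcTypeNames[t])
	}
}
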
I0624 21:34:57.496605       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:34:59.508106       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
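
Editor's note: the recurring "successfully renewed lease kube-system/csi-azuredisk-controller" lines come from client-go's lease-based leader election, which the controller components use so that only one replica acts as the active controller at a time. A minimal sketch with client-go (lease name, identity, and timings are illustrative, not the driver's exact configuration):

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog/v2"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lease object all replicas compete for; the current holder keeps renewing
	// it, which produces the "successfully renewed lease" log lines.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "csi-azuredisk-controller", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "example-pod-identity"},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { klog.Info("started leading") },
			OnStoppedLeading: func() { klog.Info("stopped leading") },
		},
	})
}
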
I0624 21:35:00.420165       1 common.go:544]  "msg"="Adding pod csi-azuredisk-controller-6f554768d6-fq92d to shared map with keyName kube-system/csi-azuredisk-controller-6f554768d6-fq92d."  
I0624 21:35:00.420833       1 common.go:550]  "msg"="Pod spec of pod csi-azuredisk-controller-6f554768d6-fq92d is: {Volumes:[{Name:socket-dir VolumeSource:{HostPath:nil EmptyDir:&EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,} GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:azure-cred VolumeSource:{HostPath:&HostPathVolumeSource{Path:/etc/kubernetes/,Type:*DirectoryOrCreate,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:kube-api-access-zmzt2 VolumeSource:{HostPath:nil EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,} PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}}] InitContainers:[] Containers:[{Name:csi-provisioner-disk Image:mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.1.0 Command:[] Args:[--feature-gates=Topology=true --csi-address=$(ADDRESS) --v=2 --timeout=15s --leader-election --leader-election-namespace=kube-system --worker-threads=40 --extra-create-metadata=true --strict-topology=true] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ADDRESS Value:/csi/csi.sock ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-zmzt2 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:csi-attacher 
Image:mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.4.0 Command:[] Args:[-v=2 -csi-address=$(ADDRESS) -timeout=600s -leader-election --leader-election-namespace=kube-system -worker-threads=500] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ADDRESS Value:/csi/csi.sock ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-zmzt2 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:csi-snapshotter Image:mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1 Command:[] Args:[-csi-address=$(ADDRESS) -leader-election --leader-election-namespace=kube-system -v=2] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ADDRESS Value:/csi/csi.sock ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:104857600 scale:0} d:{Dec:<nil>} s:100Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-zmzt2 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:csi-resizer Image:mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.4.0 Command:[] Args:[-csi-address=$(ADDRESS) -v=2 -leader-election --leader-election-namespace=kube-system -handle-volume-inuse-error=false -feature-gates=RecoverVolumeExpansionFailure=true -timeout=240s] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ADDRESS Value:/csi/csi.sock ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-zmzt2 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:liveness-probe Image:mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.5.0 Command:[] Args:[--csi-address=/csi/csi.sock --probe-timeout=3s --health-port=29602 --v=2] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[memory:{i:{value:104857600 scale:0} d:{Dec:<nil>} s:100Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m 
Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-zmzt2 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:azuredisk Image:k8sprow.azurecr.io/azuredisk-csi:latest-v2-5f5939f86db107e671b4778e00fd0672597e49a8 Command:[] Args:[--v=5 --endpoint=$(CSI_ENDPOINT) --metrics-address=0.0.0.0:29604 --is-controller-plugin=true --enable-perf-optimization=true --disable-avset-nodes=false --drivername=disk.csi.azure.com --driver-object-namespace=azure-disk-csi --leader-election-namespace=kube-system --cloud-config-secret-name=azure-cloud-provider --cloud-config-secret-namespace=kube-system --custom-user-agent= --user-agent-suffix=e2e-test --allow-empty-cloud-config=false] WorkingDir: Ports:[{Name:healthz HostPort:29602 ContainerPort:29602 Protocol:TCP HostIP:} {Name:metrics HostPort:29604 ContainerPort:29604 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:AZURE_CREDENTIAL_FILE Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:azure-cred-file,},Key:path,Optional:*true,},SecretKeyRef:nil,}} {Name:CSI_ENDPOINT Value:unix:///csi/csi.sock ValueFrom:nil} {Name:AZURE_GO_SDK_LOG_LEVEL Value: ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:azure-cred ReadOnly:false MountPath:/etc/kubernetes/ SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-zmzt2 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{1 0 healthz},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,} ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false}] EphemeralContainers:[] RestartPolicy:Always TerminationGracePeriodSeconds:0xc000fc08e8 ActiveDeadlineSeconds:<nil> DNSPolicy:ClusterFirst NodeSelector:map[kubernetes.io/os:linux] ServiceAccountName:csi-azuredisk-controller-sa DeprecatedServiceAccount:csi-azuredisk-controller-sa AutomountServiceAccountToken:<nil> NodeName:k8s-agentpool1-11903559-0 HostNetwork:true HostPID:false HostIPC:false ShareProcessNamespace:<nil> SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} ImagePullSecrets:[] Hostname: Subdomain: Affinity:nil 
SchedulerName:default-scheduler Tolerations:[{Key:node-role.kubernetes.io/master Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:node-role.kubernetes.io/controlplane Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:node-role.kubernetes.io/control-plane Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:node.kubernetes.io/not-ready Operator:Exists Value: Effect:NoExecute TolerationSeconds:0xc000fc08f0} {Key:node.kubernetes.io/unreachable Operator:Exists Value: Effect:NoExecute TolerationSeconds:0xc000fc08f8}] HostAliases:[] PriorityClassName:system-cluster-critical Priority:0xc000fc0900 DNSConfig:nil ReadinessGates:[] RuntimeClassName:<nil> EnableServiceLinks:0xc000fc0904 PreemptionPolicy:0xc000f02130 Overhead:map[] TopologySpreadConstraints:[] SetHostnameAsFQDN:<nil> OS:nil}. With volumes: [{Name:socket-dir VolumeSource:{HostPath:nil EmptyDir:&EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,} GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:azure-cred VolumeSource:{HostPath:&HostPathVolumeSource{Path:/etc/kubernetes/,Type:*DirectoryOrCreate,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:kube-api-access-zmzt2 VolumeSource:{HostPath:nil EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,} PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}}]"  
I0624 21:35:00.421135       1 common.go:580]  "msg"="Pod csi-azuredisk-controller-6f554768d6-fq92d: Skipping Volume {socket-dir {nil &EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}. No persistent volume exists."  
I0624 21:35:00.421318       1 common.go:580]  "msg"="Pod csi-azuredisk-controller-6f554768d6-fq92d: Skipping Volume {azure-cred {&HostPathVolumeSource{Path:/etc/kubernetes/,Type:*DirectoryOrCreate,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}. No persistent volume exists."  
I0624 21:35:00.421494       1 common.go:580]  "msg"="Pod csi-azuredisk-controller-6f554768d6-fq92d: Skipping Volume {kube-api-access-zmzt2 {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil &ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,} nil nil nil nil nil}}. No persistent volume exists."  
I0624 21:35:00.421641       1 common.go:603]  "msg"="Storing pod csi-azuredisk-controller-6f554768d6-fq92d and claim [] to podToClaimsMap map."  
I0624 21:35:00.421805       1 pod.go:91]  "msg"="Creating replicas for pod kube-system/csi-azuredisk-controller-6f554768d6-fq92d." "disk.csi.azure.com/request-id"="802ef58e-f405-11ec-88aa-0022483e7c98" "disk.csi/azure.com/pod-name"="kube-system/csi-azuredisk-controller-6f554768d6-fq92d" 
I0624 21:35:00.421968       1 common.go:439]  "msg"="Getting requested volumes for pod (kube-system/csi-azuredisk-controller-6f554768d6-fq92d)." "disk.csi.azure.com/request-id"="802ef58e-f405-11ec-88aa-0022483e7c98" "disk.csi/azure.com/pod-name"="kube-system/csi-azuredisk-controller-6f554768d6-fq92d" 
... skipping 10 lines ...
I0624 21:35:06.693586       1 utils.go:78] GRPC call: /csi.v1.Controller/CreateVolume
I0624 21:35:06.693615       1 utils.go:79] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.disk.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.disk.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":10737418240},"name":"pvc-a0a33972-1201-4df9-913f-917493004642","parameters":{"Kind":"managed","csi.storage.k8s.io/pv/name":"pvc-a0a33972-1201-4df9-913f-917493004642","csi.storage.k8s.io/pvc/name":"pvc-h5nq7","csi.storage.k8s.io/pvc/namespace":"azuredisk-8655"},"volume_capabilities":[{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":7}}]}
I0624 21:35:06.694223       1 crdprovisioner.go:234]  "msg"="Creating AzVolume CRI" "csi.storage.k8s.io/pv/name"="pvc-a0a33972-1201-4df9-913f-917493004642" "disk.csi.azure.com/request-id"="83ec08f8-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" 
I0624 21:35:06.708737       1 crdprovisioner.go:242]  "msg"="Successfully created AzVolume CRI" "csi.storage.k8s.io/pv/name"="pvc-a0a33972-1201-4df9-913f-917493004642" "disk.csi.azure.com/request-id"="83ec08f8-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" 
I0624 21:35:06.708765       1 conditionwatcher.go:113] Adding a condition function for azvolume (pvc-a0a33972-1201-4df9-913f-917493004642)
I0624 21:35:06.710567       1 conditionwatcher.go:171] found a wait entry for object (pvc-a0a33972-1201-4df9-913f-917493004642)
I0624 21:35:06.710732       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:35:06.718446       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-a0a33972-1201-4df9-913f-917493004642" "disk.csi.azure.com/request-id"="83ec08f8-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" "latency"=7489801 
I0624 21:35:06.718597       1 azvolume.go:157]  "msg"="Creating Volume..." "csi.storage.k8s.io/pv/name"="pvc-a0a33972-1201-4df9-913f-917493004642" "disk.csi.azure.com/request-id"="83ec08f8-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" 
I0624 21:35:06.719446       1 conditionwatcher.go:171] found a wait entry for object (pvc-a0a33972-1201-4df9-913f-917493004642)
I0624 21:35:06.719467       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:35:06.736288       1 azure_diskclient.go:139] Received error in disk.get.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a0a33972-1201-4df9-913f-917493004642, error: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 404, RawError: {"error":{"code":"ResourceNotFound","message":"The Resource 'Microsoft.Compute/disks/pvc-a0a33972-1201-4df9-913f-917493004642' under resource group 'kubetest-ybmpahy2' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"}}
I0624 21:35:06.736555       1 cloudprovisioner.go:246] begin to create disk(pvc-a0a33972-1201-4df9-913f-917493004642) account type(StandardSSD_LRS) rg(kubetest-ybmpahy2) location() size(10) selectedAvailabilityZone() maxShares(0)
I0624 21:35:06.772227       1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-a0a33972-1201-4df9-913f-917493004642 StorageAccountType:StandardSSD_LRS Size:10
I0624 21:35:07.552203       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:35:09.315339       1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-a0a33972-1201-4df9-913f-917493004642 StorageAccountType:StandardSSD_LRS Size:10
I0624 21:35:09.315418       1 cloudprovisioner.go:311]  "msg"="create disk(pvc-a0a33972-1201-4df9-913f-917493004642) account type(StandardSSD_LRS) rg(kubetest-ybmpahy2) location() size(10) tags(map[kubernetes.io-created-for-pv-name:pvc-a0a33972-1201-4df9-913f-917493004642 kubernetes.io-created-for-pvc-name:pvc-h5nq7 kubernetes.io-created-for-pvc-namespace:azuredisk-8655]) successfully" "csi.storage.k8s.io/pv/name"="pvc-a0a33972-1201-4df9-913f-917493004642" "disk.csi.azure.com/request-id"="83ec08f8-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" 
I0624 21:35:09.315474       1 cloudprovisioner.go:145]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-a0a33972-1201-4df9-913f-917493004642" "disk.csi.azure.com/request-id"="83ec08f8-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).CreateVolume" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" "latency"=2596681648 
I0624 21:35:09.337082       1 conditionwatcher.go:171] found a wait entry for object (pvc-a0a33972-1201-4df9-913f-917493004642)
I0624 21:35:09.337108       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:35:09.337151       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-a0a33972-1201-4df9-913f-917493004642" "disk.csi.azure.com/request-id"="83ec08f8-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" "latency"=2628324272 
I0624 21:35:09.337182       1 crdprovisioner.go:159]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-a0a33972-1201-4df9-913f-917493004642" "disk.csi.azure.com/request-id"="83ec08f8-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).CreateVolume" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" "latency"=2643095971 
I0624 21:35:09.337233       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=2.643157371 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a0a33972-1201-4df9-913f-917493004642" result_code="succeeded"
I0624 21:35:09.337266       1 utils.go:85] GRPC response: {"volume":{"accessible_topology":[{"segments":{"topology.disk.csi.azure.com/zone":""}}],"capacity_bytes":10737418240,"content_source":{"Type":{"Volume":{}}},"volume_context":{"Kind":"managed","csi.storage.k8s.io/pv/name":"pvc-a0a33972-1201-4df9-913f-917493004642","csi.storage.k8s.io/pvc/name":"pvc-h5nq7","csi.storage.k8s.io/pvc/namespace":"azuredisk-8655","kind":"Managed","requestedsizegib":"10"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a0a33972-1201-4df9-913f-917493004642"}}
I0624 21:35:09.337830       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-a0a33972-1201-4df9-913f-917493004642" "disk.csi.azure.com/request-id"="83ec08f8-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" "latency"=22317499 
I0624 21:35:09.337876       1 azvolume.go:165]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-a0a33972-1201-4df9-913f-917493004642" "disk.csi.azure.com/request-id"="83ec08f8-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerCreate.func3" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" "latency"=2619126749 
I0624 21:35:09.337906       1 workflow.go:149]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-a0a33972-1201-4df9-913f-917493004642" "disk.csi.azure.com/request-id"="83ec08f8-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerCreate" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" "latency"=2626973155 
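The CreateVolume exchange above (GRPC request at 21:35:06.693615, response at 21:35:09.337266) is an ordinary CSI gRPC call against the controller's socket. Below is a minimal sketch, not the test harness's actual code, of issuing the same call with the github.com/container-storage-interface/spec Go bindings; the unix:///csi/csi.sock target mirrors the CSI_ENDPOINT in the pod spec logged earlier, and mapping access_mode 7 to SINGLE_NODE_MULTI_WRITER is an assumption.

package main

import (
	"context"
	"log"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Dial the controller plugin's CSI socket (CSI_ENDPOINT=unix:///csi/csi.sock above).
	conn, err := grpc.Dial("unix:///csi/csi.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Build the same request the external-provisioner sent: a 10 GiB managed disk
	// with mount flags barrier=1,acl (values copied from the logged GRPC request).
	req := &csi.CreateVolumeRequest{
		Name:          "pvc-a0a33972-1201-4df9-913f-917493004642",
		CapacityRange: &csi.CapacityRange{RequiredBytes: 10 * 1024 * 1024 * 1024},
		Parameters: map[string]string{
			"Kind":                             "managed",
			"csi.storage.k8s.io/pvc/name":      "pvc-h5nq7",
			"csi.storage.k8s.io/pvc/namespace": "azuredisk-8655",
		},
		VolumeCapabilities: []*csi.VolumeCapability{{
			AccessType: &csi.VolumeCapability_Mount{
				Mount: &csi.VolumeCapability_MountVolume{MountFlags: []string{"barrier=1", "acl"}},
			},
			AccessMode: &csi.VolumeCapability_AccessMode{
				Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_MULTI_WRITER, // access_mode 7 (assumed mapping)
			},
		}},
	}

	resp, err := csi.NewControllerClient(conn).CreateVolume(context.Background(), req)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("volume_id: %s", resp.GetVolume().GetVolumeId())
}

The DeleteVolume call later in the log follows the same pattern, taking only the volume_id returned here.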
I0624 21:35:09.558543       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:35:09.788828       1 utils.go:78] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I0624 21:35:09.788862       1 utils.go:79] GRPC request: {"node_id":"k8s-agentpool1-11903559-1","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":7}},"volume_context":{"cachingMode":"ReadWrite","fsType":"","kind":"Managed"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a0a33972-1201-4df9-913f-917493004642"}
I0624 21:35:09.801651       1 conditionwatcher.go:113] Adding a condition function for azvolumeattachments (pvc-a0a33972-1201-4df9-913f-917493004642-k8s-agentpool1-11903559-1-attachment)
I0624 21:35:09.803063       1 conditionwatcher.go:171] found a wait entry for object (pvc-a0a33972-1201-4df9-913f-917493004642-k8s-agentpool1-11903559-1-attachment)
I0624 21:35:09.803432       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:35:09.809427       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="85c44d44-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" "latency"=5262071 
I0624 21:35:09.809590       1 attach_detach.go:171]  "msg"="Attaching volume" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="85c44d44-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" 
I0624 21:35:09.811539       1 conditionwatcher.go:171] found a wait entry for object (pvc-a0a33972-1201-4df9-913f-917493004642-k8s-agentpool1-11903559-1-attachment)
I0624 21:35:09.811709       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:35:09.964080       1 cloudprovisioner.go:397]  "msg"="GetDiskLun returned: -1. Initiating attaching volume \"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a0a33972-1201-4df9-913f-917493004642\" to node \"k8s-agentpool1-11903559-1\"." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="85c44d44-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" 
I0624 21:35:09.964126       1 cloudprovisioner.go:411]  "msg"="Trying to attach volume \"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a0a33972-1201-4df9-913f-917493004642\" to node \"k8s-agentpool1-11903559-1\"." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="85c44d44-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" 
I0624 21:35:10.966221       1 batch.go:224] "cloud-provider-azure: Delayed processing of batch due to start delay" type="batch" operation="attach_disk" key="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e|kubetest-ybmpahy2|k8s-agentpool1-11903559-1" delay="1s"
I0624 21:35:10.966299       1 azure_controller_common.go:306] azuredisk - trying to attach disks to node k8s-agentpool1-11903559-1: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a0a33972-1201-4df9-913f-917493004642:AttachDiskOptions{diskName: "pvc-a0a33972-1201-4df9-913f-917493004642", lun: 0}]
I0624 21:35:10.966347       1 azure_controller_standard.go:97] azureDisk - update(kubetest-ybmpahy2): vm(k8s-agentpool1-11903559-1) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a0a33972-1201-4df9-913f-917493004642:AttachDiskOptions{diskName: "pvc-a0a33972-1201-4df9-913f-917493004642", lun: 0}])
I0624 21:35:10.986158       1 conditionwatcher.go:171] found a wait entry for object (pvc-a0a33972-1201-4df9-913f-917493004642-k8s-agentpool1-11903559-1-attachment)
I0624 21:35:10.986175       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:35:10.986248       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="85c44d44-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" "latency"=1184329894 
I0624 21:35:10.986278       1 crdprovisioner.go:574]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="85c44d44-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLun" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" "latency"=1184645698 
I0624 21:35:10.986319       1 crdprovisioner.go:410]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="85c44d44-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).PublishVolume" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" "latency"=1197172966 
I0624 21:35:10.986357       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=1.197305468 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a0a33972-1201-4df9-913f-917493004642" node="k8s-agentpool1-11903559-1" result_code="succeeded"
I0624 21:35:10.986370       1 utils.go:85] GRPC response: {"publish_context":{"LUN":"0"}}
I0624 21:35:10.993513       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="85c44d44-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" "latency"=27136964 
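The publish_context {"LUN":"0"} returned above is what the node plugin later has to turn into a block device during NodeStageVolume, the step this PR is hardening. A hedged illustration, assuming the standard Azure udev layout where data-disk LUNs appear under /dev/disk/azure/scsi1 (not a claim about the driver's exact lookup code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// devicePathForLun resolves a data-disk LUN to its /dev/sd* device via the
// /dev/disk/azure/scsi1/lun<N> symlink created by the Azure udev rules.
func devicePathForLun(lun int) (string, error) {
	link := filepath.Join("/dev/disk/azure/scsi1", fmt.Sprintf("lun%d", lun))
	dev, err := filepath.EvalSymlinks(link)
	if err != nil {
		// The symlink can appear slightly after the attach completes, so a
		// caller would typically retry rather than fail immediately.
		return "", fmt.Errorf("LUN %d not visible yet: %w", lun, err)
	}
	return dev, nil
}

func main() {
	dev, err := devicePathForLun(0) // LUN 0 from the publish_context above
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("device for LUN 0:", dev)
}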
... skipping 35 lines ...
I0624 21:35:33.458003       1 utils.go:79] GRPC request: {"node_id":"k8s-agentpool1-11903559-1","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a0a33972-1201-4df9-913f-917493004642"}
I0624 21:35:33.458341       1 crdprovisioner.go:773]  "msg"="Requesting AzVolumeAttachment (pvc-a0a33972-1201-4df9-913f-917493004642-k8s-agentpool1-11903559-1-attachment) detachment" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="85c44d44-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" 
I0624 21:35:33.465227       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="85c44d44-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" "latency"=6829283 
I0624 21:35:33.480520       1 replica.go:150]  "msg"="Garbage collection of AzVolumeAttachments for AzVolume (pvc-a0a33972-1201-4df9-913f-917493004642) scheduled in 5m0s." "disk.csi.azure.com/request-id"="93e3516e-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" 
I0624 21:35:33.481052       1 conditionwatcher.go:113] Adding a condition function for azvolumeattachments (pvc-a0a33972-1201-4df9-913f-917493004642-k8s-agentpool1-11903559-1-attachment)
I0624 21:35:33.488416       1 conditionwatcher.go:171] found a wait entry for object (pvc-a0a33972-1201-4df9-913f-917493004642-k8s-agentpool1-11903559-1-attachment)
I0624 21:35:33.488614       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:35:33.489679       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="85c44d44-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" "latency"=9635717 
I0624 21:35:33.489721       1 attach_detach.go:313]  "msg"="Detaching volume" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="85c44d44-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" 
I0624 21:35:33.489919       1 cloudprovisioner.go:467]  "msg"="Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a0a33972-1201-4df9-913f-917493004642 from node k8s-agentpool1-11903559-1" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="85c44d44-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" 
I0624 21:35:33.700041       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:35:34.534416       1 batch.go:224] "cloud-provider-azure: Delayed processing of batch due to start delay" type="batch" operation="detach_disk" key="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e|kubetest-ybmpahy2|k8s-agentpool1-11903559-1" delay="1s"
I0624 21:35:34.534480       1 azure_controller_common.go:405] azuredisk - trying to detach disks from node k8s-agentpool1-11903559-1: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a0a33972-1201-4df9-913f-917493004642:pvc-a0a33972-1201-4df9-913f-917493004642]
... skipping 10 lines ...
I0624 21:35:50.396917       1 azure_controller_standard.go:201] azureDisk - update(kubetest-ybmpahy2): vm(k8s-agentpool1-11903559-1) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a0a33972-1201-4df9-913f-917493004642:pvc-a0a33972-1201-4df9-913f-917493004642]) returned with <nil>
I0624 21:35:50.396972       1 azure_controller_common.go:417] azuredisk - successfully detached disks from node k8s-agentpool1-11903559-1: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a0a33972-1201-4df9-913f-917493004642:pvc-a0a33972-1201-4df9-913f-917493004642]
I0624 21:35:50.397010       1 azure_controller_common.go:378] azureDisk - detach disk(pvc-a0a33972-1201-4df9-913f-917493004642, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a0a33972-1201-4df9-913f-917493004642) succeeded
I0624 21:35:50.397047       1 cloudprovisioner.go:477]  "msg"="detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a0a33972-1201-4df9-913f-917493004642 from node k8s-agentpool1-11903559-1 successfully" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="85c44d44-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" 
I0624 21:35:50.397090       1 cloudprovisioner.go:457]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="85c44d44-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).UnpublishVolume" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" "latency"=16907195695 
I0624 21:35:50.408357       1 conditionwatcher.go:171] found a wait entry for object (pvc-a0a33972-1201-4df9-913f-917493004642-k8s-agentpool1-11903559-1-attachment)
I0624 21:35:50.408380       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:35:50.408421       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="85c44d44-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" "latency"=16927171137 
I0624 21:35:50.408464       1 crdprovisioner.go:796]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="85c44d44-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForDetach" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" "latency"=16927426141 
I0624 21:35:50.408514       1 crdprovisioner.go:675]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="93dfedb8-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).UnpublishVolume" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" "latency"=16950208919 
I0624 21:35:50.408560       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=16.95030262 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a0a33972-1201-4df9-913f-917493004642" node="k8s-agentpool1-11903559-1" result_code="succeeded"
I0624 21:35:50.408578       1 utils.go:85] GRPC response: {}
I0624 21:35:50.410756       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="85c44d44-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" "latency"=13625366 
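Earlier in this detach sequence (21:35:33.480520) the replica controller scheduled garbage collection of the volume's AzVolumeAttachments for 5m0s later. A rough sketch of that deferred-cleanup pattern, not the controller's actual implementation:

package main

import (
	"fmt"
	"time"
)

func main() {
	// The controller logged "scheduled in 5m0s"; a short delay keeps the sketch quick.
	const gcDelay = 2 * time.Second

	gc := time.AfterFunc(gcDelay, func() {
		fmt.Println("garbage collecting replica AzVolumeAttachments for the volume")
	})

	republished := false // a new publish of the same volume before the deadline would set this
	if republished {
		gc.Stop() // cancel the pending collection instead of letting it fire
	}

	time.Sleep(gcDelay + time.Second) // let the timer fire for the demo
}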
... skipping 13 lines ...
I0624 21:36:01.830289       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:36:02.398253       1 utils.go:78] GRPC call: /csi.v1.Controller/DeleteVolume
I0624 21:36:02.398460       1 utils.go:79] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a0a33972-1201-4df9-913f-917493004642"}
I0624 21:36:02.398587       1 controllerserver_v2.go:200] deleting disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a0a33972-1201-4df9-913f-917493004642)
I0624 21:36:02.398666       1 conditionwatcher.go:113] Adding a condition function for azvolume (pvc-a0a33972-1201-4df9-913f-917493004642)
I0624 21:36:02.408314       1 conditionwatcher.go:171] found a wait entry for object (pvc-a0a33972-1201-4df9-913f-917493004642)
I0624 21:36:02.409769       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:36:02.410718       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-a0a33972-1201-4df9-913f-917493004642" "disk.csi.azure.com/request-id"="a51fe17e-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" "latency"=11860863 
I0624 21:36:02.421789       1 conditionwatcher.go:171] found a wait entry for object (pvc-a0a33972-1201-4df9-913f-917493004642)
I0624 21:36:02.422019       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:36:02.431585       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-a0a33972-1201-4df9-913f-917493004642" "disk.csi.azure.com/request-id"="83ec08f8-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" "latency"=11138953 
I0624 21:36:02.431620       1 azvolume.go:249]  "msg"="Deleting Volume..." "csi.storage.k8s.io/pv/name"="pvc-a0a33972-1201-4df9-913f-917493004642" "disk.csi.azure.com/request-id"="83ec08f8-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" 
I0624 21:36:02.431838       1 common.go:1683]  "msg"="AzVolumeAttachment clean up requested by azvolume-controller for AzVolume (pvc-a0a33972-1201-4df9-913f-917493004642)" "csi.storage.k8s.io/pv/name"="pvc-a0a33972-1201-4df9-913f-917493004642" "disk.csi.azure.com/request-id"="83ec08f8-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" 
I0624 21:36:02.431872       1 common.go:1788]  "msg"="Getting AzVolumeAttachment list for volume (pvc-a0a33972-1201-4df9-913f-917493004642)" "csi.storage.k8s.io/pv/name"="pvc-a0a33972-1201-4df9-913f-917493004642" "disk.csi.azure.com/request-id"="83ec08f8-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" 
I0624 21:36:02.431957       1 common.go:1817]  "msg"="Label selector is: disk.csi.azure.com/volume-name=pvc-a0a33972-1201-4df9-913f-917493004642." "csi.storage.k8s.io/pv/name"="pvc-a0a33972-1201-4df9-913f-917493004642" "disk.csi.azure.com/request-id"="83ec08f8-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" 
I0624 21:36:02.432076       1 conditionwatcher.go:171] found a wait entry for object (pvc-a0a33972-1201-4df9-913f-917493004642)
I0624 21:36:02.432214       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:36:02.432199       1 common.go:1681]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-a0a33972-1201-4df9-913f-917493004642" "disk.csi.azure.com/request-id"="83ec08f8-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*SharedState).cleanUpAzVolumeAttachmentByVolume" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" "latency"=523107 
I0624 21:36:03.838754       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:36:05.845305       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:36:07.736389       1 azure_managedDiskController.go:303] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a0a33972-1201-4df9-913f-917493004642
I0624 21:36:07.736520       1 cloudprovisioner.go:328]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-a0a33972-1201-4df9-913f-917493004642" "disk.csi.azure.com/request-id"="83ec08f8-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).DeleteVolume" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" "latency"=5304015396 
I0624 21:36:07.753283       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-a0a33972-1201-4df9-913f-917493004642" "disk.csi.azure.com/request-id"="83ec08f8-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" "latency"=16669833 
I0624 21:36:07.753327       1 azvolume.go:257]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-a0a33972-1201-4df9-913f-917493004642" "disk.csi.azure.com/request-id"="83ec08f8-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerDelete.func4" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" "latency"=5321674243 
I0624 21:36:07.753353       1 workflow.go:149]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-a0a33972-1201-4df9-913f-917493004642" "disk.csi.azure.com/request-id"="83ec08f8-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerDelete" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" "latency"=5332927398 
I0624 21:36:07.754333       1 conditionwatcher.go:171] found a wait entry for object (pvc-a0a33972-1201-4df9-913f-917493004642)
I0624 21:36:07.754345       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:36:07.754377       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-a0a33972-1201-4df9-913f-917493004642" "disk.csi.azure.com/request-id"="a51fe17e-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" "latency"=5335031027 
I0624 21:36:07.754406       1 crdprovisioner.go:306]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-a0a33972-1201-4df9-913f-917493004642" "disk.csi.azure.com/request-id"="a51fe17e-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).DeleteVolume" "disk.csi.azure.com/volume-name"="pvc-a0a33972-1201-4df9-913f-917493004642" "latency"=5355732910 
I0624 21:36:07.754427       1 controllerserver_v2.go:202] delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a0a33972-1201-4df9-913f-917493004642) returned with <nil>
I0624 21:36:07.754463       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=5.355851712 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a0a33972-1201-4df9-913f-917493004642" result_code="succeeded"
I0624 21:36:07.754475       1 utils.go:85] GRPC response: {}
I0624 21:36:07.853215       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
... skipping 8 lines ...
I0624 21:36:14.803045       1 utils.go:78] GRPC call: /csi.v1.Controller/CreateVolume
I0624 21:36:14.803315       1 utils.go:79] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.disk.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.disk.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":10737418240},"name":"pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e","parameters":{"csi.storage.k8s.io/pv/name":"pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e","csi.storage.k8s.io/pvc/name":"pvc-zkvmv","csi.storage.k8s.io/pvc/namespace":"azuredisk-4268","skuname":"StandardSSD_LRS"},"volume_capabilities":[{"AccessType":{"Mount":{"mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}}]}
I0624 21:36:14.803582       1 crdprovisioner.go:234]  "msg"="Creating AzVolume CRI" "csi.storage.k8s.io/pv/name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "disk.csi.azure.com/request-id"="ac84b653-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" 
I0624 21:36:14.814693       1 crdprovisioner.go:242]  "msg"="Successfully created AzVolume CRI" "csi.storage.k8s.io/pv/name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "disk.csi.azure.com/request-id"="ac84b653-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" 
I0624 21:36:14.814884       1 conditionwatcher.go:113] Adding a condition function for azvolume (pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e)
I0624 21:36:14.816580       1 conditionwatcher.go:171] found a wait entry for object (pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e)
I0624 21:36:14.816714       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:36:14.824142       1 conditionwatcher.go:171] found a wait entry for object (pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e)
I0624 21:36:14.824298       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:36:14.825870       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "disk.csi.azure.com/request-id"="ac84b653-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "latency"=9246930 
I0624 21:36:14.825902       1 azvolume.go:157]  "msg"="Creating Volume..." "csi.storage.k8s.io/pv/name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "disk.csi.azure.com/request-id"="ac84b653-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" 
I0624 21:36:14.840443       1 azure_diskclient.go:139] Received error in disk.get.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e, error: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 404, RawError: {"error":{"code":"ResourceNotFound","message":"The Resource 'Microsoft.Compute/disks/pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e' under resource group 'kubetest-ybmpahy2' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"}}
I0624 21:36:14.840530       1 cloudprovisioner.go:246] begin to create disk(pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e) account type(StandardSSD_LRS) rg(kubetest-ybmpahy2) location() size(10) selectedAvailabilityZone() maxShares(0)
I0624 21:36:14.875315       1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e StorageAccountType:StandardSSD_LRS Size:10
I0624 21:36:15.886840       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:36:17.298202       1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e StorageAccountType:StandardSSD_LRS Size:10
I0624 21:36:17.298278       1 cloudprovisioner.go:311]  "msg"="create disk(pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e) account type(StandardSSD_LRS) rg(kubetest-ybmpahy2) location() size(10) tags(map[kubernetes.io-created-for-pv-name:pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e kubernetes.io-created-for-pvc-name:pvc-zkvmv kubernetes.io-created-for-pvc-namespace:azuredisk-4268]) successfully" "csi.storage.k8s.io/pv/name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "disk.csi.azure.com/request-id"="ac84b653-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" 
I0624 21:36:17.298314       1 cloudprovisioner.go:145]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "disk.csi.azure.com/request-id"="ac84b653-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).CreateVolume" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "latency"=2472352991 
I0624 21:36:17.308347       1 conditionwatcher.go:171] found a wait entry for object (pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e)
I0624 21:36:17.308591       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:36:17.308800       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "disk.csi.azure.com/request-id"="ac84b653-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "latency"=2493697386 
I0624 21:36:17.308961       1 crdprovisioner.go:159]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "disk.csi.azure.com/request-id"="ac84b653-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).CreateVolume" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "latency"=2505420251 
I0624 21:36:17.309117       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=2.505491252 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" result_code="succeeded"
I0624 21:36:17.309143       1 utils.go:85] GRPC response: {"volume":{"accessible_topology":[{"segments":{"topology.disk.csi.azure.com/zone":""}}],"capacity_bytes":10737418240,"content_source":{"Type":{"Volume":{}}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e","csi.storage.k8s.io/pvc/name":"pvc-zkvmv","csi.storage.k8s.io/pvc/namespace":"azuredisk-4268","requestedsizegib":"10","skuname":"StandardSSD_LRS"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e"}}
I0624 21:36:17.311234       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "disk.csi.azure.com/request-id"="ac84b653-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "latency"=12873876 
I0624 21:36:17.311388       1 azvolume.go:165]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "disk.csi.azure.com/request-id"="ac84b653-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerCreate.func3" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "latency"=2485448070 
... skipping 2 lines ...
I0624 21:36:17.905547       1 utils.go:78] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I0624 21:36:17.905571       1 utils.go:79] GRPC request: {"node_id":"k8s-agentpool1-11903559-1","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"cachingMode":"ReadWrite","fsType":"","kind":"Managed"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e"}
I0624 21:36:17.913366       1 conditionwatcher.go:113] Adding a condition function for azvolumeattachments (pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e-k8s-agentpool1-11903559-1-attachment)
I0624 21:36:17.917203       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="ae5e180e-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "latency"=6231786 
I0624 21:36:17.917286       1 attach_detach.go:171]  "msg"="Attaching volume" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="ae5e180e-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" 
I0624 21:36:17.919377       1 conditionwatcher.go:171] found a wait entry for object (pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e-k8s-agentpool1-11903559-1-attachment)
I0624 21:36:17.919460       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:36:18.042093       1 cloudprovisioner.go:397]  "msg"="GetDiskLun returned: -1. Initiating attaching volume \"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e\" to node \"k8s-agentpool1-11903559-1\"." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="ae5e180e-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" 
I0624 21:36:18.042145       1 cloudprovisioner.go:411]  "msg"="Trying to attach volume \"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e\" to node \"k8s-agentpool1-11903559-1\"." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="ae5e180e-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" 
I0624 21:36:19.042961       1 batch.go:224] "cloud-provider-azure: Delayed processing of batch due to start delay" type="batch" operation="attach_disk" key="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e|kubetest-ybmpahy2|k8s-agentpool1-11903559-1" delay="1s"
I0624 21:36:19.043023       1 azure_controller_common.go:306] azuredisk - trying to attach disks to node k8s-agentpool1-11903559-1: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e:AttachDiskOptions{diskName: "pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e", lun: 0}]
I0624 21:36:19.043070       1 azure_controller_standard.go:97] azureDisk - update(kubetest-ybmpahy2): vm(k8s-agentpool1-11903559-1) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e:AttachDiskOptions{diskName: "pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e", lun: 0}])
I0624 21:36:19.052766       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="ae5e180e-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "latency"=9560431 
I0624 21:36:19.053457       1 conditionwatcher.go:171] found a wait entry for object (pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e-k8s-agentpool1-11903559-1-attachment)
I0624 21:36:19.053570       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:36:19.053671       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="ae5e180e-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "latency"=1140228735 
I0624 21:36:19.053818       1 crdprovisioner.go:574]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="ae5e180e-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLun" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "latency"=1140389737 
I0624 21:36:19.053886       1 crdprovisioner.go:410]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="ae5e180e-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).PublishVolume" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "latency"=1147960140 
I0624 21:36:19.053922       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=1.148104642 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" node="k8s-agentpool1-11903559-1" result_code="succeeded"
I0624 21:36:19.054003       1 utils.go:85] GRPC response: {"publish_context":{"LUN":"0"}}
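The lines above are one complete ControllerPublishVolume (attach) cycle: the controller sees GetDiskLun return -1, attaches the managed disk to the VM, waits on the AzVolumeAttachment condition, and returns the LUN in publish_context. For reference, a minimal sketch of issuing the same RPC against a CSI controller endpoint with the standard CSI spec Go bindings; the unix socket path and the SINGLE_NODE_WRITER access mode are illustrative assumptions (the logged request uses numeric mode 7), while the volume and node IDs are the ones logged above.

// Sketch: issue a ControllerPublishVolume call over a CSI controller gRPC
// endpoint using the CSI spec Go bindings. Socket path and access mode are
// illustrative assumptions, not taken from this job.
package main

import (
	"context"
	"fmt"
	"log"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

const volumeID = "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" +
	"/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks" +
	"/pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e"

func main() {
	// Assumed local controller socket; a real deployment exposes it inside the pod.
	conn, err := grpc.Dial("unix:///tmp/csi.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	resp, err := csi.NewControllerClient(conn).ControllerPublishVolume(
		context.Background(),
		&csi.ControllerPublishVolumeRequest{
			VolumeId: volumeID,
			NodeId:   "k8s-agentpool1-11903559-1",
			VolumeCapability: &csi.VolumeCapability{
				AccessType: &csi.VolumeCapability_Mount{
					Mount: &csi.VolumeCapability_MountVolume{FsType: "ext4"},
				},
				// The logged request carries numeric mode 7; SINGLE_NODE_WRITER
				// is used here purely for illustration.
				AccessMode: &csi.VolumeCapability_AccessMode{
					Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_WRITER,
				},
			},
		})
	if err != nil {
		log.Fatal(err)
	}
	// The driver reports the attached LUN back in publish_context, e.g. {"LUN":"0"}.
	fmt.Println(resp.GetPublishContext()["LUN"])
}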
I0624 21:36:19.060265       1 utils.go:78] GRPC call: /csi.v1.Controller/ControllerPublishVolume
... skipping 140 lines ...
I0624 21:38:48.977699       1 azure_controller_standard.go:201] azureDisk - update(kubetest-ybmpahy2): vm(k8s-agentpool1-11903559-1) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e:pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e]) returned with <nil>
I0624 21:38:48.977748       1 azure_controller_common.go:417] azuredisk - successfully detached disks from node k8s-agentpool1-11903559-1: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e:pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e]
I0624 21:38:48.977773       1 azure_controller_common.go:378] azureDisk - detach disk(pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e) succeeded
I0624 21:38:48.977824       1 cloudprovisioner.go:477]  "msg"="detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e from node k8s-agentpool1-11903559-1 successfully" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="ae5e180e-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" 
I0624 21:38:48.977860       1 cloudprovisioner.go:457]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="ae5e180e-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).UnpublishVolume" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "latency"=21997416566 
I0624 21:38:48.998989       1 conditionwatcher.go:171] found a wait entry for object (pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e-k8s-agentpool1-11903559-1-attachment)
I0624 21:38:48.999079       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:38:48.999231       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="ae5e180e-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "latency"=22017451330 
I0624 21:38:48.999301       1 crdprovisioner.go:796]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="ae5e180e-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForDetach" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "latency"=22017595131 
I0624 21:38:48.999378       1 crdprovisioner.go:675]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="fb47c219-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).UnpublishVolume" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "latency"=22055300993 
I0624 21:38:48.999455       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=22.055409295 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" node="k8s-agentpool1-11903559-1" result_code="succeeded"
I0624 21:38:48.999508       1 utils.go:85] GRPC response: {}
I0624 21:38:49.003652       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="ae5e180e-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "latency"=25734049 
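The detach above takes roughly 22 s end to end, and every step reports its duration in the structured klog fields as "latency"=<nanoseconds>. A small, self-contained sketch (not part of the driver) for pulling those values out of log lines like the ones above when comparing attach and detach timings:

// Sketch (illustrative only): extract the nanosecond "latency"=<value>
// fields from the structured log lines above.
package main

import (
	"fmt"
	"regexp"
	"strconv"
	"time"
)

var latencyRe = regexp.MustCompile(`"latency"=(\d+)`)

// latencyOf returns the latency recorded on a single log line, if present.
func latencyOf(line string) (time.Duration, bool) {
	m := latencyRe.FindStringSubmatch(line)
	if m == nil {
		return 0, false
	}
	ns, err := strconv.ParseInt(m[1], 10, 64)
	if err != nil {
		return 0, false
	}
	return time.Duration(ns), true
}

func main() {
	// Value taken from the UnpublishVolume workflow line above.
	line := `"msg"="Workflow completed with success." "latency"=21997416566`
	if d, ok := latencyOf(line); ok {
		fmt.Println(d) // prints 21.997416566s
	}
}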
... skipping 10 lines ...
I0624 21:38:53.302815       1 utils.go:78] GRPC call: /csi.v1.Controller/DeleteVolume
I0624 21:38:53.302841       1 utils.go:79] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e"}
I0624 21:38:53.302973       1 controllerserver_v2.go:200] deleting disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e)
I0624 21:38:53.303026       1 conditionwatcher.go:113] Adding a condition function for azvolume (pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e)
I0624 21:38:53.311376       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "disk.csi.azure.com/request-id"="0afdd126-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "latency"=8277580 
I0624 21:38:53.312985       1 conditionwatcher.go:171] found a wait entry for object (pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e)
I0624 21:38:53.313125       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:38:53.320487       1 conditionwatcher.go:171] found a wait entry for object (pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e)
I0624 21:38:53.320673       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:38:53.330006       1 conditionwatcher.go:171] found a wait entry for object (pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e)
I0624 21:38:53.330205       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:38:53.330672       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "disk.csi.azure.com/request-id"="ac84b653-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "latency"=9230489 
I0624 21:38:53.330828       1 azvolume.go:249]  "msg"="Deleting Volume..." "csi.storage.k8s.io/pv/name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "disk.csi.azure.com/request-id"="ac84b653-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" 
I0624 21:38:53.331055       1 common.go:1683]  "msg"="AzVolumeAttachment clean up requested by azvolume-controller for AzVolume (pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e)" "csi.storage.k8s.io/pv/name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "disk.csi.azure.com/request-id"="ac84b653-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" 
I0624 21:38:53.332267       1 common.go:1788]  "msg"="Getting AzVolumeAttachment list for volume (pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e)" "csi.storage.k8s.io/pv/name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "disk.csi.azure.com/request-id"="ac84b653-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" 
I0624 21:38:53.332320       1 common.go:1817]  "msg"="Label selector is: disk.csi.azure.com/volume-name=pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e." "csi.storage.k8s.io/pv/name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "disk.csi.azure.com/request-id"="ac84b653-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" 
I0624 21:38:53.332401       1 common.go:1681]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "disk.csi.azure.com/request-id"="ac84b653-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*SharedState).cleanUpAzVolumeAttachmentByVolume" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "latency"=1369414 
I0624 21:38:54.632764       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:38:54.632761       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:38:54.632900       1 conditionwatcher.go:171] found a wait entry for object (pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e)
I0624 21:38:54.632914       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:38:54.632780       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:38:54.652504       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:38:55.363154       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:38:55.363162       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:38:55.363172       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:38:56.658254       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:38:58.631447       1 azure_managedDiskController.go:303] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e
I0624 21:38:58.631591       1 cloudprovisioner.go:328]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "disk.csi.azure.com/request-id"="ac84b653-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).DeleteVolume" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "latency"=5298903444 
I0624 21:38:58.647145       1 conditionwatcher.go:171] found a wait entry for object (pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e)
I0624 21:38:58.647330       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:38:58.647581       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "disk.csi.azure.com/request-id"="0afdd126-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "latency"=5326284209 
I0624 21:38:58.647778       1 crdprovisioner.go:306]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "disk.csi.azure.com/request-id"="0afdd126-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).DeleteVolume" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "latency"=5344737691 
I0624 21:38:58.647912       1 controllerserver_v2.go:202] delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e) returned with <nil>
I0624 21:38:58.648047       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=5.345080297 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" result_code="succeeded"
I0624 21:38:58.648165       1 utils.go:85] GRPC response: {}
I0624 21:38:58.649016       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "disk.csi.azure.com/request-id"="ac84b653-f405-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-d7a1fe23-43ac-4fa6-9a9b-722f13ff7c7e" "latency"=17344185 
... skipping 8 lines ...
I0624 21:39:00.075552       1 utils.go:78] GRPC call: /csi.v1.Controller/CreateVolume
I0624 21:39:00.075689       1 utils.go:79] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.disk.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.disk.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":10737418240},"name":"pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70","parameters":{"csi.storage.k8s.io/pv/name":"pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70","csi.storage.k8s.io/pvc/name":"pvc-hpmvt","csi.storage.k8s.io/pvc/namespace":"azuredisk-198","skuname":"Premium_LRS"},"volume_capabilities":[{"AccessType":{"Block":{}},"access_mode":{"mode":7}}]}
I0624 21:39:00.075977       1 crdprovisioner.go:234]  "msg"="Creating AzVolume CRI" "csi.storage.k8s.io/pv/name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "disk.csi.azure.com/request-id"="0f07483e-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" 
I0624 21:39:00.086161       1 crdprovisioner.go:242]  "msg"="Successfully created AzVolume CRI" "csi.storage.k8s.io/pv/name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "disk.csi.azure.com/request-id"="0f07483e-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" 
I0624 21:39:00.086186       1 conditionwatcher.go:113] Adding a condition function for azvolume (pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70)
I0624 21:39:00.086447       1 conditionwatcher.go:171] found a wait entry for object (pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70)
I0624 21:39:00.086463       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:39:00.092080       1 conditionwatcher.go:171] found a wait entry for object (pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70)
I0624 21:39:00.092096       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:39:00.095171       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "disk.csi.azure.com/request-id"="0f07483e-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "latency"=8388601 
I0624 21:39:00.095223       1 azvolume.go:157]  "msg"="Creating Volume..." "csi.storage.k8s.io/pv/name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "disk.csi.azure.com/request-id"="0f07483e-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" 
I0624 21:39:00.113885       1 azure_diskclient.go:139] Received error in disk.get.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70, error: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 404, RawError: {"error":{"code":"ResourceNotFound","message":"The Resource 'Microsoft.Compute/disks/pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70' under resource group 'kubetest-ybmpahy2' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"}}
I0624 21:39:00.114105       1 cloudprovisioner.go:246] begin to create disk(pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70) account type(Premium_LRS) rg(kubetest-ybmpahy2) location() size(10) selectedAvailabilityZone() maxShares(0)
I0624 21:39:00.181005       1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70 StorageAccountType:Premium_LRS Size:10
I0624 21:39:00.675775       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:39:02.585283       1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70 StorageAccountType:Premium_LRS Size:10
I0624 21:39:02.585402       1 cloudprovisioner.go:311]  "msg"="create disk(pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70) account type(Premium_LRS) rg(kubetest-ybmpahy2) location() size(10) tags(map[kubernetes.io-created-for-pv-name:pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70 kubernetes.io-created-for-pvc-name:pvc-hpmvt kubernetes.io-created-for-pvc-namespace:azuredisk-198]) successfully" "csi.storage.k8s.io/pv/name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "disk.csi.azure.com/request-id"="0f07483e-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" 
I0624 21:39:02.585444       1 cloudprovisioner.go:145]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "disk.csi.azure.com/request-id"="0f07483e-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).CreateVolume" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "latency"=2490162797 
I0624 21:39:02.601271       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "disk.csi.azure.com/request-id"="0f07483e-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "latency"=15757789 
I0624 21:39:02.601314       1 azvolume.go:165]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "disk.csi.azure.com/request-id"="0f07483e-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerCreate.func3" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "latency"=2506058987 
I0624 21:39:02.601372       1 workflow.go:149]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "disk.csi.azure.com/request-id"="0f07483e-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerCreate" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "latency"=2514595490 
I0624 21:39:02.601439       1 conditionwatcher.go:171] found a wait entry for object (pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70)
I0624 21:39:02.601498       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:39:02.601547       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "disk.csi.azure.com/request-id"="0f07483e-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "latency"=2515313398 
I0624 21:39:02.601655       1 crdprovisioner.go:159]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "disk.csi.azure.com/request-id"="0f07483e-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).CreateVolume" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "latency"=2525670423 
I0624 21:39:02.601763       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=2.525787824 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" result_code="succeeded"
I0624 21:39:02.601778       1 utils.go:85] GRPC response: {"volume":{"accessible_topology":[{"segments":{"topology.disk.csi.azure.com/zone":""}}],"capacity_bytes":10737418240,"content_source":{"Type":{"Volume":{}}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70","csi.storage.k8s.io/pvc/name":"pvc-hpmvt","csi.storage.k8s.io/pvc/namespace":"azuredisk-198","requestedsizegib":"10","skuname":"Premium_LRS"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70"}}
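In the CreateVolume response above, capacity_bytes is 10737418240 and the volume context reports requestedsizegib="10": the requested byte count is rounded up to whole GiB before the managed disk is created. A hedged sketch of that round-up (illustrative arithmetic, not the driver's actual helper):

// Sketch: round a CSI capacity_range.required_bytes value up to whole GiB.
// required_bytes=10737418240 maps to a 10 GiB disk, as logged above.
package main

import "fmt"

const gib = int64(1) << 30

// roundUpGiB returns the smallest whole number of GiB covering requiredBytes.
func roundUpGiB(requiredBytes int64) int64 {
	return (requiredBytes + gib - 1) / gib
}

func main() {
	fmt.Println(roundUpGiB(10737418240)) // 10
}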
I0624 21:39:02.683079       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:39:03.109837       1 utils.go:78] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I0624 21:39:03.109951       1 utils.go:79] GRPC request: {"node_id":"k8s-agentpool1-11903559-1","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":7}},"volume_context":{"cachingMode":"ReadWrite","fsType":"","kind":"Managed"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70"}
I0624 21:39:03.115801       1 conditionwatcher.go:113] Adding a condition function for azvolumeattachments (pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70-k8s-agentpool1-11903559-1-attachment)
I0624 21:39:03.129561       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="10d643e2-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "latency"=15392285 
I0624 21:39:03.129661       1 attach_detach.go:171]  "msg"="Attaching volume" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="10d643e2-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" 
I0624 21:39:03.130645       1 conditionwatcher.go:171] found a wait entry for object (pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70-k8s-agentpool1-11903559-1-attachment)
I0624 21:39:03.130716       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:39:03.247033       1 cloudprovisioner.go:397]  "msg"="GetDiskLun returned: -1. Initiating attaching volume \"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70\" to node \"k8s-agentpool1-11903559-1\"." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="10d643e2-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" 
I0624 21:39:03.247681       1 cloudprovisioner.go:411]  "msg"="Trying to attach volume \"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70\" to node \"k8s-agentpool1-11903559-1\"." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="10d643e2-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" 
I0624 21:39:04.249019       1 batch.go:224] "cloud-provider-azure: Delayed processing of batch due to start delay" type="batch" operation="attach_disk" key="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e|kubetest-ybmpahy2|k8s-agentpool1-11903559-1" delay="1s"
I0624 21:39:04.249099       1 azure_controller_common.go:306] azuredisk - trying to attach disks to node k8s-agentpool1-11903559-1: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70:AttachDiskOptions{diskName: "pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70", lun: 0}]
I0624 21:39:04.249351       1 azure_controller_standard.go:97] azureDisk - update(kubetest-ybmpahy2): vm(k8s-agentpool1-11903559-1) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70:AttachDiskOptions{diskName: "pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70", lun: 0}])
I0624 21:39:04.258076       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="10d643e2-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "latency"=8543508 
I0624 21:39:04.258915       1 conditionwatcher.go:171] found a wait entry for object (pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70-k8s-agentpool1-11903559-1-attachment)
I0624 21:39:04.258937       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:39:04.258972       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="10d643e2-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "latency"=1142973709 
I0624 21:39:04.259000       1 crdprovisioner.go:574]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="10d643e2-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLun" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "latency"=1143206812 
I0624 21:39:04.259038       1 crdprovisioner.go:410]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="10d643e2-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).PublishVolume" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "latency"=1148881381 
I0624 21:39:04.259090       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=1.1489674810000001 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" node="k8s-agentpool1-11903559-1" result_code="succeeded"
I0624 21:39:04.259419       1 utils.go:85] GRPC response: {"publish_context":{"LUN":"0"}}
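The LUN returned in publish_context is what the node plugin later uses during NodeStageVolume to locate the attached data disk. On Azure Linux VMs, udev rules typically expose data disks under /dev/disk/azure/scsi1/lun<N>; the sketch below resolves that symlink and illustrates the convention only, not the driver's own lookup code.

// Sketch: resolve the block device for an attached Azure data disk by LUN.
// The /dev/disk/azure/scsi1/lun<N> symlink is the usual udev convention on
// Azure Linux VMs; the driver's actual lookup logic may differ.
package main

import (
	"fmt"
	"log"
	"path/filepath"
)

func devicePathForLUN(lun string) (string, error) {
	link := filepath.Join("/dev/disk/azure/scsi1", "lun"+lun)
	return filepath.EvalSymlinks(link) // e.g. /dev/sdc
}

func main() {
	// "0" is the LUN from the publish_context above.
	dev, err := devicePathForLUN("0")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(dev)
}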
I0624 21:39:04.283093       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/request-id"="11881a23-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "latency"=7455894 
... skipping 35 lines ...
I0624 21:39:28.199782       1 utils.go:79] GRPC request: {"node_id":"k8s-agentpool1-11903559-1","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70"}
I0624 21:39:28.200108       1 crdprovisioner.go:773]  "msg"="Requesting AzVolumeAttachment (pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70-k8s-agentpool1-11903559-1-attachment) detachment" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="10d643e2-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" 
I0624 21:39:28.208416       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="10d643e2-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "latency"=8254130 
I0624 21:39:28.213146       1 conditionwatcher.go:113] Adding a condition function for azvolumeattachments (pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70-k8s-agentpool1-11903559-1-attachment)
I0624 21:39:28.214454       1 replica.go:150]  "msg"="Garbage collection of AzVolumeAttachments for AzVolume (pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70) scheduled in 5m0s." "disk.csi.azure.com/request-id"="1fccdfeb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" 
I0624 21:39:28.215547       1 conditionwatcher.go:171] found a wait entry for object (pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70-k8s-agentpool1-11903559-1-attachment)
I0624 21:39:28.215567       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:39:28.221244       1 conditionwatcher.go:171] found a wait entry for object (pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70-k8s-agentpool1-11903559-1-attachment)
I0624 21:39:28.221287       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:39:28.222017       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="10d643e2-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "latency"=7865923 
I0624 21:39:28.222227       1 attach_detach.go:313]  "msg"="Detaching volume" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="10d643e2-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" 
I0624 21:39:28.222312       1 cloudprovisioner.go:467]  "msg"="Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70 from node k8s-agentpool1-11903559-1" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="10d643e2-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" 
I0624 21:39:28.791254       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:39:29.267342       1 batch.go:224] "cloud-provider-azure: Delayed processing of batch due to start delay" type="batch" operation="detach_disk" key="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e|kubetest-ybmpahy2|k8s-agentpool1-11903559-1" delay="1s"
I0624 21:39:29.267394       1 azure_controller_common.go:405] azuredisk - trying to detach disks from node k8s-agentpool1-11903559-1: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70:pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70]
... skipping 12 lines ...
I0624 21:39:44.763809       1 cloudprovisioner.go:477]  "msg"="detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70 from node k8s-agentpool1-11903559-1 successfully" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="10d643e2-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" 
I0624 21:39:44.763973       1 cloudprovisioner.go:457]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="10d643e2-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).UnpublishVolume" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "latency"=16541648027 
I0624 21:39:44.776337       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="10d643e2-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "latency"=12180189 
I0624 21:39:44.776620       1 attach_detach.go:319]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="10d643e2-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAttachDetach).triggerDetach.func3" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "latency"=16554335520 
I0624 21:39:44.776804       1 workflow.go:149]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="10d643e2-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAttachDetach).triggerDetach" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "latency"=16562652349 
I0624 21:39:44.777562       1 conditionwatcher.go:171] found a wait entry for object (pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70-k8s-agentpool1-11903559-1-attachment)
I0624 21:39:44.777579       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:39:44.777626       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="10d643e2-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "latency"=16564417569 
I0624 21:39:44.777933       1 crdprovisioner.go:796]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="10d643e2-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForDetach" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "latency"=16564749972 
I0624 21:39:44.778035       1 crdprovisioner.go:675]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="1fcaaed0-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).UnpublishVolume" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "latency"=16577954678 
I0624 21:39:44.778130       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=16.57811208 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" node="k8s-agentpool1-11903559-1" result_code="succeeded"
I0624 21:39:44.778208       1 utils.go:85] GRPC response: {}
I0624 21:39:44.858854       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
... skipping 3 lines ...
I0624 21:39:52.894731       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:39:53.713532       1 utils.go:78] GRPC call: /csi.v1.Controller/DeleteVolume
I0624 21:39:53.713600       1 utils.go:79] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70"}
I0624 21:39:53.713873       1 controllerserver_v2.go:200] deleting disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70)
I0624 21:39:53.714017       1 conditionwatcher.go:113] Adding a condition function for azvolume (pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70)
I0624 21:39:53.725479       1 conditionwatcher.go:171] found a wait entry for object (pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70)
I0624 21:39:53.725501       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:39:53.725820       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "disk.csi.azure.com/request-id"="2effcd53-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "latency"=11740134 
I0624 21:39:53.734347       1 conditionwatcher.go:171] found a wait entry for object (pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70)
I0624 21:39:53.734534       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:39:53.741350       1 conditionwatcher.go:171] found a wait entry for object (pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70)
I0624 21:39:53.741525       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:39:53.743895       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "disk.csi.azure.com/request-id"="0f07483e-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "latency"=11564031 
I0624 21:39:53.743932       1 azvolume.go:249]  "msg"="Deleting Volume..." "csi.storage.k8s.io/pv/name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "disk.csi.azure.com/request-id"="0f07483e-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" 
I0624 21:39:53.744026       1 common.go:1683]  "msg"="AzVolumeAttachment clean up requested by azvolume-controller for AzVolume (pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70)" "csi.storage.k8s.io/pv/name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "disk.csi.azure.com/request-id"="0f07483e-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" 
I0624 21:39:53.744164       1 common.go:1788]  "msg"="Getting AzVolumeAttachment list for volume (pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70)" "csi.storage.k8s.io/pv/name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "disk.csi.azure.com/request-id"="0f07483e-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" 
I0624 21:39:53.744216       1 common.go:1817]  "msg"="Label selector is: disk.csi.azure.com/volume-name=pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70." "csi.storage.k8s.io/pv/name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "disk.csi.azure.com/request-id"="0f07483e-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" 
I0624 21:39:53.744254       1 common.go:1681]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "disk.csi.azure.com/request-id"="0f07483e-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*SharedState).cleanUpAzVolumeAttachmentByVolume" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "latency"=270403 
I0624 21:39:54.635055       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:39:54.635124       1 conditionwatcher.go:171] found a wait entry for object (pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70)
I0624 21:39:54.635132       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:39:54.635139       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:39:54.635147       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:39:54.902382       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:39:55.364515       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:39:55.364534       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:39:55.364543       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:39:56.909320       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:39:58.917869       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:39:59.030647       1 azure_managedDiskController.go:303] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70
I0624 21:39:59.030716       1 cloudprovisioner.go:328]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "disk.csi.azure.com/request-id"="0f07483e-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).DeleteVolume" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "latency"=5286402769 
I0624 21:39:59.047425       1 conditionwatcher.go:171] found a wait entry for object (pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70)
I0624 21:39:59.047447       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:39:59.047485       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "disk.csi.azure.com/request-id"="2effcd53-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "latency"=5308153417 
I0624 21:39:59.047967       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "disk.csi.azure.com/request-id"="0f07483e-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "latency"=17213296 
I0624 21:39:59.048104       1 azvolume.go:257]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "disk.csi.azure.com/request-id"="0f07483e-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerDelete.func4" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "latency"=5304061270 
I0624 21:39:59.048216       1 workflow.go:149]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "disk.csi.azure.com/request-id"="0f07483e-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerDelete" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "latency"=5315898504 
I0624 21:39:59.048336       1 crdprovisioner.go:306]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "disk.csi.azure.com/request-id"="2effcd53-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).DeleteVolume" "disk.csi.azure.com/volume-name"="pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70" "latency"=5333493904 
I0624 21:39:59.048454       1 controllerserver_v2.go:202] delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-19ef42cc-e6f7-4d12-9e70-b9319acb7b70) returned with <nil>
... skipping 9 lines ...
I0624 21:40:03.993046       1 utils.go:78] GRPC call: /csi.v1.Controller/CreateVolume
I0624 21:40:03.993247       1 utils.go:79] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.disk.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.disk.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":10737418240},"name":"pvc-85e7ca04-47e3-4a07-a750-18643e916680","parameters":{"csi.storage.k8s.io/pv/name":"pvc-85e7ca04-47e3-4a07-a750-18643e916680","csi.storage.k8s.io/pvc/name":"pvc-zvnt4","csi.storage.k8s.io/pvc/namespace":"azuredisk-4115","skuname":"StandardSSD_LRS"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]}
I0624 21:40:03.993655       1 crdprovisioner.go:234]  "msg"="Creating AzVolume CRI" "csi.storage.k8s.io/pv/name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "disk.csi.azure.com/request-id"="35205476-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" 
I0624 21:40:04.012949       1 crdprovisioner.go:242]  "msg"="Successfully created AzVolume CRI" "csi.storage.k8s.io/pv/name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "disk.csi.azure.com/request-id"="35205476-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" 
I0624 21:40:04.013092       1 conditionwatcher.go:113] Adding a condition function for azvolume (pvc-85e7ca04-47e3-4a07-a750-18643e916680)
I0624 21:40:04.015811       1 conditionwatcher.go:171] found a wait entry for object (pvc-85e7ca04-47e3-4a07-a750-18643e916680)
I0624 21:40:04.015910       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:40:04.016589       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "disk.csi.azure.com/request-id"="35205476-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "latency"=9344108 
I0624 21:40:04.016702       1 azvolume.go:157]  "msg"="Creating Volume..." "csi.storage.k8s.io/pv/name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "disk.csi.azure.com/request-id"="35205476-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" 
I0624 21:40:04.035633       1 azure_diskclient.go:139] Received error in disk.get.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-85e7ca04-47e3-4a07-a750-18643e916680, error: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 404, RawError: {"error":{"code":"ResourceNotFound","message":"The Resource 'Microsoft.Compute/disks/pvc-85e7ca04-47e3-4a07-a750-18643e916680' under resource group 'kubetest-ybmpahy2' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"}}
I0624 21:40:04.035764       1 cloudprovisioner.go:246] begin to create disk(pvc-85e7ca04-47e3-4a07-a750-18643e916680) account type(StandardSSD_LRS) rg(kubetest-ybmpahy2) location() size(10) selectedAvailabilityZone() maxShares(0)
I0624 21:40:04.082162       1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-85e7ca04-47e3-4a07-a750-18643e916680 StorageAccountType:StandardSSD_LRS Size:10
I0624 21:40:04.943555       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:40:05.369807       1 reflector.go:536] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: Watch close - *v1beta2.AzDriverNode total 36 items received
I0624 21:40:06.499526       1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-85e7ca04-47e3-4a07-a750-18643e916680 StorageAccountType:StandardSSD_LRS Size:10
I0624 21:40:06.499638       1 cloudprovisioner.go:311]  "msg"="create disk(pvc-85e7ca04-47e3-4a07-a750-18643e916680) account type(StandardSSD_LRS) rg(kubetest-ybmpahy2) location() size(10) tags(map[kubernetes.io-created-for-pv-name:pvc-85e7ca04-47e3-4a07-a750-18643e916680 kubernetes.io-created-for-pvc-name:pvc-zvnt4 kubernetes.io-created-for-pvc-namespace:azuredisk-4115]) successfully" "csi.storage.k8s.io/pv/name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "disk.csi.azure.com/request-id"="35205476-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" 
I0624 21:40:06.499725       1 cloudprovisioner.go:145]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "disk.csi.azure.com/request-id"="35205476-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).CreateVolume" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "latency"=2482849981 
I0624 21:40:06.510102       1 conditionwatcher.go:171] found a wait entry for object (pvc-85e7ca04-47e3-4a07-a750-18643e916680)
I0624 21:40:06.510338       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:40:06.510499       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "disk.csi.azure.com/request-id"="35205476-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "latency"=10687024 
I0624 21:40:06.510756       1 azvolume.go:165]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "disk.csi.azure.com/request-id"="35205476-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerCreate.func3" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "latency"=2493941510 
I0624 21:40:06.510879       1 workflow.go:149]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "disk.csi.azure.com/request-id"="35205476-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerCreate" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "latency"=2503627223 
I0624 21:40:06.510586       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "disk.csi.azure.com/request-id"="35205476-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "latency"=2497328049 
I0624 21:40:06.511108       1 crdprovisioner.go:159]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "disk.csi.azure.com/request-id"="35205476-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).CreateVolume" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "latency"=2517531083 
I0624 21:40:06.511318       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=2.517829887 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-85e7ca04-47e3-4a07-a750-18643e916680" result_code="succeeded"
I0624 21:40:06.511480       1 utils.go:85] GRPC response: {"volume":{"accessible_topology":[{"segments":{"topology.disk.csi.azure.com/zone":""}}],"capacity_bytes":10737418240,"content_source":{"Type":{"Volume":{}}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-85e7ca04-47e3-4a07-a750-18643e916680","csi.storage.k8s.io/pvc/name":"pvc-zvnt4","csi.storage.k8s.io/pvc/namespace":"azuredisk-4115","requestedsizegib":"10","skuname":"StandardSSD_LRS"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-85e7ca04-47e3-4a07-a750-18643e916680"}}
I0624 21:40:06.952412       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:40:07.066368       1 utils.go:78] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I0624 21:40:07.066393       1 utils.go:79] GRPC request: {"node_id":"k8s-agentpool1-11903559-1","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"cachingMode":"ReadWrite","fsType":"","kind":"Managed"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-85e7ca04-47e3-4a07-a750-18643e916680"}
I0624 21:40:07.071973       1 conditionwatcher.go:113] Adding a condition function for azvolumeattachments (pvc-85e7ca04-47e3-4a07-a750-18643e916680-k8s-agentpool1-11903559-1-attachment)
I0624 21:40:07.078469       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="36f546fb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "latency"=6308073 
I0624 21:40:07.079434       1 conditionwatcher.go:171] found a wait entry for object (pvc-85e7ca04-47e3-4a07-a750-18643e916680-k8s-agentpool1-11903559-1-attachment)
I0624 21:40:07.079633       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:40:07.079584       1 attach_detach.go:171]  "msg"="Attaching volume" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="36f546fb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" 
I0624 21:40:07.189436       1 cloudprovisioner.go:397]  "msg"="GetDiskLun returned: -1. Initiating attaching volume \"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-85e7ca04-47e3-4a07-a750-18643e916680\" to node \"k8s-agentpool1-11903559-1\"." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="36f546fb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" 
I0624 21:40:07.189481       1 cloudprovisioner.go:411]  "msg"="Trying to attach volume \"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-85e7ca04-47e3-4a07-a750-18643e916680\" to node \"k8s-agentpool1-11903559-1\"." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="36f546fb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" 
I0624 21:40:08.190474       1 batch.go:224] "cloud-provider-azure: Delayed processing of batch due to start delay" type="batch" operation="attach_disk" key="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e|kubetest-ybmpahy2|k8s-agentpool1-11903559-1" delay="1s"
I0624 21:40:08.190600       1 azure_controller_common.go:306] azuredisk - trying to attach disks to node k8s-agentpool1-11903559-1: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-85e7ca04-47e3-4a07-a750-18643e916680:AttachDiskOptions{diskName: "pvc-85e7ca04-47e3-4a07-a750-18643e916680", lun: 0}]
I0624 21:40:08.190817       1 azure_controller_standard.go:97] azureDisk - update(kubetest-ybmpahy2): vm(k8s-agentpool1-11903559-1) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-85e7ca04-47e3-4a07-a750-18643e916680:AttachDiskOptions{diskName: "pvc-85e7ca04-47e3-4a07-a750-18643e916680", lun: 0}])
I0624 21:40:08.200978       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="36f546fb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "latency"=10140163 
I0624 21:40:08.202174       1 conditionwatcher.go:171] found a wait entry for object (pvc-85e7ca04-47e3-4a07-a750-18643e916680-k8s-agentpool1-11903559-1-attachment)
I0624 21:40:08.202196       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:40:08.202361       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="36f546fb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "latency"=1130148384 
I0624 21:40:08.202461       1 crdprovisioner.go:574]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="36f546fb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLun" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "latency"=1130492190 
I0624 21:40:08.202644       1 crdprovisioner.go:410]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="36f546fb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).PublishVolume" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "latency"=1135817051 
I0624 21:40:08.202727       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=1.136047055 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-85e7ca04-47e3-4a07-a750-18643e916680" node="k8s-agentpool1-11903559-1" result_code="succeeded"
I0624 21:40:08.202747       1 utils.go:85] GRPC response: {"publish_context":{"LUN":"0"}}
I0624 21:40:08.213195       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/request-id"="37a32c06-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "latency"=6735809 
... skipping 44 lines ...
I0624 21:40:39.145736       1 replica.go:150]  "msg"="Garbage collection of AzVolumeAttachments for AzVolume (pvc-85e7ca04-47e3-4a07-a750-18643e916680) scheduled in 5m0s." "disk.csi.azure.com/request-id"="4a1420ec-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" 
I0624 21:40:39.150094       1 conditionwatcher.go:113] Adding a condition function for azvolumeattachments (pvc-85e7ca04-47e3-4a07-a750-18643e916680-k8s-agentpool1-11903559-1-attachment)
I0624 21:40:39.156070       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="36f546fb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "latency"=10726131 
I0624 21:40:39.156105       1 attach_detach.go:313]  "msg"="Detaching volume" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="36f546fb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" 
I0624 21:40:39.156170       1 cloudprovisioner.go:467]  "msg"="Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-85e7ca04-47e3-4a07-a750-18643e916680 from node k8s-agentpool1-11903559-1" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="36f546fb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" 
I0624 21:40:39.157243       1 conditionwatcher.go:171] found a wait entry for object (pvc-85e7ca04-47e3-4a07-a750-18643e916680-k8s-agentpool1-11903559-1-attachment)
I0624 21:40:39.157263       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:40:40.249005       1 batch.go:224] "cloud-provider-azure: Delayed processing of batch due to start delay" type="batch" operation="detach_disk" key="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e|kubetest-ybmpahy2|k8s-agentpool1-11903559-1" delay="1s"
I0624 21:40:40.249057       1 azure_controller_common.go:405] azuredisk - trying to detach disks from node k8s-agentpool1-11903559-1: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-85e7ca04-47e3-4a07-a750-18643e916680:pvc-85e7ca04-47e3-4a07-a750-18643e916680]
I0624 21:40:40.249127       1 azure_controller_standard.go:154] azureDisk - detach disk: name pvc-85e7ca04-47e3-4a07-a750-18643e916680 uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-85e7ca04-47e3-4a07-a750-18643e916680
I0624 21:40:40.249152       1 azure_controller_standard.go:184] azureDisk - update(kubetest-ybmpahy2): vm(k8s-agentpool1-11903559-1) - detach disk list(k8s-agentpool1-11903559-1)%!(EXTRA map[string]string=map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-85e7ca04-47e3-4a07-a750-18643e916680:pvc-85e7ca04-47e3-4a07-a750-18643e916680])
I0624 21:40:41.092286       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:40:43.099522       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
... skipping 3 lines ...
I0624 21:40:51.136998       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:40:53.145259       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:40:54.638472       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:40:54.638504       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:40:54.638545       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:40:54.638569       1 conditionwatcher.go:171] found a wait entry for object (pvc-85e7ca04-47e3-4a07-a750-18643e916680-k8s-agentpool1-11903559-1-attachment)
I0624 21:40:54.638576       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:40:55.154105       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:40:55.366924       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:40:55.366922       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:40:55.366948       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:40:57.161642       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:40:59.174361       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:41:00.877887       1 azure_controller_standard.go:201] azureDisk - update(kubetest-ybmpahy2): vm(k8s-agentpool1-11903559-1) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-85e7ca04-47e3-4a07-a750-18643e916680:pvc-85e7ca04-47e3-4a07-a750-18643e916680]) returned with <nil>
I0624 21:41:00.877931       1 azure_controller_common.go:417] azuredisk - successfully detached disks from node k8s-agentpool1-11903559-1: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-85e7ca04-47e3-4a07-a750-18643e916680:pvc-85e7ca04-47e3-4a07-a750-18643e916680]
I0624 21:41:00.877966       1 azure_controller_common.go:378] azureDisk - detach disk(pvc-85e7ca04-47e3-4a07-a750-18643e916680, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-85e7ca04-47e3-4a07-a750-18643e916680) succeeded
I0624 21:41:00.878001       1 cloudprovisioner.go:477]  "msg"="detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-85e7ca04-47e3-4a07-a750-18643e916680 from node k8s-agentpool1-11903559-1 successfully" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="36f546fb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" 
I0624 21:41:00.878059       1 cloudprovisioner.go:457]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="36f546fb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).UnpublishVolume" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "latency"=21721905576 
I0624 21:41:00.894781       1 conditionwatcher.go:171] found a wait entry for object (pvc-85e7ca04-47e3-4a07-a750-18643e916680-k8s-agentpool1-11903559-1-attachment)
I0624 21:41:00.896071       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:41:00.896249       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="36f546fb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "latency"=21745984271 
I0624 21:41:00.896395       1 crdprovisioner.go:796]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="36f546fb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForDetach" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "latency"=21746326375 
I0624 21:41:00.896440       1 crdprovisioner.go:675]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="4a104070-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).UnpublishVolume" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "latency"=21776129040 
I0624 21:41:00.896566       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=21.776262541 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-85e7ca04-47e3-4a07-a750-18643e916680" node="k8s-agentpool1-11903559-1" result_code="succeeded"
I0624 21:41:00.896697       1 utils.go:85] GRPC response: {}
I0624 21:41:00.895270       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="36f546fb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "latency"=17135811 
... skipping 2 lines ...
I0624 21:41:01.180973       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:41:01.732141       1 utils.go:78] GRPC call: /csi.v1.Controller/DeleteVolume
I0624 21:41:01.732229       1 utils.go:79] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-85e7ca04-47e3-4a07-a750-18643e916680"}
I0624 21:41:01.732334       1 controllerserver_v2.go:200] deleting disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-85e7ca04-47e3-4a07-a750-18643e916680)
I0624 21:41:01.732437       1 conditionwatcher.go:113] Adding a condition function for azvolume (pvc-85e7ca04-47e3-4a07-a750-18643e916680)
I0624 21:41:01.743854       1 conditionwatcher.go:171] found a wait entry for object (pvc-85e7ca04-47e3-4a07-a750-18643e916680)
I0624 21:41:01.743879       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:41:01.746500       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "disk.csi.azure.com/request-id"="578a9656-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "latency"=13981571 
I0624 21:41:01.760291       1 conditionwatcher.go:171] found a wait entry for object (pvc-85e7ca04-47e3-4a07-a750-18643e916680)
I0624 21:41:01.760490       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:41:01.764730       1 conditionwatcher.go:171] found a wait entry for object (pvc-85e7ca04-47e3-4a07-a750-18643e916680)
I0624 21:41:01.765074       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:41:01.766629       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "disk.csi.azure.com/request-id"="35205476-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "latency"=10688132 
I0624 21:41:01.766666       1 azvolume.go:249]  "msg"="Deleting Volume..." "csi.storage.k8s.io/pv/name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "disk.csi.azure.com/request-id"="35205476-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" 
I0624 21:41:01.766725       1 common.go:1683]  "msg"="AzVolumeAttachment clean up requested by azvolume-controller for AzVolume (pvc-85e7ca04-47e3-4a07-a750-18643e916680)" "csi.storage.k8s.io/pv/name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "disk.csi.azure.com/request-id"="35205476-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" 
I0624 21:41:01.766800       1 common.go:1788]  "msg"="Getting AzVolumeAttachment list for volume (pvc-85e7ca04-47e3-4a07-a750-18643e916680)" "csi.storage.k8s.io/pv/name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "disk.csi.azure.com/request-id"="35205476-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" 
I0624 21:41:01.766936       1 common.go:1817]  "msg"="Label selector is: disk.csi.azure.com/volume-name=pvc-85e7ca04-47e3-4a07-a750-18643e916680." "csi.storage.k8s.io/pv/name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "disk.csi.azure.com/request-id"="35205476-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" 
I0624 21:41:01.767067       1 common.go:1681]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "disk.csi.azure.com/request-id"="35205476-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*SharedState).cleanUpAzVolumeAttachmentByVolume" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "latency"=261103 
I0624 21:41:03.188812       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:41:05.201622       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:41:07.208655       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:41:07.322744       1 azure_managedDiskController.go:303] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-85e7ca04-47e3-4a07-a750-18643e916680
I0624 21:41:07.323041       1 cloudprovisioner.go:328]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "disk.csi.azure.com/request-id"="35205476-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).DeleteVolume" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "latency"=5555672638 
I0624 21:41:07.335407       1 conditionwatcher.go:171] found a wait entry for object (pvc-85e7ca04-47e3-4a07-a750-18643e916680)
I0624 21:41:07.335426       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:41:07.337039       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "disk.csi.azure.com/request-id"="578a9656-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "latency"=5574964977 
I0624 21:41:07.337092       1 crdprovisioner.go:306]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "disk.csi.azure.com/request-id"="578a9656-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).DeleteVolume" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "latency"=5604652142 
I0624 21:41:07.337109       1 controllerserver_v2.go:202] delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-85e7ca04-47e3-4a07-a750-18643e916680) returned with <nil>
I0624 21:41:07.337139       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=5.604787644 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-85e7ca04-47e3-4a07-a750-18643e916680" result_code="succeeded"
I0624 21:41:07.337153       1 utils.go:85] GRPC response: {}
I0624 21:41:07.338057       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "disk.csi.azure.com/request-id"="35205476-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "latency"=14960387 
... skipping 9 lines ...
I0624 21:41:11.930819       1 utils.go:78] GRPC call: /csi.v1.Controller/CreateVolume
I0624 21:41:11.930870       1 utils.go:79] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.disk.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.disk.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":10737418240},"name":"pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63","parameters":{"csi.storage.k8s.io/pv/name":"pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63","csi.storage.k8s.io/pvc/name":"pvc-v6fkn","csi.storage.k8s.io/pvc/namespace":"azuredisk-4577","skuname":"Premium_LRS"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]}
I0624 21:41:11.931360       1 crdprovisioner.go:234]  "msg"="Creating AzVolume CRI" "csi.storage.k8s.io/pv/name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "disk.csi.azure.com/request-id"="5d9ed0bb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" 
I0624 21:41:11.940510       1 crdprovisioner.go:242]  "msg"="Successfully created AzVolume CRI" "csi.storage.k8s.io/pv/name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "disk.csi.azure.com/request-id"="5d9ed0bb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" 
I0624 21:41:11.940530       1 conditionwatcher.go:113] Adding a condition function for azvolume (pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63)
I0624 21:41:11.941601       1 conditionwatcher.go:171] found a wait entry for object (pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63)
I0624 21:41:11.941616       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:41:11.952207       1 conditionwatcher.go:171] found a wait entry for object (pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63)
I0624 21:41:11.952366       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:41:11.952981       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "disk.csi.azure.com/request-id"="5d9ed0bb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "latency"=10392631 
I0624 21:41:11.953012       1 azvolume.go:157]  "msg"="Creating Volume..." "csi.storage.k8s.io/pv/name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "disk.csi.azure.com/request-id"="5d9ed0bb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" 
I0624 21:41:11.970105       1 azure_diskclient.go:139] Received error in disk.get.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63, error: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 404, RawError: {"error":{"code":"ResourceNotFound","message":"The Resource 'Microsoft.Compute/disks/pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63' under resource group 'kubetest-ybmpahy2' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"}}
I0624 21:41:11.970213       1 cloudprovisioner.go:246] begin to create disk(pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63) account type(Premium_LRS) rg(kubetest-ybmpahy2) location() size(10) selectedAvailabilityZone() maxShares(0)
I0624 21:41:12.016372       1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63 StorageAccountType:Premium_LRS Size:10
I0624 21:41:13.246081       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:41:14.399090       1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63 StorageAccountType:Premium_LRS Size:10
I0624 21:41:14.399198       1 cloudprovisioner.go:311]  "msg"="create disk(pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63) account type(Premium_LRS) rg(kubetest-ybmpahy2) location() size(10) tags(map[kubernetes.io-created-for-pv-name:pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63 kubernetes.io-created-for-pvc-name:pvc-v6fkn kubernetes.io-created-for-pvc-namespace:azuredisk-4577]) successfully" "csi.storage.k8s.io/pv/name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "disk.csi.azure.com/request-id"="5d9ed0bb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" 
I0624 21:41:14.399231       1 cloudprovisioner.go:145]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "disk.csi.azure.com/request-id"="5d9ed0bb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).CreateVolume" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "latency"=2446154643 
I0624 21:41:14.416391       1 conditionwatcher.go:171] found a wait entry for object (pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63)
I0624 21:41:14.416408       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:41:14.416476       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "disk.csi.azure.com/request-id"="5d9ed0bb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "latency"=2475863517 
I0624 21:41:14.416505       1 crdprovisioner.go:159]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "disk.csi.azure.com/request-id"="5d9ed0bb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).CreateVolume" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "latency"=2485180434 
I0624 21:41:14.416554       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=2.485260835 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" result_code="succeeded"
I0624 21:41:14.416569       1 utils.go:85] GRPC response: {"volume":{"accessible_topology":[{"segments":{"topology.disk.csi.azure.com/zone":""}}],"capacity_bytes":10737418240,"content_source":{"Type":{"Volume":{}}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63","csi.storage.k8s.io/pvc/name":"pvc-v6fkn","csi.storage.k8s.io/pvc/namespace":"azuredisk-4577","requestedsizegib":"10","skuname":"Premium_LRS"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63"}}
I0624 21:41:14.417929       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "disk.csi.azure.com/request-id"="5d9ed0bb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "latency"=18656534 
I0624 21:41:14.417977       1 azvolume.go:165]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "disk.csi.azure.com/request-id"="5d9ed0bb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerCreate.func3" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "latency"=2464911379 
I0624 21:41:14.418001       1 workflow.go:149]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "disk.csi.azure.com/request-id"="5d9ed0bb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerCreate" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "latency"=2475429111 
I0624 21:41:15.038713       1 utils.go:78] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I0624 21:41:15.038847       1 utils.go:79] GRPC request: {"node_id":"k8s-agentpool1-11903559-1","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"cachingMode":"ReadWrite","fsType":"","kind":"Managed"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63"}
I0624 21:41:15.042615       1 conditionwatcher.go:113] Adding a condition function for azvolumeattachments (pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63-k8s-agentpool1-11903559-1-attachment)
I0624 21:41:15.043965       1 conditionwatcher.go:171] found a wait entry for object (pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63-k8s-agentpool1-11903559-1-attachment)
I0624 21:41:15.044126       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:41:15.050195       1 conditionwatcher.go:171] found a wait entry for object (pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63-k8s-agentpool1-11903559-1-attachment)
I0624 21:41:15.050215       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:41:15.051361       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="5f79054c-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "latency"=6794185 
I0624 21:41:15.051416       1 attach_detach.go:171]  "msg"="Attaching volume" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="5f79054c-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" 
I0624 21:41:15.149624       1 cloudprovisioner.go:397]  "msg"="GetDiskLun returned: -1. Initiating attaching volume \"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63\" to node \"k8s-agentpool1-11903559-1\"." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="5f79054c-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" 
I0624 21:41:15.149682       1 cloudprovisioner.go:411]  "msg"="Trying to attach volume \"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63\" to node \"k8s-agentpool1-11903559-1\"." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="5f79054c-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" 
I0624 21:41:15.252766       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:41:16.150520       1 batch.go:224] "cloud-provider-azure: Delayed processing of batch due to start delay" type="batch" operation="attach_disk" key="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e|kubetest-ybmpahy2|k8s-agentpool1-11903559-1" delay="1s"
I0624 21:41:16.150773       1 azure_controller_common.go:306] azuredisk - trying to attach disks to node k8s-agentpool1-11903559-1: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63:AttachDiskOptions{diskName: "pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63", lun: 0}]
I0624 21:41:16.150832       1 azure_controller_standard.go:97] azureDisk - update(kubetest-ybmpahy2): vm(k8s-agentpool1-11903559-1) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63:AttachDiskOptions{diskName: "pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63", lun: 0}])
I0624 21:41:16.158190       1 conditionwatcher.go:171] found a wait entry for object (pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63-k8s-agentpool1-11903559-1-attachment)
I0624 21:41:16.158230       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:41:16.159049       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="5f79054c-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "latency"=1115549520 
I0624 21:41:16.159094       1 crdprovisioner.go:574]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="5f79054c-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLun" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "latency"=1116522032 
I0624 21:41:16.159118       1 crdprovisioner.go:410]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="5f79054c-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).PublishVolume" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "latency"=1120062877 
I0624 21:41:16.159155       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=1.120130078 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" node="k8s-agentpool1-11903559-1" result_code="succeeded"
I0624 21:41:16.159166       1 utils.go:85] GRPC response: {"publish_context":{"LUN":"0"}}
I0624 21:41:16.163233       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="5f79054c-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "latency"=12127853 
... skipping 47 lines ...
I0624 21:41:32.108838       1 utils.go:78] GRPC call: /csi.v1.Controller/CreateVolume
I0624 21:41:32.109196       1 utils.go:79] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.disk.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.disk.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":10737418240},"name":"pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c","parameters":{"csi.storage.k8s.io/pv/name":"pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c","csi.storage.k8s.io/pvc/name":"pvc-sb7g7","csi.storage.k8s.io/pvc/namespace":"azuredisk-4577","skuname":"Premium_LRS"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]}
I0624 21:41:32.109802       1 crdprovisioner.go:234]  "msg"="Creating AzVolume CRI" "csi.storage.k8s.io/pv/name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "disk.csi.azure.com/request-id"="69a5cd41-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" 
I0624 21:41:32.132088       1 crdprovisioner.go:242]  "msg"="Successfully created AzVolume CRI" "csi.storage.k8s.io/pv/name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "disk.csi.azure.com/request-id"="69a5cd41-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" 
I0624 21:41:32.132111       1 conditionwatcher.go:113] Adding a condition function for azvolume (pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c)
I0624 21:41:32.137811       1 conditionwatcher.go:171] found a wait entry for object (pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c)
I0624 21:41:32.137829       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:41:32.138737       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "disk.csi.azure.com/request-id"="69a5cd41-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "latency"=9784926 
I0624 21:41:32.138759       1 azvolume.go:157]  "msg"="Creating Volume..." "csi.storage.k8s.io/pv/name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "disk.csi.azure.com/request-id"="69a5cd41-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" 
I0624 21:41:32.157382       1 azure_diskclient.go:139] Received error in disk.get.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c, error: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 404, RawError: {"error":{"code":"ResourceNotFound","message":"The Resource 'Microsoft.Compute/disks/pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c' under resource group 'kubetest-ybmpahy2' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"}}
I0624 21:41:32.157490       1 cloudprovisioner.go:246] begin to create disk(pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c) account type(Premium_LRS) rg(kubetest-ybmpahy2) location() size(10) selectedAvailabilityZone() maxShares(0)
I0624 21:41:32.218765       1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c StorageAccountType:Premium_LRS Size:10
I0624 21:41:33.335924       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:41:34.482866       1 reflector.go:536] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:262: Watch close - *v1.Node total 32 items received
I0624 21:41:34.653529       1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c StorageAccountType:Premium_LRS Size:10
I0624 21:41:34.653789       1 cloudprovisioner.go:311]  "msg"="create disk(pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c) account type(Premium_LRS) rg(kubetest-ybmpahy2) location() size(10) tags(map[kubernetes.io-created-for-pv-name:pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c kubernetes.io-created-for-pvc-name:pvc-sb7g7 kubernetes.io-created-for-pvc-namespace:azuredisk-4577]) successfully" "csi.storage.k8s.io/pv/name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "disk.csi.azure.com/request-id"="69a5cd41-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" 
I0624 21:41:34.653874       1 cloudprovisioner.go:145]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "disk.csi.azure.com/request-id"="69a5cd41-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).CreateVolume" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "latency"=2515016488 
I0624 21:41:34.662756       1 conditionwatcher.go:171] found a wait entry for object (pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c)
I0624 21:41:34.662778       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:41:34.662848       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "disk.csi.azure.com/request-id"="69a5cd41-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "latency"=2530664191 
I0624 21:41:34.662987       1 crdprovisioner.go:159]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "disk.csi.azure.com/request-id"="69a5cd41-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).CreateVolume" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "latency"=2553197682 
I0624 21:41:34.663073       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=2.553339384 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" result_code="succeeded"
I0624 21:41:34.663149       1 utils.go:85] GRPC response: {"volume":{"accessible_topology":[{"segments":{"topology.disk.csi.azure.com/zone":""}}],"capacity_bytes":10737418240,"content_source":{"Type":{"Volume":{}}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c","csi.storage.k8s.io/pvc/name":"pvc-sb7g7","csi.storage.k8s.io/pvc/namespace":"azuredisk-4577","requestedsizegib":"10","skuname":"Premium_LRS"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c"}}
I0624 21:41:34.666081       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "disk.csi.azure.com/request-id"="69a5cd41-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "latency"=12102457 
I0624 21:41:34.666327       1 azvolume.go:165]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "disk.csi.azure.com/request-id"="69a5cd41-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerCreate.func3" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "latency"=2527527150 
I0624 21:41:34.666483       1 workflow.go:149]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "disk.csi.azure.com/request-id"="69a5cd41-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerCreate" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "latency"=2537539780 
I0624 21:41:35.204604       1 utils.go:78] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I0624 21:41:35.204651       1 utils.go:79] GRPC request: {"node_id":"k8s-agentpool1-11903559-0","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"cachingMode":"ReadWrite","fsType":"","kind":"Managed"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c"}
I0624 21:41:35.219980       1 conditionwatcher.go:113] Adding a condition function for azvolumeattachments (pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c-k8s-agentpool1-11903559-0-attachment)
I0624 21:41:35.228605       1 conditionwatcher.go:171] found a wait entry for object (pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c-k8s-agentpool1-11903559-0-attachment)
I0624 21:41:35.228629       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:41:35.231572       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="6b7e14f7-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "latency"=13062768 
I0624 21:41:35.231610       1 attach_detach.go:171]  "msg"="Attaching volume" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="6b7e14f7-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" 
I0624 21:41:35.344925       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:41:35.449707       1 cloudprovisioner.go:397]  "msg"="GetDiskLun returned: -1. Initiating attaching volume \"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c\" to node \"k8s-agentpool1-11903559-0\"." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="6b7e14f7-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" 
I0624 21:41:35.449754       1 cloudprovisioner.go:411]  "msg"="Trying to attach volume \"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c\" to node \"k8s-agentpool1-11903559-0\"." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="6b7e14f7-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" 
I0624 21:41:36.450117       1 batch.go:224] "cloud-provider-azure: Delayed processing of batch due to start delay" type="batch" operation="attach_disk" key="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e|kubetest-ybmpahy2|k8s-agentpool1-11903559-0" delay="1s"
I0624 21:41:36.450236       1 azure_controller_common.go:306] azuredisk - trying to attach disks to node k8s-agentpool1-11903559-0: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c:AttachDiskOptions{diskName: "pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c", lun: 0}]
I0624 21:41:36.450325       1 azure_controller_standard.go:97] azureDisk - update(kubetest-ybmpahy2): vm(k8s-agentpool1-11903559-0) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c:AttachDiskOptions{diskName: "pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c", lun: 0}])
I0624 21:41:36.459539       1 conditionwatcher.go:171] found a wait entry for object (pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c-k8s-agentpool1-11903559-0-attachment)
I0624 21:41:36.459560       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:41:36.459611       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="6b7e14f7-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "latency"=1239247077 
I0624 21:41:36.459735       1 crdprovisioner.go:574]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="6b7e14f7-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLun" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "latency"=1239690083 
I0624 21:41:36.459802       1 crdprovisioner.go:410]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="6b7e14f7-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).PublishVolume" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "latency"=1254858579 
I0624 21:41:36.459834       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=1.254996281 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" node="k8s-agentpool1-11903559-0" result_code="succeeded"
I0624 21:41:36.459847       1 utils.go:85] GRPC response: {"publish_context":{"LUN":"0"}}
I0624 21:41:36.461700       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="6b7e14f7-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "latency"=11217939 
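Note: the ControllerPublishVolume response above returns publish_context {"LUN":"0"}; the node-side NodeStageVolume call later relies on that value to locate the attached device. A minimal sketch of reading the LUN, assuming only the key name visible in the response (lunFromPublishContext is a hypothetical helper, not the driver's code; the device-lookup step itself is not shown):

// Hypothetical helper illustrating how the LUN string in publish_context
// could be parsed; a sketch only, not the azuredisk-csi-driver implementation.
package main

import (
	"fmt"
	"strconv"
)

func lunFromPublishContext(publishContext map[string]string) (int32, error) {
	raw, ok := publishContext["LUN"]
	if !ok {
		return 0, fmt.Errorf("publish_context is missing the LUN key")
	}
	lun, err := strconv.ParseInt(raw, 10, 32)
	if err != nil {
		return 0, fmt.Errorf("invalid LUN %q: %v", raw, err)
	}
	return int32(lun), nil
}

func main() {
	// Value taken from the GRPC response logged above.
	lun, err := lunFromPublishContext(map[string]string{"LUN": "0"})
	fmt.Println(lun, err) // 0 <nil>
}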
... skipping 47 lines ...
I0624 21:41:54.299833       1 utils.go:78] GRPC call: /csi.v1.Controller/CreateVolume
I0624 21:41:54.299858       1 utils.go:79] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.disk.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.disk.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":10737418240},"name":"pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31","parameters":{"csi.storage.k8s.io/pv/name":"pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31","csi.storage.k8s.io/pvc/name":"pvc-92k9j","csi.storage.k8s.io/pvc/namespace":"azuredisk-4577","skuname":"Premium_LRS"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]}
I0624 21:41:54.300049       1 crdprovisioner.go:234]  "msg"="Creating AzVolume CRI" "csi.storage.k8s.io/pv/name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "disk.csi.azure.com/request-id"="76dfc3ff-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" 
I0624 21:41:54.319978       1 crdprovisioner.go:242]  "msg"="Successfully created AzVolume CRI" "csi.storage.k8s.io/pv/name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "disk.csi.azure.com/request-id"="76dfc3ff-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" 
I0624 21:41:54.320006       1 conditionwatcher.go:113] Adding a condition function for azvolume (pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31)
I0624 21:41:54.321843       1 conditionwatcher.go:171] found a wait entry for object (pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31)
I0624 21:41:54.321861       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:41:54.329737       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "disk.csi.azure.com/request-id"="76dfc3ff-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "latency"=5252565 
I0624 21:41:54.329901       1 azvolume.go:157]  "msg"="Creating Volume..." "csi.storage.k8s.io/pv/name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "disk.csi.azure.com/request-id"="76dfc3ff-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" 
I0624 21:41:54.331622       1 conditionwatcher.go:171] found a wait entry for object (pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31)
I0624 21:41:54.332457       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:41:54.347812       1 azure_diskclient.go:139] Received error in disk.get.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31, error: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 404, RawError: {"error":{"code":"ResourceNotFound","message":"The Resource 'Microsoft.Compute/disks/pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31' under resource group 'kubetest-ybmpahy2' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"}}
I0624 21:41:54.348105       1 cloudprovisioner.go:246] begin to create disk(pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31) account type(Premium_LRS) rg(kubetest-ybmpahy2) location() size(10) selectedAvailabilityZone() maxShares(0)
I0624 21:41:54.407109       1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31 StorageAccountType:Premium_LRS Size:10
I0624 21:41:54.639409       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:41:54.639451       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:41:54.639498       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:41:54.639502       1 conditionwatcher.go:171] found a wait entry for object (pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31)
I0624 21:41:54.639509       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:41:55.368386       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:41:55.368429       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:41:55.368398       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:41:55.431298       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:41:56.826416       1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31 StorageAccountType:Premium_LRS Size:10
I0624 21:41:56.826515       1 cloudprovisioner.go:311]  "msg"="create disk(pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31) account type(Premium_LRS) rg(kubetest-ybmpahy2) location() size(10) tags(map[kubernetes.io-created-for-pv-name:pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31 kubernetes.io-created-for-pvc-name:pvc-92k9j kubernetes.io-created-for-pvc-namespace:azuredisk-4577]) successfully" "csi.storage.k8s.io/pv/name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "disk.csi.azure.com/request-id"="76dfc3ff-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" 
I0624 21:41:56.826550       1 cloudprovisioner.go:145]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "disk.csi.azure.com/request-id"="76dfc3ff-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).CreateVolume" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "latency"=2496483202 
I0624 21:41:56.838859       1 conditionwatcher.go:171] found a wait entry for object (pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31)
I0624 21:41:56.838883       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:41:56.839528       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "disk.csi.azure.com/request-id"="76dfc3ff-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "latency"=12935661 
I0624 21:41:56.839572       1 azvolume.go:165]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "disk.csi.azure.com/request-id"="76dfc3ff-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerCreate.func3" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "latency"=2509527264 
I0624 21:41:56.839716       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "disk.csi.azure.com/request-id"="76dfc3ff-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "latency"=2518838680 
I0624 21:41:56.839835       1 crdprovisioner.go:159]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "disk.csi.azure.com/request-id"="76dfc3ff-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).CreateVolume" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "latency"=2539729540 
I0624 21:41:56.839975       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=2.539962343 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" result_code="succeeded"
I0624 21:41:56.840001       1 utils.go:85] GRPC response: {"volume":{"accessible_topology":[{"segments":{"topology.disk.csi.azure.com/zone":""}}],"capacity_bytes":10737418240,"content_source":{"Type":{"Volume":{}}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31","csi.storage.k8s.io/pvc/name":"pvc-92k9j","csi.storage.k8s.io/pvc/namespace":"azuredisk-4577","requestedsizegib":"10","skuname":"Premium_LRS"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31"}}
I0624 21:41:56.840435       1 workflow.go:149]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "disk.csi.azure.com/request-id"="76dfc3ff-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerCreate" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "latency"=2515147633 
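Note: the CreateVolume request/response pair above maps directly onto the CSI controller API. A minimal sketch of issuing the same call with the standard CSI Go bindings follows; the socket path, timeout, and client wiring are assumptions for illustration (in this e2e run the CSI sidecars drive the call):

// A minimal sketch, assuming a controller socket at unix:///csi/csi.sock.
package main

import (
	"context"
	"log"
	"time"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	conn, err := grpc.Dial("unix:///csi/csi.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Mirrors the logged request: a 10 GiB Premium_LRS volume with a Mount
	// capability and access mode 7 (SINGLE_NODE_MULTI_WRITER).
	resp, err := csi.NewControllerClient(conn).CreateVolume(ctx, &csi.CreateVolumeRequest{
		Name:          "pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31",
		CapacityRange: &csi.CapacityRange{RequiredBytes: 10 * 1024 * 1024 * 1024},
		VolumeCapabilities: []*csi.VolumeCapability{{
			AccessType: &csi.VolumeCapability_Mount{Mount: &csi.VolumeCapability_MountVolume{}},
			AccessMode: &csi.VolumeCapability_AccessMode{
				Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_MULTI_WRITER,
			},
		}},
		Parameters: map[string]string{"skuname": "Premium_LRS"},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("created volume %s", resp.GetVolume().GetVolumeId())
}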
I0624 21:41:57.362995       1 utils.go:78] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I0624 21:41:57.363149       1 utils.go:79] GRPC request: {"node_id":"k8s-agentpool1-11903559-1","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"cachingMode":"ReadWrite","fsType":"","kind":"Managed"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31"}
I0624 21:41:57.370311       1 conditionwatcher.go:113] Adding a condition function for azvolumeattachments (pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31-k8s-agentpool1-11903559-1-attachment)
I0624 21:41:57.372045       1 conditionwatcher.go:171] found a wait entry for object (pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31-k8s-agentpool1-11903559-1-attachment)
I0624 21:41:57.372062       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:41:57.378685       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="78b3337a-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "latency"=5716871 
I0624 21:41:57.379335       1 attach_detach.go:171]  "msg"="Attaching volume" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="78b3337a-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" 
I0624 21:41:57.380246       1 conditionwatcher.go:171] found a wait entry for object (pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31-k8s-agentpool1-11903559-1-attachment)
I0624 21:41:57.380699       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:41:57.437136       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:41:57.468191       1 cloudprovisioner.go:397]  "msg"="GetDiskLun returned: -1. Initiating attaching volume \"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31\" to node \"k8s-agentpool1-11903559-1\"." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="78b3337a-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" 
I0624 21:41:57.468250       1 cloudprovisioner.go:411]  "msg"="Trying to attach volume \"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31\" to node \"k8s-agentpool1-11903559-1\"." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="78b3337a-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" 
I0624 21:41:58.469263       1 batch.go:224] "cloud-provider-azure: Delayed processing of batch due to start delay" type="batch" operation="attach_disk" key="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e|kubetest-ybmpahy2|k8s-agentpool1-11903559-1" delay="1s"
I0624 21:41:58.469359       1 azure_controller_common.go:306] azuredisk - trying to attach disks to node k8s-agentpool1-11903559-1: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31:AttachDiskOptions{diskName: "pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31", lun: 1}]
I0624 21:41:58.469464       1 azure_controller_standard.go:97] azureDisk - update(kubetest-ybmpahy2): vm(k8s-agentpool1-11903559-1) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31:AttachDiskOptions{diskName: "pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31", lun: 1}])
I0624 21:41:58.477154       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="78b3337a-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "latency"=7134189 
I0624 21:41:58.477984       1 conditionwatcher.go:171] found a wait entry for object (pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31-k8s-agentpool1-11903559-1-attachment)
I0624 21:41:58.478114       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:41:58.478255       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="78b3337a-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "latency"=1107868302 
I0624 21:41:58.478801       1 crdprovisioner.go:574]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="78b3337a-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLun" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "latency"=1108487209 
I0624 21:41:58.478857       1 crdprovisioner.go:410]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="78b3337a-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).PublishVolume" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "latency"=1115432396 
I0624 21:41:58.478930       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=1.115557798 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" node="k8s-agentpool1-11903559-1" result_code="succeeded"
I0624 21:41:58.478948       1 utils.go:85] GRPC response: {"publish_context":{"LUN":"1"}}
I0624 21:41:58.485363       1 utils.go:78] GRPC call: /csi.v1.Controller/ControllerPublishVolume
... skipping 71 lines ...
I0624 21:42:52.760452       1 utils.go:78] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume
I0624 21:42:52.760478       1 utils.go:79] GRPC request: {"node_id":"k8s-agentpool1-11903559-1","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31"}
I0624 21:42:52.760781       1 crdprovisioner.go:773]  "msg"="Requesting AzVolumeAttachment (pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31-k8s-agentpool1-11903559-1-attachment) detachment" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="78b3337a-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" 
I0624 21:42:52.768133       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="78b3337a-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "latency"=7238395 
I0624 21:42:52.773972       1 conditionwatcher.go:113] Adding a condition function for azvolumeattachments (pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31-k8s-agentpool1-11903559-1-attachment)
I0624 21:42:52.776577       1 conditionwatcher.go:171] found a wait entry for object (pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31-k8s-agentpool1-11903559-1-attachment)
I0624 21:42:52.776599       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:42:52.776682       1 replica.go:150]  "msg"="Garbage collection of AzVolumeAttachments for AzVolume (pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31) scheduled in 5m0s." "disk.csi.azure.com/request-id"="99ba97bc-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" 
I0624 21:42:52.784108       1 conditionwatcher.go:171] found a wait entry for object (pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31-k8s-agentpool1-11903559-1-attachment)
I0624 21:42:52.784294       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:42:52.785389       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="78b3337a-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "latency"=9903430 
I0624 21:42:52.785536       1 attach_detach.go:313]  "msg"="Detaching volume" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="78b3337a-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" 
I0624 21:42:52.785763       1 cloudprovisioner.go:467]  "msg"="Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31 from node k8s-agentpool1-11903559-1" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="78b3337a-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" 
I0624 21:42:53.681548       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:42:53.901947       1 batch.go:224] "cloud-provider-azure: Delayed processing of batch due to start delay" type="batch" operation="detach_disk" key="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e|kubetest-ybmpahy2|k8s-agentpool1-11903559-1" delay="1s"
I0624 21:42:53.902053       1 azure_controller_common.go:405] azuredisk - trying to detach disks from node k8s-agentpool1-11903559-1: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31:pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31]
I0624 21:42:53.902171       1 azure_controller_standard.go:154] azureDisk - detach disk: name pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31 uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31
I0624 21:42:53.902184       1 azure_controller_standard.go:184] azureDisk - update(kubetest-ybmpahy2): vm(k8s-agentpool1-11903559-1) - detach disk list(k8s-agentpool1-11903559-1)%!(EXTRA map[string]string=map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31:pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31])
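Note: the "Delayed processing of batch due to start delay" lines show attach/detach requests being grouped per VM key (subscription|resourceGroup|node) with a 1s start delay before a single VM update is issued. A minimal, generic sketch of that batching pattern, under the assumption that it is simply "collect per key, flush after the delay" (illustrative only, not the cloud-provider-azure implementation):

package main

import (
	"fmt"
	"sync"
	"time"
)

type batcher struct {
	mu      sync.Mutex
	delay   time.Duration
	pending map[string][]string // key ("sub|rg|node") -> disk URIs
}

func newBatcher(delay time.Duration) *batcher {
	return &batcher{delay: delay, pending: map[string][]string{}}
}

// Add queues a disk for the given node key; the first disk queued for a key
// starts a timer, and everything queued before it fires is processed as one
// VM update.
func (b *batcher) Add(key, diskURI string, process func(key string, disks []string)) {
	b.mu.Lock()
	defer b.mu.Unlock()
	first := len(b.pending[key]) == 0
	b.pending[key] = append(b.pending[key], diskURI)
	if first {
		time.AfterFunc(b.delay, func() {
			b.mu.Lock()
			disks := b.pending[key]
			delete(b.pending, key)
			b.mu.Unlock()
			process(key, disks)
		})
	}
}

func main() {
	b := newBatcher(time.Second)
	process := func(key string, disks []string) {
		fmt.Printf("detaching %d disk(s) from %s\n", len(disks), key)
	}
	b.Add("sub|rg|k8s-agentpool1-11903559-1", "pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31", process)
	time.Sleep(2 * time.Second)
}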
I0624 21:42:54.640939       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:42:54.640986       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:42:54.640940       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:42:54.641128       1 conditionwatcher.go:171] found a wait entry for object (pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31-k8s-agentpool1-11903559-1-attachment)
I0624 21:42:54.641136       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:42:55.369810       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:42:55.369814       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:42:55.369826       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:42:55.693942       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:42:57.703590       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:42:59.710854       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
... skipping 8 lines ...
I0624 21:43:14.479296       1 azure_controller_standard.go:201] azureDisk - update(kubetest-ybmpahy2): vm(k8s-agentpool1-11903559-1) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31:pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31]) returned with <nil>
I0624 21:43:14.479336       1 azure_controller_common.go:417] azuredisk - successfully detached disks from node k8s-agentpool1-11903559-1: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31:pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31]
I0624 21:43:14.479369       1 azure_controller_common.go:378] azureDisk - detach disk(pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31) succeeded
I0624 21:43:14.479410       1 cloudprovisioner.go:477]  "msg"="detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31 from node k8s-agentpool1-11903559-1 successfully" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="78b3337a-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" 
I0624 21:43:14.479447       1 cloudprovisioner.go:457]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="78b3337a-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).UnpublishVolume" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "latency"=21693801667 
I0624 21:43:14.487964       1 conditionwatcher.go:171] found a wait entry for object (pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31-k8s-agentpool1-11903559-1-attachment)
I0624 21:43:14.487986       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:43:14.488299       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="78b3337a-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "latency"=21713807028 
I0624 21:43:14.488471       1 crdprovisioner.go:796]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="78b3337a-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForDetach" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "latency"=21714397836 
I0624 21:43:14.488598       1 crdprovisioner.go:675]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="99b828ca-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).UnpublishVolume" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "latency"=21727848812 
I0624 21:43:14.488734       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=21.728091815 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" node="k8s-agentpool1-11903559-1" result_code="succeeded"
I0624 21:43:14.488845       1 utils.go:85] GRPC response: {}
I0624 21:43:14.492095       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="78b3337a-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "latency"=12599862 
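Note: the workflow lines record "latency" as an integer that reads as nanoseconds, while the metrics line reports latency_seconds; for example "latency"=21727848812 above corresponds to roughly 21.728 s, matching latency_seconds=21.728091815 (the metric is observed a moment later, hence the small difference). A trivial conversion, for reference:

package main

import (
	"fmt"
	"time"
)

func main() {
	// "latency"=21727848812 from the ControllerUnpublishVolume workflow line
	// above, interpreted as nanoseconds.
	latency := time.Duration(21727848812) * time.Nanosecond
	fmt.Printf("%.3f s\n", latency.Seconds()) // prints 21.728 s
}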
... skipping 2 lines ...
I0624 21:43:15.796029       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:43:16.551236       1 utils.go:78] GRPC call: /csi.v1.Controller/DeleteVolume
I0624 21:43:16.551497       1 utils.go:79] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31"}
I0624 21:43:16.551599       1 controllerserver_v2.go:200] deleting disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31)
I0624 21:43:16.551679       1 conditionwatcher.go:113] Adding a condition function for azvolume (pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31)
I0624 21:43:16.563476       1 conditionwatcher.go:171] found a wait entry for object (pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31)
I0624 21:43:16.565339       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:43:16.565293       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "disk.csi.azure.com/request-id"="a7e65eb3-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "latency"=13541774 
I0624 21:43:16.573666       1 conditionwatcher.go:171] found a wait entry for object (pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31)
I0624 21:43:16.574740       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:43:16.584144       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "disk.csi.azure.com/request-id"="76dfc3ff-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "latency"=9338220 
I0624 21:43:16.584187       1 azvolume.go:249]  "msg"="Deleting Volume..." "csi.storage.k8s.io/pv/name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "disk.csi.azure.com/request-id"="76dfc3ff-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" 
I0624 21:43:16.584707       1 common.go:1683]  "msg"="AzVolumeAttachment clean up requested by azvolume-controller for AzVolume (pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31)" "csi.storage.k8s.io/pv/name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "disk.csi.azure.com/request-id"="76dfc3ff-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" 
I0624 21:43:16.584800       1 common.go:1788]  "msg"="Getting AzVolumeAttachment list for volume (pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31)" "csi.storage.k8s.io/pv/name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "disk.csi.azure.com/request-id"="76dfc3ff-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" 
I0624 21:43:16.584897       1 common.go:1817]  "msg"="Label selector is: disk.csi.azure.com/volume-name=pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31." "csi.storage.k8s.io/pv/name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "disk.csi.azure.com/request-id"="76dfc3ff-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" 
I0624 21:43:16.585121       1 common.go:1681]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "disk.csi.azure.com/request-id"="76dfc3ff-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*SharedState).cleanUpAzVolumeAttachmentByVolume" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "latency"=431006 
I0624 21:43:16.586459       1 conditionwatcher.go:171] found a wait entry for object (pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31)
I0624 21:43:16.586479       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:43:17.482169       1 reflector.go:536] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:262: Watch close - *v1beta2.AzVolumeAttachment total 64 items received
I0624 21:43:17.804173       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:43:19.809931       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:43:21.816182       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:43:21.857013       1 azure_managedDiskController.go:303] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31
I0624 21:43:21.857323       1 cloudprovisioner.go:328]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "disk.csi.azure.com/request-id"="76dfc3ff-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).DeleteVolume" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "latency"=5271927690 
I0624 21:43:21.868734       1 conditionwatcher.go:171] found a wait entry for object (pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31)
I0624 21:43:21.869161       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:43:21.869980       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "disk.csi.azure.com/request-id"="a7e65eb3-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "latency"=5295394506 
I0624 21:43:21.870909       1 crdprovisioner.go:306]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "disk.csi.azure.com/request-id"="a7e65eb3-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).DeleteVolume" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "latency"=5319187084 
I0624 21:43:21.871067       1 controllerserver_v2.go:202] delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31) returned with <nil>
I0624 21:43:21.871224       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=5.319606278 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" result_code="succeeded"
I0624 21:43:21.872052       1 utils.go:85] GRPC response: {}
I0624 21:43:21.872008       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "disk.csi.azure.com/request-id"="76dfc3ff-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-b0bf5d49-5008-40cd-80c8-39cd66f10c31" "latency"=14602348 
... skipping 36 lines ...
I0624 21:43:54.788157       1 utils.go:79] GRPC request: {"node_id":"k8s-agentpool1-11903559-0","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c"}
I0624 21:43:54.788546       1 crdprovisioner.go:773]  "msg"="Requesting AzVolumeAttachment (pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c-k8s-agentpool1-11903559-0-attachment) detachment" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="6b7e14f7-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" 
I0624 21:43:54.793822       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="6b7e14f7-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "latency"=5218590 
I0624 21:43:54.801188       1 replica.go:150]  "msg"="Garbage collection of AzVolumeAttachments for AzVolume (pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c) scheduled in 5m0s." "disk.csi.azure.com/request-id"="beb2c66d-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" 
I0624 21:43:54.803482       1 conditionwatcher.go:113] Adding a condition function for azvolumeattachments (pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c-k8s-agentpool1-11903559-0-attachment)
I0624 21:43:54.815080       1 conditionwatcher.go:171] found a wait entry for object (pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c-k8s-agentpool1-11903559-0-attachment)
I0624 21:43:54.815255       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:43:54.815805       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="6b7e14f7-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "latency"=15068760 
I0624 21:43:54.815860       1 attach_detach.go:313]  "msg"="Detaching volume" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="6b7e14f7-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" 
I0624 21:43:54.815928       1 cloudprovisioner.go:467]  "msg"="Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c from node k8s-agentpool1-11903559-0" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="6b7e14f7-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" 
I0624 21:43:55.371225       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:43:55.371281       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:43:55.371292       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
... skipping 16 lines ...
I0624 21:44:11.634589       1 cloudprovisioner.go:477]  "msg"="detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c from node k8s-agentpool1-11903559-0 successfully" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="6b7e14f7-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" 
I0624 21:44:11.634657       1 cloudprovisioner.go:457]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="6b7e14f7-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).UnpublishVolume" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "latency"=16818716755 
I0624 21:44:11.649697       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="6b7e14f7-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "latency"=14964194 
I0624 21:44:11.649791       1 attach_detach.go:319]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="6b7e14f7-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAttachDetach).triggerDetach.func3" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "latency"=16833888051 
I0624 21:44:11.649833       1 workflow.go:149]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="6b7e14f7-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAttachDetach).triggerDetach" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "latency"=16849107214 
I0624 21:44:11.650745       1 conditionwatcher.go:171] found a wait entry for object (pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c-k8s-agentpool1-11903559-0-attachment)
I0624 21:44:11.650785       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:44:11.650927       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="6b7e14f7-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "latency"=16847279179 
I0624 21:44:11.651071       1 crdprovisioner.go:796]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="6b7e14f7-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForDetach" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "latency"=16847535682 
I0624 21:44:11.651104       1 crdprovisioner.go:675]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="beb0d996-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).UnpublishVolume" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "latency"=16862590841 
I0624 21:44:11.651255       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=16.862928047 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" node="k8s-agentpool1-11903559-0" result_code="succeeded"
I0624 21:44:11.651273       1 utils.go:85] GRPC response: {}
I0624 21:44:11.663078       1 utils.go:78] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume
... skipping 17 lines ...
I0624 21:44:25.371913       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:44:25.630330       1 utils.go:78] GRPC call: /csi.v1.Controller/DeleteVolume
I0624 21:44:25.630366       1 utils.go:79] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c"}
I0624 21:44:25.630437       1 controllerserver_v2.go:200] deleting disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c)
I0624 21:44:25.630511       1 conditionwatcher.go:113] Adding a condition function for azvolume (pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c)
I0624 21:44:25.650514       1 conditionwatcher.go:171] found a wait entry for object (pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c)
I0624 21:44:25.650547       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:44:25.651790       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "disk.csi.azure.com/request-id"="d112f6f6-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "latency"=21231363 
I0624 21:44:25.660628       1 conditionwatcher.go:171] found a wait entry for object (pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c)
I0624 21:44:25.660700       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:44:25.670229       1 conditionwatcher.go:171] found a wait entry for object (pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c)
I0624 21:44:25.670434       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:44:25.671307       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "disk.csi.azure.com/request-id"="69a5cd41-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "latency"=10932935 
I0624 21:44:25.671344       1 azvolume.go:249]  "msg"="Deleting Volume..." "csi.storage.k8s.io/pv/name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "disk.csi.azure.com/request-id"="69a5cd41-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" 
I0624 21:44:25.671509       1 common.go:1683]  "msg"="AzVolumeAttachment clean up requested by azvolume-controller for AzVolume (pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c)" "csi.storage.k8s.io/pv/name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "disk.csi.azure.com/request-id"="69a5cd41-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" 
I0624 21:44:25.671547       1 common.go:1788]  "msg"="Getting AzVolumeAttachment list for volume (pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c)" "csi.storage.k8s.io/pv/name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "disk.csi.azure.com/request-id"="69a5cd41-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" 
I0624 21:44:25.671692       1 common.go:1817]  "msg"="Label selector is: disk.csi.azure.com/volume-name=pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c." "csi.storage.k8s.io/pv/name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "disk.csi.azure.com/request-id"="69a5cd41-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" 
I0624 21:44:25.671877       1 common.go:1681]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "disk.csi.azure.com/request-id"="69a5cd41-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*SharedState).cleanUpAzVolumeAttachmentByVolume" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "latency"=364505 
... skipping 5 lines ...
I0624 21:44:30.936764       1 azure_managedDiskController.go:303] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c
I0624 21:44:30.937052       1 cloudprovisioner.go:328]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "disk.csi.azure.com/request-id"="69a5cd41-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).DeleteVolume" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "latency"=5264777722 
I0624 21:44:30.953773       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "disk.csi.azure.com/request-id"="69a5cd41-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "latency"=16657106 
I0624 21:44:30.954094       1 azvolume.go:257]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "disk.csi.azure.com/request-id"="69a5cd41-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerDelete.func4" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "latency"=5282610942 
I0624 21:44:30.954368       1 workflow.go:149]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "disk.csi.azure.com/request-id"="69a5cd41-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerDelete" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "latency"=5293926282 
I0624 21:44:30.955373       1 conditionwatcher.go:171] found a wait entry for object (pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c)
I0624 21:44:30.956097       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:44:30.956189       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "disk.csi.azure.com/request-id"="d112f6f6-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "latency"=5297936932 
I0624 21:44:30.956220       1 crdprovisioner.go:306]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "disk.csi.azure.com/request-id"="d112f6f6-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).DeleteVolume" "disk.csi.azure.com/volume-name"="pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" "latency"=5325714375 
I0624 21:44:30.956229       1 controllerserver_v2.go:202] delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c) returned with <nil>
I0624 21:44:30.956282       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=5.325802776 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-33e32aba-2aeb-45de-833a-d3e4b7a96d4c" result_code="succeeded"
I0624 21:44:30.956296       1 utils.go:85] GRPC response: {}
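Note: the driver's workflow lines use a structured "key"="value" format (request-id, volume-name, latency, ...), which makes them easy to post-process when triaging a run like this one. A small, illustrative parser, assuming only the format visible in the lines above:

package main

import (
	"fmt"
	"regexp"
)

// Matches either "key"="quoted value" or "key"=bareValue, as seen in the
// workflow log lines above.
var kvRe = regexp.MustCompile(`"([^"]+)"=("([^"]*)"|\S+)`)

func parseWorkflowLine(line string) map[string]string {
	out := map[string]string{}
	for _, m := range kvRe.FindAllStringSubmatch(line, -1) {
		val := m[2]
		if m[3] != "" || val == `""` {
			val = m[3] // strip the surrounding quotes
		}
		out[m[1]] = val
	}
	return out
}

func main() {
	line := `"msg"="Workflow completed with success." "disk.csi.azure.com/request-id"="d112f6f6-f406-11ec-88aa-0022483e7c98" "latency"=5297936932`
	kv := parseWorkflowLine(line)
	fmt.Println(kv["disk.csi.azure.com/request-id"], kv["latency"])
}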
I0624 21:44:32.106233       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
... skipping 29 lines ...
I0624 21:45:05.626583       1 common.go:1817]  "msg"="Label selector is: disk.csi.azure.com/requested-role=Replica,disk.csi.azure.com/volume-name=pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63." "disk.csi.azure.com/request-id"="e8e9d281-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" 
I0624 21:45:05.626749       1 common.go:1681]  "msg"="Workflow completed with success." "disk.csi.azure.com/request-id"="e8e9d281-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*SharedState).cleanUpAzVolumeAttachmentByVolume" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "latency"=506406 
I0624 21:45:05.638168       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="5f79054c-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "latency"=16401500 
I0624 21:45:05.646022       1 replica.go:150]  "msg"="Garbage collection of AzVolumeAttachments for AzVolume (pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63) scheduled in 5m0s." "disk.csi.azure.com/request-id"="e8ecd7d7-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" 
I0624 21:45:05.646278       1 conditionwatcher.go:113] Adding a condition function for azvolumeattachments (pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63-k8s-agentpool1-11903559-1-attachment)
I0624 21:45:05.650782       1 conditionwatcher.go:171] found a wait entry for object (pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63-k8s-agentpool1-11903559-1-attachment)
I0624 21:45:05.650820       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:45:05.653483       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="5f79054c-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "latency"=7943897 
I0624 21:45:05.653543       1 attach_detach.go:313]  "msg"="Detaching volume" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="5f79054c-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" 
I0624 21:45:05.653735       1 cloudprovisioner.go:467]  "msg"="Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63 from node k8s-agentpool1-11903559-1" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="5f79054c-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" 
I0624 21:45:06.263605       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:45:06.733791       1 batch.go:224] "cloud-provider-azure: Delayed processing of batch due to start delay" type="batch" operation="detach_disk" key="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e|kubetest-ybmpahy2|k8s-agentpool1-11903559-1" delay="1s"
I0624 21:45:06.733939       1 azure_controller_common.go:405] azuredisk - trying to detach disks from node k8s-agentpool1-11903559-1: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63:pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63]
... skipping 9 lines ...
I0624 21:45:22.224158       1 azure_controller_standard.go:201] azureDisk - update(kubetest-ybmpahy2): vm(k8s-agentpool1-11903559-1) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63:pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63]) returned with <nil>
I0624 21:45:22.224197       1 azure_controller_common.go:417] azuredisk - successfully detached disks from node k8s-agentpool1-11903559-1: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63:pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63]
I0624 21:45:22.224433       1 azure_controller_common.go:378] azureDisk - detach disk(pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63) succeeded
I0624 21:45:22.224562       1 cloudprovisioner.go:477]  "msg"="detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63 from node k8s-agentpool1-11903559-1 successfully" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="5f79054c-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" 
I0624 21:45:22.224635       1 cloudprovisioner.go:457]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="5f79054c-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).UnpublishVolume" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "latency"=16570892352 
I0624 21:45:22.236343       1 conditionwatcher.go:171] found a wait entry for object (pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63-k8s-agentpool1-11903559-1-attachment)
I0624 21:45:22.236365       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:45:22.236500       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="5f79054c-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "latency"=16590055586 
I0624 21:45:22.236542       1 crdprovisioner.go:796]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="5f79054c-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForDetach" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "latency"=16590298589 
I0624 21:45:22.236571       1 crdprovisioner.go:675]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="e8e91d5e-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).UnpublishVolume" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "latency"=16615015191 
I0624 21:45:22.236625       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=16.615161292 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" node="k8s-agentpool1-11903559-1" result_code="succeeded"
I0624 21:45:22.236659       1 utils.go:85] GRPC response: {}
I0624 21:45:22.239135       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="5f79054c-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "latency"=14451878 
... skipping 21 lines ...
I0624 21:45:36.394966       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:45:36.635852       1 utils.go:78] GRPC call: /csi.v1.Controller/DeleteVolume
I0624 21:45:36.635873       1 utils.go:79] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63"}
I0624 21:45:36.636186       1 controllerserver_v2.go:200] deleting disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63)
I0624 21:45:36.636327       1 conditionwatcher.go:113] Adding a condition function for azvolume (pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63)
I0624 21:45:36.647080       1 conditionwatcher.go:171] found a wait entry for object (pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63)
I0624 21:45:36.647237       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:45:36.648397       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "disk.csi.azure.com/request-id"="fb65983c-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "latency"=11979362 
I0624 21:45:36.654336       1 conditionwatcher.go:171] found a wait entry for object (pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63)
I0624 21:45:36.654535       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:45:36.662922       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "disk.csi.azure.com/request-id"="5d9ed0bb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "latency"=6879994 
I0624 21:45:36.663099       1 azvolume.go:249]  "msg"="Deleting Volume..." "csi.storage.k8s.io/pv/name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "disk.csi.azure.com/request-id"="5d9ed0bb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" 
I0624 21:45:36.663248       1 common.go:1683]  "msg"="AzVolumeAttachment clean up requested by azvolume-controller for AzVolume (pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63)" "csi.storage.k8s.io/pv/name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "disk.csi.azure.com/request-id"="5d9ed0bb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" 
I0624 21:45:36.663393       1 common.go:1788]  "msg"="Getting AzVolumeAttachment list for volume (pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63)" "csi.storage.k8s.io/pv/name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "disk.csi.azure.com/request-id"="5d9ed0bb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" 
I0624 21:45:36.663452       1 common.go:1817]  "msg"="Label selector is: disk.csi.azure.com/volume-name=pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63." "csi.storage.k8s.io/pv/name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "disk.csi.azure.com/request-id"="5d9ed0bb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" 
I0624 21:45:36.663496       1 common.go:1681]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "disk.csi.azure.com/request-id"="5d9ed0bb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*SharedState).cleanUpAzVolumeAttachmentByVolume" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "latency"=249104 
I0624 21:45:36.665592       1 conditionwatcher.go:171] found a wait entry for object (pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63)
I0624 21:45:36.665609       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:45:38.405881       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:45:39.146517       1 replica.go:169]  "msg"="Checking for replicas to be created after garbage collection." "disk.csi.azure.com/request-id"="4a1420ec-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" 
I0624 21:45:39.146734       1 common.go:2018]  "msg"="Workflow completed with success." "disk.csi.azure.com/request-id"="4a1420ec-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*SharedState).tryCreateFailedReplicas" "disk.csi.azure.com/volume-name"="pvc-85e7ca04-47e3-4a07-a750-18643e916680" "latency"=10601 
I0624 21:45:40.413251       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:45:41.955029       1 azure_managedDiskController.go:303] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63
I0624 21:45:41.955319       1 cloudprovisioner.go:328]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "disk.csi.azure.com/request-id"="5d9ed0bb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).DeleteVolume" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "latency"=5291492862 
I0624 21:45:41.979050       1 conditionwatcher.go:171] found a wait entry for object (pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63)
I0624 21:45:41.979587       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:45:41.979577       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "disk.csi.azure.com/request-id"="5d9ed0bb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "latency"=24193728 
I0624 21:45:41.979939       1 azvolume.go:257]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "disk.csi.azure.com/request-id"="5d9ed0bb-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerDelete.func4" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "latency"=5316751805 
I0624 21:45:41.979789       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "disk.csi.azure.com/request-id"="fb65983c-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "latency"=5324913115 
I0624 21:45:41.980141       1 crdprovisioner.go:306]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "disk.csi.azure.com/request-id"="fb65983c-f406-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).DeleteVolume" "disk.csi.azure.com/volume-name"="pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" "latency"=5343797372 
I0624 21:45:41.980159       1 controllerserver_v2.go:202] delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63) returned with <nil>
I0624 21:45:41.980211       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=5.343988775 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-8e3f00e8-d4b9-4964-b65b-77e365709f63" result_code="succeeded"
... skipping 9 lines ...
I0624 21:45:45.774045       1 utils.go:78] GRPC call: /csi.v1.Controller/CreateVolume
I0624 21:45:45.774089       1 utils.go:79] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.disk.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.disk.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":10737418240},"name":"pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f","parameters":{"csi.storage.k8s.io/pv/name":"pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f","csi.storage.k8s.io/pvc/name":"pvc-hnplq","csi.storage.k8s.io/pvc/namespace":"azuredisk-1089","skuname":"StandardSSD_LRS"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]}
I0624 21:45:45.774559       1 crdprovisioner.go:234]  "msg"="Creating AzVolume CRI" "csi.storage.k8s.io/pv/name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "disk.csi.azure.com/request-id"="00d7f854-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" 
I0624 21:45:45.782684       1 crdprovisioner.go:242]  "msg"="Successfully created AzVolume CRI" "csi.storage.k8s.io/pv/name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "disk.csi.azure.com/request-id"="00d7f854-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" 
I0624 21:45:45.782842       1 conditionwatcher.go:113] Adding a condition function for azvolume (pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f)
I0624 21:45:45.783844       1 conditionwatcher.go:171] found a wait entry for object (pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f)
I0624 21:45:45.784006       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:45:45.791375       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "disk.csi.azure.com/request-id"="00d7f854-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=6274797 
I0624 21:45:45.791490       1 azvolume.go:157]  "msg"="Creating Volume..." "csi.storage.k8s.io/pv/name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "disk.csi.azure.com/request-id"="00d7f854-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" 
I0624 21:45:45.792411       1 conditionwatcher.go:171] found a wait entry for object (pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f)
I0624 21:45:45.792510       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:45:45.808544       1 azure_diskclient.go:139] Received error in disk.get.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f, error: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 404, RawError: {"error":{"code":"ResourceNotFound","message":"The Resource 'Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f' under resource group 'kubetest-ybmpahy2' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"}}
I0624 21:45:45.808837       1 cloudprovisioner.go:246] begin to create disk(pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f) account type(StandardSSD_LRS) rg(kubetest-ybmpahy2) location() size(10) selectedAvailabilityZone() maxShares(0)
I0624 21:45:45.895104       1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f StorageAccountType:StandardSSD_LRS Size:10
I0624 21:45:46.435193       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:45:48.443270       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:45:48.665642       1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f StorageAccountType:StandardSSD_LRS Size:10
I0624 21:45:48.665713       1 cloudprovisioner.go:311]  "msg"="create disk(pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f) account type(StandardSSD_LRS) rg(kubetest-ybmpahy2) location() size(10) tags(map[kubernetes.io-created-for-pv-name:pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f kubernetes.io-created-for-pvc-name:pvc-hnplq kubernetes.io-created-for-pvc-namespace:azuredisk-1089]) successfully" "csi.storage.k8s.io/pv/name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "disk.csi.azure.com/request-id"="00d7f854-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" 
I0624 21:45:48.665745       1 cloudprovisioner.go:145]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "disk.csi.azure.com/request-id"="00d7f854-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).CreateVolume" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=2874067149 
I0624 21:45:48.678319       1 conditionwatcher.go:171] found a wait entry for object (pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f)
I0624 21:45:48.678339       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:45:48.678379       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "disk.csi.azure.com/request-id"="00d7f854-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=2895388780 
I0624 21:45:48.678449       1 crdprovisioner.go:159]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "disk.csi.azure.com/request-id"="00d7f854-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).CreateVolume" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=2903894310 
I0624 21:45:48.678535       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=2.903997912 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" result_code="succeeded"
I0624 21:45:48.678548       1 utils.go:85] GRPC response: {"volume":{"accessible_topology":[{"segments":{"topology.disk.csi.azure.com/zone":""}}],"capacity_bytes":10737418240,"content_source":{"Type":{"Volume":{}}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f","csi.storage.k8s.io/pvc/name":"pvc-hnplq","csi.storage.k8s.io/pvc/namespace":"azuredisk-1089","requestedsizegib":"10","skuname":"StandardSSD_LRS"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f"}}
I0624 21:45:48.679409       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "disk.csi.azure.com/request-id"="00d7f854-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=13563710 
I0624 21:45:48.679451       1 azvolume.go:165]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "disk.csi.azure.com/request-id"="00d7f854-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerCreate.func3" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=2887812163 
I0624 21:45:48.679472       1 workflow.go:149]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "disk.csi.azure.com/request-id"="00d7f854-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerCreate" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=2894387764 
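
Editor's note: the exchange above is a standard CSI /csi.v1.Controller/CreateVolume call followed by its response. Below is a hedged sketch of issuing the same call with the CSI Go bindings over the controller's gRPC endpoint; the socket path and the access mode constant are assumptions (the logged request uses mode 7), not the driver's or the sidecar's actual configuration.

    package main

    import (
        "context"
        "log"

        "github.com/container-storage-interface/spec/lib/go/csi"
        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
    )

    func main() {
        // The controller socket path is an assumption; deployments differ.
        conn, err := grpc.Dial("unix:///csi/csi.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        client := csi.NewControllerClient(conn)

        // Mirrors the CreateVolume request logged above: a 10 GiB
        // StandardSSD_LRS disk with a mount-style volume capability.
        resp, err := client.CreateVolume(context.Background(), &csi.CreateVolumeRequest{
            Name: "pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f",
            CapacityRange: &csi.CapacityRange{
                RequiredBytes: 10 * 1024 * 1024 * 1024,
            },
            Parameters: map[string]string{
                "skuname": "StandardSSD_LRS",
            },
            VolumeCapabilities: []*csi.VolumeCapability{{
                AccessType: &csi.VolumeCapability_Mount{
                    Mount: &csi.VolumeCapability_MountVolume{},
                },
                // SINGLE_NODE_WRITER is used purely as an illustrative value.
                AccessMode: &csi.VolumeCapability_AccessMode{
                    Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_WRITER,
                },
            }},
        })
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("volume_id: %s", resp.GetVolume().GetVolumeId())
    }
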
I0624 21:45:48.806520       1 utils.go:78] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I0624 21:45:48.806558       1 utils.go:79] GRPC request: {"node_id":"k8s-agentpool1-11903559-1","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"cachingMode":"ReadWrite","fsType":"","kind":"Managed"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f"}
I0624 21:45:48.812422       1 conditionwatcher.go:113] Adding a condition function for azvolumeattachments (pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f-k8s-agentpool1-11903559-1-attachment)
I0624 21:45:48.814430       1 conditionwatcher.go:171] found a wait entry for object (pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f-k8s-agentpool1-11903559-1-attachment)
I0624 21:45:48.814449       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:45:48.820015       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="02a6a994-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=7280912 
I0624 21:45:48.820051       1 attach_detach.go:171]  "msg"="Attaching volume" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="02a6a994-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" 
I0624 21:45:48.820260       1 conditionwatcher.go:171] found a wait entry for object (pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f-k8s-agentpool1-11903559-1-attachment)
I0624 21:45:48.820279       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:45:48.919191       1 cloudprovisioner.go:397]  "msg"="GetDiskLun returned: -1. Initiating attaching volume \"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f\" to node \"k8s-agentpool1-11903559-1\"." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="02a6a994-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" 
I0624 21:45:48.919238       1 cloudprovisioner.go:411]  "msg"="Trying to attach volume \"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f\" to node \"k8s-agentpool1-11903559-1\"." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="02a6a994-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" 
I0624 21:45:49.919506       1 batch.go:224] "cloud-provider-azure: Delayed processing of batch due to start delay" type="batch" operation="attach_disk" key="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e|kubetest-ybmpahy2|k8s-agentpool1-11903559-1" delay="1s"
I0624 21:45:49.919606       1 azure_controller_common.go:306] azuredisk - trying to attach disks to node k8s-agentpool1-11903559-1: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f:AttachDiskOptions{diskName: "pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f", lun: 0}]
I0624 21:45:49.919948       1 azure_controller_standard.go:97] azureDisk - update(kubetest-ybmpahy2): vm(k8s-agentpool1-11903559-1) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f:AttachDiskOptions{diskName: "pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f", lun: 0}])
I0624 21:45:49.930081       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="02a6a994-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=10205758 
I0624 21:45:49.930317       1 conditionwatcher.go:171] found a wait entry for object (pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f-k8s-agentpool1-11903559-1-attachment)
I0624 21:45:49.930332       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:45:49.930483       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="02a6a994-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=1117851088 
I0624 21:45:49.930587       1 crdprovisioner.go:574]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="02a6a994-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLun" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=1118152293 
I0624 21:45:49.930726       1 crdprovisioner.go:410]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="02a6a994-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).PublishVolume" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=1123899782 
I0624 21:45:49.930768       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=1.123985483 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" node="k8s-agentpool1-11903559-1" result_code="succeeded"
I0624 21:45:49.930786       1 utils.go:85] GRPC response: {"publish_context":{"LUN":"0"}}
I0624 21:45:49.944460       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/request-id"="0352e5b7-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "latency"=8881138 
... skipping 78 lines ...
I0624 21:46:47.034419       1 conditionwatcher.go:113] Adding a condition function for azvolumeattachments (pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f-k8s-agentpool1-11903559-1-attachment)
I0624 21:46:47.035035       1 replica.go:150]  "msg"="Garbage collection of AzVolumeAttachments for AzVolume (pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f) scheduled in 5m0s." "disk.csi.azure.com/request-id"="255b9482-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" 
I0624 21:46:47.040205       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="02a6a994-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=5187369 
I0624 21:46:47.040279       1 attach_detach.go:313]  "msg"="Detaching volume" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="02a6a994-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" 
I0624 21:46:47.040613       1 cloudprovisioner.go:467]  "msg"="Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f from node k8s-agentpool1-11903559-1" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="02a6a994-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" 
I0624 21:46:47.041574       1 conditionwatcher.go:171] found a wait entry for object (pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f-k8s-agentpool1-11903559-1-attachment)
I0624 21:46:47.041611       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:46:48.153350       1 batch.go:224] "cloud-provider-azure: Delayed processing of batch due to start delay" type="batch" operation="detach_disk" key="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e|kubetest-ybmpahy2|k8s-agentpool1-11903559-1" delay="1s"
I0624 21:46:48.153431       1 azure_controller_common.go:405] azuredisk - trying to detach disks from node k8s-agentpool1-11903559-1: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f:pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f]
I0624 21:46:48.153985       1 azure_controller_standard.go:154] azureDisk - detach disk: name pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f
I0624 21:46:48.154012       1 azure_controller_standard.go:184] azureDisk - update(kubetest-ybmpahy2): vm(k8s-agentpool1-11903559-1) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f:pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f])

I0624 21:46:48.694539       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:46:50.701215       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:46:52.708978       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:46:54.648019       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:46:54.648087       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:46:54.648181       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:46:54.648235       1 conditionwatcher.go:171] found a wait entry for object (pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f-k8s-agentpool1-11903559-1-attachment)
I0624 21:46:54.648244       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:46:54.716338       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:46:55.375579       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:46:55.375581       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:46:55.375595       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
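
Editor's note: the bursts of "reflector.go:382 ... forcing resync" come from the driver's shared informer factories reaching their resync period, at which point cached objects are re-delivered to the event handlers. A generic client-go sketch with an assumed 30-second resync period is shown below; the Pod informer is only a stand-in for the driver's generated AzVolume/AzVolumeAttachment informers.

    package main

    import (
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(cfg)

        // Every resync period the reflector re-queues the cached objects,
        // which is what the "forcing resync" log lines record.
        factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)

        podInformer := factory.Core().V1().Pods().Informer()
        podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
            UpdateFunc: func(oldObj, newObj interface{}) {
                // Invoked both on real updates and on periodic resyncs.
            },
        })

        stop := make(chan struct{})
        factory.Start(stop)
        factory.WaitForCacheSync(stop)
        <-stop
    }
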
I0624 21:46:56.722228       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:46:58.734220       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
... skipping 4 lines ...
I0624 21:47:08.712483       1 azure_controller_standard.go:201] azureDisk - update(kubetest-ybmpahy2): vm(k8s-agentpool1-11903559-1) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f:pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f]) returned with <nil>
I0624 21:47:08.712555       1 azure_controller_common.go:417] azuredisk - successfully detached disks from node k8s-agentpool1-11903559-1: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f:pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f]
I0624 21:47:08.712853       1 azure_controller_common.go:378] azureDisk - detach disk(pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f) succeeded
I0624 21:47:08.712952       1 cloudprovisioner.go:477]  "msg"="detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f from node k8s-agentpool1-11903559-1 successfully" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="02a6a994-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" 
I0624 21:47:08.713026       1 cloudprovisioner.go:457]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="02a6a994-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).UnpublishVolume" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=21672667118 
I0624 21:47:08.724523       1 conditionwatcher.go:171] found a wait entry for object (pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f-k8s-agentpool1-11903559-1-attachment)
I0624 21:47:08.724537       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:47:08.724594       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="02a6a994-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=21690088144 
I0624 21:47:08.724626       1 crdprovisioner.go:796]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="02a6a994-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForDetach" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=21690222646 
I0624 21:47:08.724659       1 crdprovisioner.go:675]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="2558bc4d-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).UnpublishVolume" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=21708282585 
I0624 21:47:08.724687       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=21.708359587 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" node="k8s-agentpool1-11903559-1" result_code="succeeded"
I0624 21:47:08.724699       1 utils.go:85] GRPC response: {}
I0624 21:47:08.725199       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="02a6a994-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=12121556 
... skipping 4 lines ...
I0624 21:47:10.474205       1 utils.go:79] GRPC request: {"node_id":"k8s-agentpool1-11903559-0","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"cachingMode":"ReadWrite","fsType":"","kind":"Managed"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f"}
I0624 21:47:10.480354       1 replica.go:156]  "msg"="Workflow completed with success." "disk.csi.azure.com/request-id"="255b9482-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileReplica).triggerGarbageCollection" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=23445325565 
I0624 21:47:10.481775       1 conditionwatcher.go:113] Adding a condition function for azvolumeattachments (pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f-k8s-agentpool1-11903559-0-attachment)
I0624 21:47:10.487026       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="33542b2f-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=6663686 
I0624 21:47:10.487061       1 attach_detach.go:171]  "msg"="Attaching volume" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="33542b2f-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" 
I0624 21:47:10.489129       1 conditionwatcher.go:171] found a wait entry for object (pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f-k8s-agentpool1-11903559-0-attachment)
I0624 21:47:10.489152       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:47:10.489160       1 conditionwatcher.go:171] found a wait entry for object (pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f-k8s-agentpool1-11903559-0-attachment)
I0624 21:47:10.489165       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:47:10.584630       1 cloudprovisioner.go:397]  "msg"="GetDiskLun returned: -1. Initiating attaching volume \"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f\" to node \"k8s-agentpool1-11903559-0\"." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="33542b2f-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" 
I0624 21:47:10.584675       1 cloudprovisioner.go:411]  "msg"="Trying to attach volume \"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f\" to node \"k8s-agentpool1-11903559-0\"." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="33542b2f-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" 
I0624 21:47:10.777731       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:47:11.585294       1 batch.go:224] "cloud-provider-azure: Delayed processing of batch due to start delay" type="batch" operation="attach_disk" key="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e|kubetest-ybmpahy2|k8s-agentpool1-11903559-0" delay="1s"
I0624 21:47:11.585357       1 azure_controller_common.go:306] azuredisk - trying to attach disks to node k8s-agentpool1-11903559-0: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f:AttachDiskOptions{diskName: "pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f", lun: 0}]
I0624 21:47:11.585408       1 azure_controller_standard.go:97] azureDisk - update(kubetest-ybmpahy2): vm(k8s-agentpool1-11903559-0) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f:AttachDiskOptions{diskName: "pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f", lun: 0}])
I0624 21:47:11.600566       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="33542b2f-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=14807390 
I0624 21:47:11.603458       1 conditionwatcher.go:171] found a wait entry for object (pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f-k8s-agentpool1-11903559-0-attachment)
I0624 21:47:11.603605       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:47:11.603920       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="33542b2f-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=1122059460 
I0624 21:47:11.604043       1 crdprovisioner.go:574]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="33542b2f-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLun" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=1122199662 
I0624 21:47:11.604154       1 crdprovisioner.go:410]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="33542b2f-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).PublishVolume" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=1129600858 
I0624 21:47:11.604222       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=1.12975576 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" node="k8s-agentpool1-11903559-0" result_code="succeeded"
I0624 21:47:11.604303       1 utils.go:85] GRPC response: {"publish_context":{"LUN":"0"}}
I0624 21:47:11.616426       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/request-id"="34012cfa-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "latency"=8045704 
... skipping 69 lines ...
I0624 21:47:59.424082       1 utils.go:79] GRPC request: {"node_id":"k8s-agentpool1-11903559-0","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f"}
I0624 21:47:59.424376       1 crdprovisioner.go:773]  "msg"="Requesting AzVolumeAttachment (pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f-k8s-agentpool1-11903559-0-attachment) detachment" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="33542b2f-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" 
I0624 21:47:59.430839       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="33542b2f-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=6374782 
I0624 21:47:59.440870       1 conditionwatcher.go:113] Adding a condition function for azvolumeattachments (pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f-k8s-agentpool1-11903559-0-attachment)
I0624 21:47:59.441388       1 replica.go:150]  "msg"="Garbage collection of AzVolumeAttachments for AzVolume (pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f) scheduled in 5m0s." "disk.csi.azure.com/request-id"="5083e98a-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" 
I0624 21:47:59.442836       1 conditionwatcher.go:171] found a wait entry for object (pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f-k8s-agentpool1-11903559-0-attachment)
I0624 21:47:59.442932       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:47:59.447140       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="33542b2f-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=5542671 
I0624 21:47:59.447348       1 attach_detach.go:313]  "msg"="Detaching volume" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="33542b2f-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" 
I0624 21:47:59.447559       1 cloudprovisioner.go:467]  "msg"="Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f from node k8s-agentpool1-11903559-0" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="33542b2f-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" 
I0624 21:47:59.449523       1 conditionwatcher.go:171] found a wait entry for object (pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f-k8s-agentpool1-11903559-0-attachment)
I0624 21:47:59.449542       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:48:00.587959       1 batch.go:224] "cloud-provider-azure: Delayed processing of batch due to start delay" type="batch" operation="detach_disk" key="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e|kubetest-ybmpahy2|k8s-agentpool1-11903559-0" delay="1s"
I0624 21:48:00.588135       1 azure_controller_common.go:405] azuredisk - trying to detach disks from node k8s-agentpool1-11903559-0: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f:pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f]
I0624 21:48:00.588237       1 azure_controller_standard.go:154] azureDisk - detach disk: name pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f
I0624 21:48:00.588274       1 azure_controller_standard.go:184] azureDisk - update(kubetest-ybmpahy2): vm(k8s-agentpool1-11903559-0) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f:pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f])
I0624 21:48:01.030128       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:48:03.037926       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
... skipping 7 lines ...
I0624 21:48:16.115720       1 azure_controller_standard.go:201] azureDisk - update(kubetest-ybmpahy2): vm(k8s-agentpool1-11903559-0) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f:pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f]) returned with <nil>
I0624 21:48:16.115790       1 azure_controller_common.go:417] azuredisk - successfully detached disks from node k8s-agentpool1-11903559-0: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f:pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f]
I0624 21:48:16.115845       1 azure_controller_common.go:378] azureDisk - detach disk(pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f) succeeded
I0624 21:48:16.115932       1 cloudprovisioner.go:477]  "msg"="detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f from node k8s-agentpool1-11903559-0 successfully" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="33542b2f-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" 
I0624 21:48:16.116180       1 cloudprovisioner.go:457]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="33542b2f-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).UnpublishVolume" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=16668558337 
I0624 21:48:16.124656       1 conditionwatcher.go:171] found a wait entry for object (pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f-k8s-agentpool1-11903559-0-attachment)
I0624 21:48:16.125425       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:48:16.125950       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="33542b2f-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=16684935345 
I0624 21:48:16.126233       1 crdprovisioner.go:796]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="33542b2f-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForDetach" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=16685314151 
I0624 21:48:16.126402       1 crdprovisioner.go:675]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="50815053-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).UnpublishVolume" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=16702061364 
I0624 21:48:16.126568       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=16.702330367 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" node="k8s-agentpool1-11903559-0" result_code="succeeded"
I0624 21:48:16.126691       1 utils.go:85] GRPC response: {}
I0624 21:48:16.128324       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-0" "disk.csi.azure.com/request-id"="33542b2f-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=12054654 
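[editor's note] The block above is the controller side of a /csi.v1.Controller/ControllerUnpublishVolume call: the disk is detached from k8s-agentpool1-11903559-0 and the driver answers with an empty response ("GRPC response: {}"). For reference, a hedged sketch of how a caller (normally the external-attacher sidecar) would issue the same RPC through the CSI Go bindings; the socket path and volume ID are placeholders, not values from this job.

    package main

    import (
        "context"
        "fmt"

        "github.com/container-storage-interface/spec/lib/go/csi"
        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
    )

    func main() {
        // The controller socket path is an assumption for illustration.
        conn, err := grpc.Dial("unix:///csi/csi.sock", grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := csi.NewControllerClient(conn)
        resp, err := client.ControllerUnpublishVolume(context.Background(), &csi.ControllerUnpublishVolumeRequest{
            // Azure disk volume IDs are full ARM resource IDs, as in the log above.
            VolumeId: "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/disks/<disk>",
            NodeId:   "k8s-agentpool1-11903559-0",
        })
        if err != nil {
            panic(err)
        }
        fmt.Printf("detach done: %v\n", resp) // the driver logs this as an empty GRPC response
    }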
... skipping 14 lines ...
I0624 21:48:29.160052       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:48:30.014015       1 utils.go:78] GRPC call: /csi.v1.Controller/DeleteVolume
I0624 21:48:30.014037       1 utils.go:79] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f"}
I0624 21:48:30.014272       1 controllerserver_v2.go:200] deleting disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f)
I0624 21:48:30.014396       1 conditionwatcher.go:113] Adding a condition function for azvolume (pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f)
I0624 21:48:30.028565       1 conditionwatcher.go:171] found a wait entry for object (pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f)
I0624 21:48:30.028587       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:48:30.029870       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "disk.csi.azure.com/request-id"="62bcfd08-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=15397297 
I0624 21:48:30.040010       1 conditionwatcher.go:171] found a wait entry for object (pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f)
I0624 21:48:30.040285       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:48:30.046322       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "disk.csi.azure.com/request-id"="00d7f854-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=6922889 
I0624 21:48:30.046568       1 azvolume.go:249]  "msg"="Deleting Volume..." "csi.storage.k8s.io/pv/name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "disk.csi.azure.com/request-id"="00d7f854-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" 
I0624 21:48:30.046759       1 common.go:1683]  "msg"="AzVolumeAttachment clean up requested by azvolume-controller for AzVolume (pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f)" "csi.storage.k8s.io/pv/name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "disk.csi.azure.com/request-id"="00d7f854-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" 
I0624 21:48:30.047237       1 common.go:1788]  "msg"="Getting AzVolumeAttachment list for volume (pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f)" "csi.storage.k8s.io/pv/name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "disk.csi.azure.com/request-id"="00d7f854-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" 
I0624 21:48:30.047455       1 common.go:1817]  "msg"="Label selector is: disk.csi.azure.com/volume-name=pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f." "csi.storage.k8s.io/pv/name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "disk.csi.azure.com/request-id"="00d7f854-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" 
I0624 21:48:30.047635       1 common.go:1681]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "disk.csi.azure.com/request-id"="00d7f854-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*SharedState).cleanUpAzVolumeAttachmentByVolume" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=872511 
I0624 21:48:30.048727       1 conditionwatcher.go:171] found a wait entry for object (pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f)
I0624 21:48:30.048944       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
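[editor's note] The conditionwatcher/conditionwaiter lines show the controller registering a condition function for the AzVolume CRI and polling it until it reports success (the "succeeded: false ... succeeded: true" progression). Below is a generic sketch of that wait pattern using apimachinery's wait helpers; the helper name and interval are hypothetical and not the driver's actual implementation.

    package main

    import (
        "context"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForCondition is a hypothetical stand-in for the driver's conditionwaiter:
    // it polls a condition function until it returns true, errors, or ctx expires.
    func waitForCondition(ctx context.Context, cond func(ctx context.Context) (bool, error)) error {
        return wait.PollImmediateUntilWithContext(ctx, 500*time.Millisecond, cond)
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()

        start := time.Now()
        _ = waitForCondition(ctx, func(ctx context.Context) (bool, error) {
            // The real controller checks the AzVolume/AzVolumeAttachment status here;
            // this example simply succeeds after two seconds to illustrate the flow.
            return time.Since(start) > 2*time.Second, nil
        })
    }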
I0624 21:48:31.169964       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:48:33.179284       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:48:35.187221       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:48:35.326527       1 azure_managedDiskController.go:303] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f
I0624 21:48:35.326660       1 cloudprovisioner.go:328]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "disk.csi.azure.com/request-id"="00d7f854-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).DeleteVolume" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=5278750386 
I0624 21:48:35.337769       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "disk.csi.azure.com/request-id"="00d7f854-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=11054841 
I0624 21:48:35.337808       1 azvolume.go:257]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "disk.csi.azure.com/request-id"="00d7f854-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerDelete.func4" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=5291077144 
I0624 21:48:35.337833       1 workflow.go:149]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "disk.csi.azure.com/request-id"="00d7f854-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerDelete" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=5298473938 
I0624 21:48:35.340662       1 conditionwatcher.go:171] found a wait entry for object (pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f)
I0624 21:48:35.340676       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:48:35.340711       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "disk.csi.azure.com/request-id"="62bcfd08-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=5304272912 
I0624 21:48:35.340738       1 crdprovisioner.go:306]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "disk.csi.azure.com/request-id"="62bcfd08-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).DeleteVolume" "disk.csi.azure.com/volume-name"="pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" "latency"=5326333594 
I0624 21:48:35.340766       1 controllerserver_v2.go:202] delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f) returned with <nil>
I0624 21:48:35.340792       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=5.326507497 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ba1c4d1a-d6fc-4e96-a006-7af37ee41c3f" result_code="succeeded"
I0624 21:48:35.340805       1 utils.go:85] GRPC response: {}
I0624 21:48:37.195365       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:48:39.202230       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:48:40.000677       1 utils.go:78] GRPC call: /csi.v1.Controller/CreateVolume
I0624 21:48:40.000787       1 utils.go:79] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.disk.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.disk.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":10737418240},"name":"pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e","parameters":{"csi.storage.k8s.io/pv/name":"pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e","csi.storage.k8s.io/pvc/name":"pvc-9z2k9","csi.storage.k8s.io/pvc/namespace":"azuredisk-2902"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]}
I0624 21:48:40.001015       1 crdprovisioner.go:234]  "msg"="Creating AzVolume CRI" "csi.storage.k8s.io/pv/name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "disk.csi.azure.com/request-id"="68b0d18b-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" 
I0624 21:48:40.009863       1 crdprovisioner.go:242]  "msg"="Successfully created AzVolume CRI" "csi.storage.k8s.io/pv/name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "disk.csi.azure.com/request-id"="68b0d18b-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" 
I0624 21:48:40.010528       1 conditionwatcher.go:113] Adding a condition function for azvolume (pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e)
I0624 21:48:40.017900       1 conditionwatcher.go:171] found a wait entry for object (pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e)
I0624 21:48:40.017924       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:48:40.018206       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "disk.csi.azure.com/request-id"="68b0d18b-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "latency"=7931601 
I0624 21:48:40.018247       1 azvolume.go:157]  "msg"="Creating Volume..." "csi.storage.k8s.io/pv/name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "disk.csi.azure.com/request-id"="68b0d18b-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" 
I0624 21:48:40.034288       1 azure_diskclient.go:139] Received error in disk.get.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e, error: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 404, RawError: {"error":{"code":"ResourceNotFound","message":"The Resource 'Microsoft.Compute/disks/pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e' under resource group 'kubetest-ybmpahy2' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"}}
I0624 21:48:40.034576       1 cloudprovisioner.go:246] begin to create disk(pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e) account type(StandardSSD_LRS) rg(kubetest-ybmpahy2) location() size(10) selectedAvailabilityZone() maxShares(0)
I0624 21:48:40.122887       1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e StorageAccountType:StandardSSD_LRS Size:10
I0624 21:48:41.212601       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:48:42.488570       1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e StorageAccountType:StandardSSD_LRS Size:10
I0624 21:48:42.488640       1 cloudprovisioner.go:311]  "msg"="create disk(pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e) account type(StandardSSD_LRS) rg(kubetest-ybmpahy2) location() size(10) tags(map[kubernetes.io-created-for-pv-name:pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e kubernetes.io-created-for-pvc-name:pvc-9z2k9 kubernetes.io-created-for-pvc-namespace:azuredisk-2902]) successfully" "csi.storage.k8s.io/pv/name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "disk.csi.azure.com/request-id"="68b0d18b-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" 
I0624 21:48:42.488669       1 cloudprovisioner.go:145]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "disk.csi.azure.com/request-id"="68b0d18b-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).CreateVolume" "disk.csi.azure.com/volume-name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "latency"=2470335966 
I0624 21:48:42.501059       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "disk.csi.azure.com/request-id"="68b0d18b-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "latency"=12348257 
I0624 21:48:42.501100       1 azvolume.go:165]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "disk.csi.azure.com/request-id"="68b0d18b-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerCreate.func3" "disk.csi.azure.com/volume-name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "latency"=2482782425 
I0624 21:48:42.501123       1 workflow.go:149]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "disk.csi.azure.com/request-id"="68b0d18b-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerCreate" "disk.csi.azure.com/volume-name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "latency"=2490859228 
I0624 21:48:42.502756       1 conditionwatcher.go:171] found a wait entry for object (pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e)
I0624 21:48:42.502766       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:48:42.502797       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "disk.csi.azure.com/request-id"="68b0d18b-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "latency"=2491882642 
I0624 21:48:42.502822       1 crdprovisioner.go:159]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "disk.csi.azure.com/request-id"="68b0d18b-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).CreateVolume" "disk.csi.azure.com/volume-name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "latency"=2501846369 
I0624 21:48:42.502855       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=2.501891969 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" result_code="succeeded"
I0624 21:48:42.502867       1 utils.go:85] GRPC response: {"volume":{"accessible_topology":[{"segments":{"topology.disk.csi.azure.com/zone":""}}],"capacity_bytes":10737418240,"content_source":{"Type":{"Volume":{}}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e","csi.storage.k8s.io/pvc/name":"pvc-9z2k9","csi.storage.k8s.io/pvc/namespace":"azuredisk-2902","requestedsizegib":"10"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e"}}
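[editor's note] This CreateVolume round trip (request at 21:48:40, managed disk created, response at 21:48:42 with the ARM resource ID as volume_id) maps directly onto the CSI CreateVolume RPC. A minimal client-side sketch follows, matching the logged parameters where they are visible; the socket path and access mode are illustrative assumptions.

    package main

    import (
        "context"
        "fmt"

        "github.com/container-storage-interface/spec/lib/go/csi"
        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
    )

    func main() {
        conn, err := grpc.Dial("unix:///csi/csi.sock", grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := csi.NewControllerClient(conn)
        resp, err := client.CreateVolume(context.Background(), &csi.CreateVolumeRequest{
            Name: "pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e",
            CapacityRange: &csi.CapacityRange{
                RequiredBytes: 10 * 1024 * 1024 * 1024, // 10 GiB, matching required_bytes above
            },
            Parameters: map[string]string{
                "csi.storage.k8s.io/pvc/name":      "pvc-9z2k9",
                "csi.storage.k8s.io/pvc/namespace": "azuredisk-2902",
            },
            VolumeCapabilities: []*csi.VolumeCapability{{
                AccessType: &csi.VolumeCapability_Mount{Mount: &csi.VolumeCapability_MountVolume{}},
                // The log shows numeric access mode 7; a single-node mode is used here purely for illustration.
                AccessMode: &csi.VolumeCapability_AccessMode{Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_WRITER},
            }},
        })
        if err != nil {
            panic(err)
        }
        fmt.Println("created volume:", resp.GetVolume().GetVolumeId()) // the ARM disk resource ID
    }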
I0624 21:48:43.220869       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:48:44.235623       1 common.go:1683]  "msg"="AzVolumeAttachment clean up requested by pv-controller for AzVolume (pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e)" "disk.csi.azure.com/request-id"="6b36f61d-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" 
... skipping 3 lines ...
I0624 21:48:44.237489       1 utils.go:78] GRPC call: /csi.v1.Controller/DeleteVolume
I0624 21:48:44.237914       1 utils.go:79] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e"}
I0624 21:48:44.238004       1 controllerserver_v2.go:200] deleting disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e)
I0624 21:48:44.238124       1 conditionwatcher.go:113] Adding a condition function for azvolume (pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e)
I0624 21:48:44.251932       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "disk.csi.azure.com/request-id"="6b375b7b-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "latency"=13733075 
I0624 21:48:44.253415       1 conditionwatcher.go:171] found a wait entry for object (pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e)
I0624 21:48:44.253444       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:48:44.261072       1 conditionwatcher.go:171] found a wait entry for object (pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e)
I0624 21:48:44.261172       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:48:44.268297       1 conditionwatcher.go:171] found a wait entry for object (pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e)
I0624 21:48:44.268831       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:48:44.268804       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "disk.csi.azure.com/request-id"="68b0d18b-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "latency"=9556122 
I0624 21:48:44.269221       1 azvolume.go:249]  "msg"="Deleting Volume..." "csi.storage.k8s.io/pv/name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "disk.csi.azure.com/request-id"="68b0d18b-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" 
I0624 21:48:44.269421       1 common.go:1683]  "msg"="AzVolumeAttachment clean up requested by azvolume-controller for AzVolume (pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e)" "csi.storage.k8s.io/pv/name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "disk.csi.azure.com/request-id"="68b0d18b-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" 
I0624 21:48:44.269573       1 common.go:1788]  "msg"="Getting AzVolumeAttachment list for volume (pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e)" "csi.storage.k8s.io/pv/name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "disk.csi.azure.com/request-id"="68b0d18b-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" 
I0624 21:48:44.269735       1 common.go:1817]  "msg"="Label selector is: disk.csi.azure.com/volume-name=pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e." "csi.storage.k8s.io/pv/name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "disk.csi.azure.com/request-id"="68b0d18b-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" 
I0624 21:48:44.269928       1 common.go:1681]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "disk.csi.azure.com/request-id"="68b0d18b-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*SharedState).cleanUpAzVolumeAttachmentByVolume" "disk.csi.azure.com/volume-name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "latency"=520207 
... skipping 3 lines ...
I0624 21:48:49.531782       1 azure_managedDiskController.go:303] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e
I0624 21:48:49.531865       1 cloudprovisioner.go:328]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "disk.csi.azure.com/request-id"="68b0d18b-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).DeleteVolume" "disk.csi.azure.com/volume-name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "latency"=5261843650 
I0624 21:48:49.543817       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "disk.csi.azure.com/request-id"="68b0d18b-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "latency"=11885752 
I0624 21:48:49.543854       1 azvolume.go:257]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "disk.csi.azure.com/request-id"="68b0d18b-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerDelete.func4" "disk.csi.azure.com/volume-name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "latency"=5274466811 
I0624 21:48:49.543880       1 workflow.go:149]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "disk.csi.azure.com/request-id"="68b0d18b-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerDelete" "disk.csi.azure.com/volume-name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "latency"=5284660942 
I0624 21:48:49.544348       1 conditionwatcher.go:171] found a wait entry for object (pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e)
I0624 21:48:49.544380       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:48:49.544443       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "disk.csi.azure.com/request-id"="6b375b7b-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "latency"=5282433513 
I0624 21:48:49.544485       1 crdprovisioner.go:306]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "disk.csi.azure.com/request-id"="6b375b7b-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).DeleteVolume" "disk.csi.azure.com/volume-name"="pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" "latency"=5306349718 
I0624 21:48:49.544494       1 controllerserver_v2.go:202] delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e) returned with <nil>
I0624 21:48:49.544520       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=5.306500121 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-ddad25fd-c903-4ff7-b060-b4deb1dd925e" result_code="succeeded"
I0624 21:48:49.544532       1 utils.go:85] GRPC response: {}
I0624 21:48:51.256878       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
... skipping 25 lines ...
I0624 21:48:58.842080       1 crdprovisioner.go:234]  "msg"="Creating AzVolume CRI" "csi.storage.k8s.io/pv/name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "disk.csi.azure.com/request-id"="73ebbd64-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" 
I0624 21:48:58.876834       1 crdprovisioner.go:242]  "msg"="Successfully created AzVolume CRI" "csi.storage.k8s.io/pv/name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "disk.csi.azure.com/request-id"="73ebbd64-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" 
I0624 21:48:58.877192       1 conditionwatcher.go:113] Adding a condition function for azvolume (pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41)
I0624 21:48:58.877162       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "disk.csi.azure.com/request-id"="73e8e09d-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "latency"=43486056 
I0624 21:48:58.877430       1 azvolume.go:157]  "msg"="Creating Volume..." "csi.storage.k8s.io/pv/name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "disk.csi.azure.com/request-id"="73e8e09d-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" 
I0624 21:48:58.891195       1 conditionwatcher.go:171] found a wait entry for object (pvc-702936c8-510b-416e-ae83-ef3b3dd48539)
I0624 21:48:58.891317       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:48:58.892907       1 azure_diskclient.go:139] Received error in disk.get.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-702936c8-510b-416e-ae83-ef3b3dd48539, error: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 404, RawError: {"error":{"code":"ResourceNotFound","message":"The Resource 'Microsoft.Compute/disks/pvc-702936c8-510b-416e-ae83-ef3b3dd48539' under resource group 'kubetest-ybmpahy2' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"}}
I0624 21:48:58.893997       1 cloudprovisioner.go:246] begin to create disk(pvc-702936c8-510b-416e-ae83-ef3b3dd48539) account type(StandardSSD_LRS) rg(kubetest-ybmpahy2) location() size(10) selectedAvailabilityZone() maxShares(0)
I0624 21:48:58.895824       1 utils.go:78] GRPC call: /csi.v1.Controller/CreateVolume
I0624 21:48:58.896025       1 utils.go:79] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.disk.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.disk.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":10737418240},"name":"pvc-a4f0efff-7524-436c-b648-c261c57da76f","parameters":{"csi.storage.k8s.io/pv/name":"pvc-a4f0efff-7524-436c-b648-c261c57da76f","csi.storage.k8s.io/pvc/name":"pvc-5ppfp","csi.storage.k8s.io/pvc/namespace":"azuredisk-2035","skuname":"StandardSSD_LRS"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]}
I0624 21:48:58.896340       1 crdprovisioner.go:234]  "msg"="Creating AzVolume CRI" "csi.storage.k8s.io/pv/name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "disk.csi.azure.com/request-id"="73f404d4-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" 
I0624 21:48:58.904805       1 conditionwatcher.go:171] found a wait entry for object (pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41)
I0624 21:48:58.904904       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:48:58.923752       1 crdprovisioner.go:242]  "msg"="Successfully created AzVolume CRI" "csi.storage.k8s.io/pv/name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "disk.csi.azure.com/request-id"="73f404d4-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" 
I0624 21:48:58.923820       1 conditionwatcher.go:113] Adding a condition function for azvolume (pvc-a4f0efff-7524-436c-b648-c261c57da76f)
I0624 21:48:58.925215       1 conditionwatcher.go:171] found a wait entry for object (pvc-a4f0efff-7524-436c-b648-c261c57da76f)
I0624 21:48:58.926585       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:48:58.926974       1 conditionwatcher.go:171] found a wait entry for object (pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41)
I0624 21:48:58.927066       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:48:58.927689       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "disk.csi.azure.com/request-id"="73ebbd64-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "latency"=35178950 
I0624 21:48:58.927785       1 azvolume.go:157]  "msg"="Creating Volume..." "csi.storage.k8s.io/pv/name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "disk.csi.azure.com/request-id"="73ebbd64-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" 
I0624 21:48:58.937507       1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-702936c8-510b-416e-ae83-ef3b3dd48539 StorageAccountType:StandardSSD_LRS Size:10
I0624 21:48:58.944004       1 conditionwatcher.go:171] found a wait entry for object (pvc-a4f0efff-7524-436c-b648-c261c57da76f)
I0624 21:48:58.945008       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:48:58.946014       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "disk.csi.azure.com/request-id"="73f404d4-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "latency"=19731153 
I0624 21:48:58.946121       1 azvolume.go:157]  "msg"="Creating Volume..." "csi.storage.k8s.io/pv/name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "disk.csi.azure.com/request-id"="73f404d4-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" 
I0624 21:48:58.947765       1 azure_diskclient.go:139] Received error in disk.get.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41, error: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 404, RawError: {"error":{"code":"ResourceNotFound","message":"The Resource 'Microsoft.Compute/disks/pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41' under resource group 'kubetest-ybmpahy2' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"}}
I0624 21:48:58.947967       1 cloudprovisioner.go:246] begin to create disk(pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41) account type(StandardSSD_LRS) rg(kubetest-ybmpahy2) location() size(10) selectedAvailabilityZone() maxShares(0)
I0624 21:48:58.961561       1 azure_diskclient.go:139] Received error in disk.get.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a4f0efff-7524-436c-b648-c261c57da76f, error: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 404, RawError: {"error":{"code":"ResourceNotFound","message":"The Resource 'Microsoft.Compute/disks/pvc-a4f0efff-7524-436c-b648-c261c57da76f' under resource group 'kubetest-ybmpahy2' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"}}
I0624 21:48:58.961817       1 cloudprovisioner.go:246] begin to create disk(pvc-a4f0efff-7524-436c-b648-c261c57da76f) account type(StandardSSD_LRS) rg(kubetest-ybmpahy2) location() size(10) selectedAvailabilityZone() maxShares(0)
I0624 21:48:59.001161       1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41 StorageAccountType:StandardSSD_LRS Size:10
I0624 21:48:59.029701       1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-a4f0efff-7524-436c-b648-c261c57da76f StorageAccountType:StandardSSD_LRS Size:10
I0624 21:48:59.287747       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:49:01.296822       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:49:01.342348       1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-702936c8-510b-416e-ae83-ef3b3dd48539 StorageAccountType:StandardSSD_LRS Size:10
I0624 21:49:01.342432       1 cloudprovisioner.go:311]  "msg"="create disk(pvc-702936c8-510b-416e-ae83-ef3b3dd48539) account type(StandardSSD_LRS) rg(kubetest-ybmpahy2) location() size(10) tags(map[kubernetes.io-created-for-pv-name:pvc-702936c8-510b-416e-ae83-ef3b3dd48539 kubernetes.io-created-for-pvc-name:pvc-vc9pt kubernetes.io-created-for-pvc-namespace:azuredisk-2035]) successfully" "csi.storage.k8s.io/pv/name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "disk.csi.azure.com/request-id"="73e8e09d-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" 
I0624 21:49:01.342472       1 cloudprovisioner.go:145]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "disk.csi.azure.com/request-id"="73e8e09d-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).CreateVolume" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "latency"=2464918124 
I0624 21:49:01.351599       1 conditionwatcher.go:171] found a wait entry for object (pvc-702936c8-510b-416e-ae83-ef3b3dd48539)
I0624 21:49:01.351620       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:49:01.351652       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "disk.csi.azure.com/request-id"="73e8e09d-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "latency"=2511845125 
I0624 21:49:01.351850       1 crdprovisioner.go:159]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "disk.csi.azure.com/request-id"="73e8e09d-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).CreateVolume" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "latency"=2528540438 
I0624 21:49:01.351962       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=2.528648739 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-702936c8-510b-416e-ae83-ef3b3dd48539" result_code="succeeded"
I0624 21:49:01.352100       1 utils.go:85] GRPC response: {"volume":{"accessible_topology":[{"segments":{"topology.disk.csi.azure.com/zone":""}}],"capacity_bytes":10737418240,"content_source":{"Type":{"Volume":{}}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-702936c8-510b-416e-ae83-ef3b3dd48539","csi.storage.k8s.io/pvc/name":"pvc-vc9pt","csi.storage.k8s.io/pvc/namespace":"azuredisk-2035","requestedsizegib":"10","skuname":"StandardSSD_LRS"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-702936c8-510b-416e-ae83-ef3b3dd48539"}}
I0624 21:49:01.352537       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "disk.csi.azure.com/request-id"="73e8e09d-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "latency"=10026428 
I0624 21:49:01.352574       1 azvolume.go:165]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "disk.csi.azure.com/request-id"="73e8e09d-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerCreate.func3" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "latency"=2475049954 
I0624 21:49:01.352601       1 workflow.go:149]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "disk.csi.azure.com/request-id"="73e8e09d-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerCreate" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "latency"=2518943615 
I0624 21:49:01.383617       1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41 StorageAccountType:StandardSSD_LRS Size:10
I0624 21:49:01.383753       1 cloudprovisioner.go:311]  "msg"="create disk(pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41) account type(StandardSSD_LRS) rg(kubetest-ybmpahy2) location() size(10) tags(map[kubernetes.io-created-for-pv-name:pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41 kubernetes.io-created-for-pvc-name:pvc-9rmq5 kubernetes.io-created-for-pvc-namespace:azuredisk-2035]) successfully" "csi.storage.k8s.io/pv/name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "disk.csi.azure.com/request-id"="73ebbd64-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" 
I0624 21:49:01.383876       1 cloudprovisioner.go:145]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "disk.csi.azure.com/request-id"="73ebbd64-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).CreateVolume" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "latency"=2455869209 
I0624 21:49:01.391448       1 conditionwatcher.go:171] found a wait entry for object (pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41)
I0624 21:49:01.391830       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:49:01.392046       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "disk.csi.azure.com/request-id"="73ebbd64-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "latency"=2514712461 
I0624 21:49:01.392299       1 crdprovisioner.go:159]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "disk.csi.azure.com/request-id"="73ebbd64-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).CreateVolume" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "latency"=2550232516 
I0624 21:49:01.392483       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=2.550430118 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" result_code="succeeded"
I0624 21:49:01.392597       1 utils.go:85] GRPC response: {"volume":{"accessible_topology":[{"segments":{"topology.disk.csi.azure.com/zone":""}}],"capacity_bytes":10737418240,"content_source":{"Type":{"Volume":{}}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41","csi.storage.k8s.io/pvc/name":"pvc-9rmq5","csi.storage.k8s.io/pvc/namespace":"azuredisk-2035","requestedsizegib":"10","skuname":"StandardSSD_LRS"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41"}}
I0624 21:49:01.393048       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "disk.csi.azure.com/request-id"="73ebbd64-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "latency"=9113716 
I0624 21:49:01.393125       1 azvolume.go:165]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "disk.csi.azure.com/request-id"="73ebbd64-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerCreate.func3" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "latency"=2465199728 
... skipping 2 lines ...
I0624 21:49:01.991330       1 cloudprovisioner.go:311]  "msg"="create disk(pvc-a4f0efff-7524-436c-b648-c261c57da76f) account type(StandardSSD_LRS) rg(kubetest-ybmpahy2) location() size(10) tags(map[kubernetes.io-created-for-pv-name:pvc-a4f0efff-7524-436c-b648-c261c57da76f kubernetes.io-created-for-pvc-name:pvc-5ppfp kubernetes.io-created-for-pvc-namespace:azuredisk-2035]) successfully" "csi.storage.k8s.io/pv/name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "disk.csi.azure.com/request-id"="73f404d4-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" 
I0624 21:49:01.991372       1 cloudprovisioner.go:145]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "disk.csi.azure.com/request-id"="73f404d4-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).CreateVolume" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "latency"=3044970243 
I0624 21:49:01.998924       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "disk.csi.azure.com/request-id"="73f404d4-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "latency"=7471196 
I0624 21:49:01.999323       1 azvolume.go:165]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "disk.csi.azure.com/request-id"="73f404d4-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerCreate.func3" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "latency"=3053037946 
I0624 21:49:01.999592       1 workflow.go:149]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "disk.csi.azure.com/request-id"="73f404d4-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerCreate" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "latency"=3073282505 
I0624 21:49:02.000209       1 conditionwatcher.go:171] found a wait entry for object (pvc-a4f0efff-7524-436c-b648-c261c57da76f)
I0624 21:49:02.000233       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:49:02.000266       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "disk.csi.azure.com/request-id"="73f404d4-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "latency"=3076288243 
I0624 21:49:02.000384       1 crdprovisioner.go:159]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "disk.csi.azure.com/request-id"="73f404d4-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).CreateVolume" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "latency"=3103973298 
I0624 21:49:02.000439       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=3.1041658 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a4f0efff-7524-436c-b648-c261c57da76f" result_code="succeeded"
I0624 21:49:02.000566       1 utils.go:85] GRPC response: {"volume":{"accessible_topology":[{"segments":{"topology.disk.csi.azure.com/zone":""}}],"capacity_bytes":10737418240,"content_source":{"Type":{"Volume":{}}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-a4f0efff-7524-436c-b648-c261c57da76f","csi.storage.k8s.io/pvc/name":"pvc-5ppfp","csi.storage.k8s.io/pvc/namespace":"azuredisk-2035","requestedsizegib":"10","skuname":"StandardSSD_LRS"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a4f0efff-7524-436c-b648-c261c57da76f"}}
I0624 21:49:02.970466       1 utils.go:78] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I0624 21:49:02.970639       1 utils.go:79] GRPC request: {"node_id":"k8s-agentpool1-11903559-1","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"cachingMode":"ReadWrite","fsType":"","kind":"Managed"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a4f0efff-7524-436c-b648-c261c57da76f"}
I0624 21:49:02.973162       1 utils.go:78] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I0624 21:49:02.973183       1 utils.go:79] GRPC request: {"node_id":"k8s-agentpool1-11903559-1","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"cachingMode":"ReadWrite","fsType":"","kind":"Managed"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41"}
I0624 21:49:02.981082       1 conditionwatcher.go:113] Adding a condition function for azvolumeattachments (pvc-a4f0efff-7524-436c-b648-c261c57da76f-k8s-agentpool1-11903559-1-attachment)
I0624 21:49:02.984576       1 conditionwatcher.go:113] Adding a condition function for azvolumeattachments (pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41-k8s-agentpool1-11903559-1-attachment)
I0624 21:49:02.986057       1 utils.go:78] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I0624 21:49:02.986076       1 utils.go:79] GRPC request: {"node_id":"k8s-agentpool1-11903559-1","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"cachingMode":"ReadWrite","fsType":"","kind":"Managed"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-702936c8-510b-416e-ae83-ef3b3dd48539"}
I0624 21:49:02.987865       1 conditionwatcher.go:171] found a wait entry for object (pvc-a4f0efff-7524-436c-b648-c261c57da76f-k8s-agentpool1-11903559-1-attachment)
I0624 21:49:02.988074       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:49:02.992066       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="7661cb93-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "latency"=16230707 
I0624 21:49:02.992250       1 attach_detach.go:171]  "msg"="Attaching volume" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="7661cb93-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" 
I0624 21:49:02.996445       1 conditionwatcher.go:171] found a wait entry for object (pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41-k8s-agentpool1-11903559-1-attachment)
I0624 21:49:02.996602       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:49:02.996939       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76621e7c-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "latency"=13406871 
I0624 21:49:02.997139       1 attach_detach.go:171]  "msg"="Attaching volume" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76621e7c-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" 
I0624 21:49:02.998256       1 conditionwatcher.go:113] Adding a condition function for azvolumeattachments (pvc-702936c8-510b-416e-ae83-ef3b3dd48539-k8s-agentpool1-11903559-1-attachment)
I0624 21:49:02.999169       1 conditionwatcher.go:171] found a wait entry for object (pvc-702936c8-510b-416e-ae83-ef3b3dd48539-k8s-agentpool1-11903559-1-attachment)
I0624 21:49:02.999323       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:49:03.004737       1 conditionwatcher.go:171] found a wait entry for object (pvc-702936c8-510b-416e-ae83-ef3b3dd48539-k8s-agentpool1-11903559-1-attachment)
I0624 21:49:03.004757       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:49:03.005785       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76641510-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "latency"=6877588 
I0624 21:49:03.005821       1 attach_detach.go:171]  "msg"="Attaching volume" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76641510-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" 
I0624 21:49:03.132472       1 cloudprovisioner.go:397]  "msg"="GetDiskLun returned: -1. Initiating attaching volume \"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41\" to node \"k8s-agentpool1-11903559-1\"." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76621e7c-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" 
I0624 21:49:03.132517       1 cloudprovisioner.go:411]  "msg"="Trying to attach volume \"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41\" to node \"k8s-agentpool1-11903559-1\"." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76621e7c-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" 
I0624 21:49:03.132604       1 cloudprovisioner.go:397]  "msg"="GetDiskLun returned: -1. Initiating attaching volume \"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a4f0efff-7524-436c-b648-c261c57da76f\" to node \"k8s-agentpool1-11903559-1\"." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="7661cb93-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" 
I0624 21:49:03.132626       1 cloudprovisioner.go:411]  "msg"="Trying to attach volume \"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a4f0efff-7524-436c-b648-c261c57da76f\" to node \"k8s-agentpool1-11903559-1\"." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="7661cb93-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" 
I0624 21:49:03.132663       1 cloudprovisioner.go:397]  "msg"="GetDiskLun returned: -1. Initiating attaching volume \"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-702936c8-510b-416e-ae83-ef3b3dd48539\" to node \"k8s-agentpool1-11903559-1\"." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76641510-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" 
I0624 21:49:03.132686       1 cloudprovisioner.go:411]  "msg"="Trying to attach volume \"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-702936c8-510b-416e-ae83-ef3b3dd48539\" to node \"k8s-agentpool1-11903559-1\"." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76641510-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" 
I0624 21:49:03.305181       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:49:04.133711       1 batch.go:224] "cloud-provider-azure: Delayed processing of batch due to start delay" type="batch" operation="attach_disk" key="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e|kubetest-ybmpahy2|k8s-agentpool1-11903559-1" delay="1s"
I0624 21:49:04.133790       1 azure_controller_common.go:306] azuredisk - trying to attach disks to node k8s-agentpool1-11903559-1: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41:AttachDiskOptions{diskName: "pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41", lun: 0} /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-702936c8-510b-416e-ae83-ef3b3dd48539:AttachDiskOptions{diskName: "pvc-702936c8-510b-416e-ae83-ef3b3dd48539", lun: 2} /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a4f0efff-7524-436c-b648-c261c57da76f:AttachDiskOptions{diskName: "pvc-a4f0efff-7524-436c-b648-c261c57da76f", lun: 1}]
I0624 21:49:04.134632       1 azure_controller_standard.go:97] azureDisk - update(kubetest-ybmpahy2): vm(k8s-agentpool1-11903559-1) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41:AttachDiskOptions{diskName: "pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41", lun: 0} /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-702936c8-510b-416e-ae83-ef3b3dd48539:AttachDiskOptions{diskName: "pvc-702936c8-510b-416e-ae83-ef3b3dd48539", lun: 2} /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a4f0efff-7524-436c-b648-c261c57da76f:AttachDiskOptions{diskName: "pvc-a4f0efff-7524-436c-b648-c261c57da76f", lun: 1}])
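The attach path above coalesces multiple disks into one VM update per node: the batcher holds requests for a short start delay ("Delayed processing of batch ... delay=\"1s\"") and then flushes them as a single map of disk URI to AttachDiskOptions, which is why all three PVCs land in one attach list. A minimal sketch of that windowed-batching idea (types, timings, and field names here are illustrative, not cloud-provider-azure's actual implementation):

package main

import (
    "fmt"
    "sync"
    "time"
)

// AttachDiskOptions mirrors the per-disk entry shape visible in the log.
type AttachDiskOptions struct {
    DiskName string
    Lun      int32
}

// batcher coalesces attach requests per node key and flushes them after a
// fixed start delay, so several disks go out in a single VM update.
type batcher struct {
    mu      sync.Mutex
    delay   time.Duration
    pending map[string]map[string]AttachDiskOptions // nodeKey -> diskURI -> opts
    flush   func(nodeKey string, disks map[string]AttachDiskOptions)
}

func (b *batcher) add(nodeKey, diskURI string, opts AttachDiskOptions) {
    b.mu.Lock()
    defer b.mu.Unlock()
    if b.pending[nodeKey] == nil {
        b.pending[nodeKey] = map[string]AttachDiskOptions{}
        // The first request for a node opens the delay window.
        time.AfterFunc(b.delay, func() {
            b.mu.Lock()
            disks := b.pending[nodeKey]
            delete(b.pending, nodeKey)
            b.mu.Unlock()
            b.flush(nodeKey, disks)
        })
    }
    b.pending[nodeKey][diskURI] = opts
}

func main() {
    b := &batcher{
        delay:   time.Second,
        pending: map[string]map[string]AttachDiskOptions{},
        flush: func(node string, disks map[string]AttachDiskOptions) {
            fmt.Printf("attaching %d disk(s) to %s in one update: %v\n", len(disks), node, disks)
        },
    }
    b.add("k8s-agentpool1-11903559-1", "disk-uri-0", AttachDiskOptions{DiskName: "pvc-example-0", Lun: 0})
    b.add("k8s-agentpool1-11903559-1", "disk-uri-1", AttachDiskOptions{DiskName: "pvc-example-1", Lun: 1})
    time.Sleep(2 * time.Second) // give the sketch's flush time to fire
}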
I0624 21:49:04.146415       1 conditionwatcher.go:171] found a wait entry for object (pvc-a4f0efff-7524-436c-b648-c261c57da76f-k8s-agentpool1-11903559-1-attachment)
I0624 21:49:04.146549       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:49:04.146652       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="7661cb93-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "latency"=1165504206 
I0624 21:49:04.146691       1 crdprovisioner.go:574]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="7661cb93-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLun" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "latency"=1165687608 
I0624 21:49:04.146715       1 crdprovisioner.go:410]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="7661cb93-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).PublishVolume" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "latency"=1175523834 
I0624 21:49:04.146762       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=1.175641536 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a4f0efff-7524-436c-b648-c261c57da76f" node="k8s-agentpool1-11903559-1" result_code="succeeded"
I0624 21:49:04.146776       1 utils.go:85] GRPC response: {"publish_context":{"LUN":"1"}}
I0624 21:49:04.148787       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="7661cb93-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "latency"=14356484 
I0624 21:49:04.151622       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76621e7c-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "latency"=17634725 
I0624 21:49:04.152071       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76641510-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "latency"=16999817 
I0624 21:49:04.157041       1 conditionwatcher.go:171] found a wait entry for object (pvc-702936c8-510b-416e-ae83-ef3b3dd48539-k8s-agentpool1-11903559-1-attachment)
I0624 21:49:04.157062       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:49:04.157070       1 conditionwatcher.go:171] found a wait entry for object (pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41-k8s-agentpool1-11903559-1-attachment)
I0624 21:49:04.157074       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:49:04.157372       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76621e7c-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "latency"=1172740199 
I0624 21:49:04.157526       1 crdprovisioner.go:574]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76621e7c-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLun" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "latency"=1172934801 
I0624 21:49:04.157605       1 crdprovisioner.go:410]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76621e7c-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).PublishVolume" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "latency"=1184285546 
I0624 21:49:04.157636       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=1.184351347 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" node="k8s-agentpool1-11903559-1" result_code="succeeded"
I0624 21:49:04.157646       1 utils.go:85] GRPC response: {"publish_context":{"LUN":"0"}}
I0624 21:49:04.157270       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76641510-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "latency"=1158789320 
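The ControllerPublishVolume responses above hand the attach LUN back in publish_context; on the node side, NodeStageVolume (the call this PR hardens against intermittent failures) later has to map that LUN to a block device before it can stage the filesystem. A minimal illustrative sketch, assuming the usual Azure udev symlinks under /dev/disk/azure/scsi1 on the node image (the path layout and the helper name are assumptions, not the driver's actual code):

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

// findDiskByLun resolves /dev/disk/azure/scsi1/lun<N> to its real device,
// e.g. /dev/sdc. If the symlink does not exist yet, the disk is not visible
// to the node, which is the kind of race an intermittent NodeStageVolume
// failure would surface.
func findDiskByLun(lun int) (string, error) {
    link := filepath.Join("/dev/disk/azure/scsi1", fmt.Sprintf("lun%d", lun))
    dev, err := filepath.EvalSymlinks(link)
    if err != nil {
        return "", fmt.Errorf("device for LUN %d not ready: %w", lun, err)
    }
    return dev, nil
}

func main() {
    dev, err := findDiskByLun(1) // LUN "1" as returned in the publish_context above
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println("found device:", dev)
}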
... skipping 71 lines ...
I0624 21:49:30.890280       1 utils.go:79] GRPC request: {"node_id":"k8s-agentpool1-11903559-1","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-702936c8-510b-416e-ae83-ef3b3dd48539"}
I0624 21:49:30.890550       1 crdprovisioner.go:773]  "msg"="Requesting AzVolumeAttachment (pvc-702936c8-510b-416e-ae83-ef3b3dd48539-k8s-agentpool1-11903559-1-attachment) detachment" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76641510-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" 
I0624 21:49:30.899261       1 replica.go:150]  "msg"="Garbage collection of AzVolumeAttachments for AzVolume (pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41) scheduled in 5m0s." "disk.csi.azure.com/request-id"="87074600-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" 
I0624 21:49:30.900121       1 conditionwatcher.go:113] Adding a condition function for azvolumeattachments (pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41-k8s-agentpool1-11903559-1-attachment)
I0624 21:49:30.911696       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76641510-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "latency"=20804267 
I0624 21:49:30.913300       1 conditionwatcher.go:171] found a wait entry for object (pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41-k8s-agentpool1-11903559-1-attachment)
I0624 21:49:30.914069       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:49:30.913667       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76621e7c-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "latency"=14636887 
I0624 21:49:30.914429       1 attach_detach.go:313]  "msg"="Detaching volume" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76621e7c-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" 
I0624 21:49:30.914691       1 cloudprovisioner.go:467]  "msg"="Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41 from node k8s-agentpool1-11903559-1" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76621e7c-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" 
I0624 21:49:30.915140       1 utils.go:78] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume
I0624 21:49:30.915312       1 utils.go:79] GRPC request: {"node_id":"k8s-agentpool1-11903559-1","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a4f0efff-7524-436c-b648-c261c57da76f"}
I0624 21:49:30.915578       1 crdprovisioner.go:773]  "msg"="Requesting AzVolumeAttachment (pvc-a4f0efff-7524-436c-b648-c261c57da76f-k8s-agentpool1-11903559-1-attachment) detachment" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="7661cb93-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" 
I0624 21:49:30.923350       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="7661cb93-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "latency"=7581797 
I0624 21:49:30.927027       1 conditionwatcher.go:113] Adding a condition function for azvolumeattachments (pvc-702936c8-510b-416e-ae83-ef3b3dd48539-k8s-agentpool1-11903559-1-attachment)
I0624 21:49:30.930039       1 conditionwatcher.go:171] found a wait entry for object (pvc-702936c8-510b-416e-ae83-ef3b3dd48539-k8s-agentpool1-11903559-1-attachment)
I0624 21:49:30.930221       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:49:30.930379       1 replica.go:150]  "msg"="Garbage collection of AzVolumeAttachments for AzVolume (pvc-702936c8-510b-416e-ae83-ef3b3dd48539) scheduled in 5m0s." "disk.csi.azure.com/request-id"="870c0679-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" 
I0624 21:49:30.934651       1 conditionwatcher.go:113] Adding a condition function for azvolumeattachments (pvc-a4f0efff-7524-436c-b648-c261c57da76f-k8s-agentpool1-11903559-1-attachment)
I0624 21:49:30.935073       1 conditionwatcher.go:171] found a wait entry for object (pvc-a4f0efff-7524-436c-b648-c261c57da76f-k8s-agentpool1-11903559-1-attachment)
I0624 21:49:30.935187       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:49:30.935448       1 replica.go:150]  "msg"="Garbage collection of AzVolumeAttachments for AzVolume (pvc-a4f0efff-7524-436c-b648-c261c57da76f) scheduled in 5m0s." "disk.csi.azure.com/request-id"="870ccc4d-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" 
I0624 21:49:30.940070       1 conditionwatcher.go:171] found a wait entry for object (pvc-702936c8-510b-416e-ae83-ef3b3dd48539-k8s-agentpool1-11903559-1-attachment)
I0624 21:49:30.940128       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:49:30.941557       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76641510-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "latency"=11315245 
I0624 21:49:30.941588       1 attach_detach.go:313]  "msg"="Detaching volume" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76641510-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" 
I0624 21:49:30.941769       1 cloudprovisioner.go:467]  "msg"="Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-702936c8-510b-416e-ae83-ef3b3dd48539 from node k8s-agentpool1-11903559-1" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76641510-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" 
I0624 21:49:30.947868       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="7661cb93-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "latency"=12869165 
I0624 21:49:30.947900       1 attach_detach.go:313]  "msg"="Detaching volume" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="7661cb93-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" 
I0624 21:49:30.948050       1 cloudprovisioner.go:467]  "msg"="Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a4f0efff-7524-436c-b648-c261c57da76f from node k8s-agentpool1-11903559-1" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="7661cb93-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" 
I0624 21:49:30.949531       1 conditionwatcher.go:171] found a wait entry for object (pvc-a4f0efff-7524-436c-b648-c261c57da76f-k8s-agentpool1-11903559-1-attachment)
I0624 21:49:30.949692       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:49:31.422581       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:49:31.998282       1 batch.go:224] "cloud-provider-azure: Delayed processing of batch due to start delay" type="batch" operation="detach_disk" key="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e|kubetest-ybmpahy2|k8s-agentpool1-11903559-1" delay="1s"
I0624 21:49:31.998424       1 azure_controller_common.go:405] azuredisk - trying to detach disks from node k8s-agentpool1-11903559-1: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41:pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41 /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-702936c8-510b-416e-ae83-ef3b3dd48539:pvc-702936c8-510b-416e-ae83-ef3b3dd48539 /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a4f0efff-7524-436c-b648-c261c57da76f:pvc-a4f0efff-7524-436c-b648-c261c57da76f]
I0624 21:49:31.998506       1 azure_controller_standard.go:154] azureDisk - detach disk: name pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41 uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41
I0624 21:49:31.998517       1 azure_controller_standard.go:154] azureDisk - detach disk: name pvc-a4f0efff-7524-436c-b648-c261c57da76f uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a4f0efff-7524-436c-b648-c261c57da76f
I0624 21:49:31.998525       1 azure_controller_standard.go:154] azureDisk - detach disk: name pvc-702936c8-510b-416e-ae83-ef3b3dd48539 uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-702936c8-510b-416e-ae83-ef3b3dd48539
... skipping 22 lines ...
I0624 21:49:47.546226       1 cloudprovisioner.go:477]  "msg"="detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-702936c8-510b-416e-ae83-ef3b3dd48539 from node k8s-agentpool1-11903559-1 successfully" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76641510-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" 
I0624 21:49:47.546398       1 cloudprovisioner.go:457]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76641510-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).UnpublishVolume" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "latency"=16604680114 
I0624 21:49:47.561958       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="7661cb93-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "latency"=16662819 
I0624 21:49:47.562010       1 attach_detach.go:319]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="7661cb93-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAttachDetach).triggerDetach.func3" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "latency"=16614021039 
I0624 21:49:47.562036       1 workflow.go:149]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="7661cb93-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAttachDetach).triggerDetach" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "latency"=16627082207 
I0624 21:49:47.565036       1 conditionwatcher.go:171] found a wait entry for object (pvc-702936c8-510b-416e-ae83-ef3b3dd48539-k8s-agentpool1-11903559-1-attachment)
I0624 21:49:47.565052       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:49:47.565100       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76641510-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "latency"=16637977848 
I0624 21:49:47.565131       1 crdprovisioner.go:796]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76641510-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForDetach" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "latency"=16638108049 
I0624 21:49:47.565156       1 crdprovisioner.go:675]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="8705f2d1-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).UnpublishVolume" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "latency"=16674621217 
I0624 21:49:47.565194       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=16.674674917 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-702936c8-510b-416e-ae83-ef3b3dd48539" node="k8s-agentpool1-11903559-1" result_code="succeeded"
I0624 21:49:47.565208       1 utils.go:85] GRPC response: {}
I0624 21:49:47.572894       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76641510-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "latency"=26242646 
I0624 21:49:47.573059       1 attach_detach.go:319]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76641510-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAttachDetach).triggerDetach.func3" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "latency"=16631358566 
I0624 21:49:47.573117       1 workflow.go:149]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76641510-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAttachDetach).triggerDetach" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "latency"=16642896213 
I0624 21:49:47.573642       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76621e7c-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "latency"=27639064 
I0624 21:49:47.573672       1 attach_detach.go:319]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76621e7c-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAttachDetach).triggerDetach.func3" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "latency"=16659049120 
I0624 21:49:47.573694       1 workflow.go:149]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76621e7c-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAttachDetach).triggerDetach" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "latency"=16674680620 
I0624 21:49:47.573940       1 conditionwatcher.go:171] found a wait entry for object (pvc-a4f0efff-7524-436c-b648-c261c57da76f-k8s-agentpool1-11903559-1-attachment)
I0624 21:49:47.573967       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:49:47.573975       1 conditionwatcher.go:171] found a wait entry for object (pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41-k8s-agentpool1-11903559-1-attachment)
I0624 21:49:47.573979       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:49:47.574012       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76621e7c-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "latency"=16673698508 
I0624 21:49:47.574059       1 crdprovisioner.go:796]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="76621e7c-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForDetach" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "latency"=16673986112 
I0624 21:49:47.574098       1 crdprovisioner.go:675]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="87038e66-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).UnpublishVolume" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "latency"=16699233135 
I0624 21:49:47.574139       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=16.699307936 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" node="k8s-agentpool1-11903559-1" result_code="succeeded"
I0624 21:49:47.574151       1 utils.go:85] GRPC response: {}
I0624 21:49:47.574232       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="7661cb93-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "latency"=16639389769 
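The "Workflow completed with success." entries above carry a request-id and a latency value as structured key="value" pairs; the raw latency numbers line up with the nearby latency_seconds metrics (16699233135 vs 16.699307936), so they are evidently nanoseconds. That makes it straightforward to pull per-request timings out of this log when chasing an intermittent failure. A small parsing sketch (field names copied from the lines above; this is a debugging aid, not part of the driver):

package main

import (
    "bufio"
    "fmt"
    "os"
    "regexp"
    "strconv"
    "time"
)

// Matches the structured fields emitted by the controller's klog output.
var (
    reqIDRe   = regexp.MustCompile(`"disk\.csi\.azure\.com/request-id"="([^"]+)"`)
    latencyRe = regexp.MustCompile(`"latency"=(\d+)`)
)

func main() {
    sc := bufio.NewScanner(os.Stdin)
    sc.Buffer(make([]byte, 1024*1024), 1024*1024) // log lines can be long
    for sc.Scan() {
        line := sc.Text()
        id := reqIDRe.FindStringSubmatch(line)
        lat := latencyRe.FindStringSubmatch(line)
        if id == nil || lat == nil {
            continue
        }
        ns, _ := strconv.ParseInt(lat[1], 10, 64)
        // Print request-id and latency as a human-readable duration.
        fmt.Printf("%s\t%v\n", id[1], time.Duration(ns))
    }
}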
... skipping 10 lines ...
I0624 21:49:55.272443       1 utils.go:78] GRPC call: /csi.v1.Controller/DeleteVolume
I0624 21:49:55.272874       1 utils.go:79] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a4f0efff-7524-436c-b648-c261c57da76f"}
I0624 21:49:55.273223       1 controllerserver_v2.go:200] deleting disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a4f0efff-7524-436c-b648-c261c57da76f)
I0624 21:49:55.273547       1 conditionwatcher.go:113] Adding a condition function for azvolume (pvc-a4f0efff-7524-436c-b648-c261c57da76f)
I0624 21:49:55.288295       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "disk.csi.azure.com/request-id"="958e8099-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "latency"=14547588 
I0624 21:49:55.289510       1 conditionwatcher.go:171] found a wait entry for object (pvc-a4f0efff-7524-436c-b648-c261c57da76f)
I0624 21:49:55.290342       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:49:55.299553       1 conditionwatcher.go:171] found a wait entry for object (pvc-a4f0efff-7524-436c-b648-c261c57da76f)
I0624 21:49:55.300707       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:49:55.330035       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "disk.csi.azure.com/request-id"="73f404d4-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "latency"=29726584 
I0624 21:49:55.331277       1 azvolume.go:249]  "msg"="Deleting Volume..." "csi.storage.k8s.io/pv/name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "disk.csi.azure.com/request-id"="73f404d4-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" 
I0624 21:49:55.331534       1 common.go:1683]  "msg"="AzVolumeAttachment clean up requested by azvolume-controller for AzVolume (pvc-a4f0efff-7524-436c-b648-c261c57da76f)" "csi.storage.k8s.io/pv/name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "disk.csi.azure.com/request-id"="73f404d4-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" 
I0624 21:49:55.331942       1 common.go:1788]  "msg"="Getting AzVolumeAttachment list for volume (pvc-a4f0efff-7524-436c-b648-c261c57da76f)" "csi.storage.k8s.io/pv/name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "disk.csi.azure.com/request-id"="73f404d4-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" 
I0624 21:49:55.330913       1 conditionwatcher.go:171] found a wait entry for object (pvc-a4f0efff-7524-436c-b648-c261c57da76f)
I0624 21:49:55.332334       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:49:55.332288       1 common.go:1817]  "msg"="Label selector is: disk.csi.azure.com/volume-name=pvc-a4f0efff-7524-436c-b648-c261c57da76f." "csi.storage.k8s.io/pv/name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "disk.csi.azure.com/request-id"="73f404d4-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" 
I0624 21:49:55.332749       1 common.go:1681]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "disk.csi.azure.com/request-id"="73f404d4-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*SharedState).cleanUpAzVolumeAttachmentByVolume" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "latency"=1217316 
I0624 21:49:55.379808       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:49:55.379845       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:49:55.379884       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:49:55.525331       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:49:57.533682       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:49:59.541697       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:50:00.569730       1 azure_managedDiskController.go:303] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a4f0efff-7524-436c-b648-c261c57da76f
I0624 21:50:00.570010       1 cloudprovisioner.go:328]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "disk.csi.azure.com/request-id"="73f404d4-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).DeleteVolume" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "latency"=5236850113 
I0624 21:50:00.581722       1 conditionwatcher.go:171] found a wait entry for object (pvc-a4f0efff-7524-436c-b648-c261c57da76f)
I0624 21:50:00.581738       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:50:00.581803       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "disk.csi.azure.com/request-id"="958e8099-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "latency"=5281891604 
I0624 21:50:00.581855       1 crdprovisioner.go:306]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "disk.csi.azure.com/request-id"="958e8099-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).DeleteVolume" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "latency"=5308278845 
I0624 21:50:00.581864       1 controllerserver_v2.go:202] delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a4f0efff-7524-436c-b648-c261c57da76f) returned with <nil>
I0624 21:50:00.581907       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=5.30865335 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a4f0efff-7524-436c-b648-c261c57da76f" result_code="succeeded"
I0624 21:50:00.581921       1 utils.go:85] GRPC response: {}
I0624 21:50:00.585845       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "disk.csi.azure.com/request-id"="73f404d4-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-a4f0efff-7524-436c-b648-c261c57da76f" "latency"=15753214 
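The /csi.v1.Controller/DeleteVolume exchange above is an ordinary CSI gRPC call, normally issued by the external-provisioner sidecar against the controller's socket. A minimal client sketch for reference (the socket path and credentials are assumptions; the volume ID is copied from the DeleteVolume request logged above; this is not how the e2e suite actually drives the driver):

package main

import (
    "context"
    "fmt"
    "time"

    "github.com/container-storage-interface/spec/lib/go/csi"
    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
)

func main() {
    // Assumed socket path; CSI controllers listen on a unix domain socket.
    conn, err := grpc.Dial(
        "unix:///csi/csi.sock",
        grpc.WithTransportCredentials(insecure.NewCredentials()),
    )
    if err != nil {
        panic(err)
    }
    defer conn.Close()

    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    defer cancel()

    client := csi.NewControllerClient(conn)
    resp, err := client.DeleteVolume(ctx, &csi.DeleteVolumeRequest{
        VolumeId: "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-a4f0efff-7524-436c-b648-c261c57da76f",
    })
    fmt.Println(resp, err) // an empty response with a nil error means the disk is gone
}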
... skipping 7 lines ...
I0624 21:50:03.097730       1 replica.go:156]  "msg"="Workflow completed with success." "disk.csi.azure.com/request-id"="87074600-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileReplica).triggerGarbageCollection" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "latency"=32198493029 
I0624 21:50:03.099048       1 utils.go:78] GRPC call: /csi.v1.Controller/DeleteVolume
I0624 21:50:03.099066       1 utils.go:79] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41"}
I0624 21:50:03.099130       1 controllerserver_v2.go:200] deleting disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41)
I0624 21:50:03.099176       1 conditionwatcher.go:113] Adding a condition function for azvolume (pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41)
I0624 21:50:03.106261       1 conditionwatcher.go:171] found a wait entry for object (pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41)
I0624 21:50:03.106274       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:50:03.111455       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "disk.csi.azure.com/request-id"="9a389934-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "latency"=12239267 
I0624 21:50:03.132046       1 conditionwatcher.go:171] found a wait entry for object (pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41)
I0624 21:50:03.132059       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:50:03.141911       1 conditionwatcher.go:171] found a wait entry for object (pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41)
I0624 21:50:03.141924       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:50:03.142621       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "disk.csi.azure.com/request-id"="73ebbd64-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "latency"=11510257 
I0624 21:50:03.142644       1 azvolume.go:249]  "msg"="Deleting Volume..." "csi.storage.k8s.io/pv/name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "disk.csi.azure.com/request-id"="73ebbd64-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" 
I0624 21:50:03.142691       1 common.go:1683]  "msg"="AzVolumeAttachment clean up requested by azvolume-controller for AzVolume (pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41)" "csi.storage.k8s.io/pv/name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "disk.csi.azure.com/request-id"="73ebbd64-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" 
I0624 21:50:03.142707       1 common.go:1788]  "msg"="Getting AzVolumeAttachment list for volume (pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41)" "csi.storage.k8s.io/pv/name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "disk.csi.azure.com/request-id"="73ebbd64-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" 
I0624 21:50:03.142737       1 common.go:1817]  "msg"="Label selector is: disk.csi.azure.com/volume-name=pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41." "csi.storage.k8s.io/pv/name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "disk.csi.azure.com/request-id"="73ebbd64-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" 
I0624 21:50:03.142788       1 common.go:1681]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "disk.csi.azure.com/request-id"="73ebbd64-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*SharedState).cleanUpAzVolumeAttachmentByVolume" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "latency"=99201 
... skipping 5 lines ...
I0624 21:50:08.384779       1 azure_managedDiskController.go:303] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41
I0624 21:50:08.384908       1 cloudprovisioner.go:328]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "disk.csi.azure.com/request-id"="73ebbd64-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).DeleteVolume" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "latency"=5242007935 
I0624 21:50:08.400205       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "disk.csi.azure.com/request-id"="73ebbd64-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "latency"=15214827 
I0624 21:50:08.400401       1 azvolume.go:257]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "disk.csi.azure.com/request-id"="73ebbd64-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerDelete.func4" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "latency"=5257655669 
I0624 21:50:08.400530       1 workflow.go:149]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "disk.csi.azure.com/request-id"="73ebbd64-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerDelete" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "latency"=5269418429 
I0624 21:50:08.401171       1 conditionwatcher.go:171] found a wait entry for object (pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41)
I0624 21:50:08.401195       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:50:08.401336       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "disk.csi.azure.com/request-id"="9a389934-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "latency"=5269598932 
I0624 21:50:08.401475       1 crdprovisioner.go:306]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "disk.csi.azure.com/request-id"="9a389934-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).DeleteVolume" "disk.csi.azure.com/volume-name"="pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" "latency"=5302185476 
I0624 21:50:08.401494       1 controllerserver_v2.go:202] delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41) returned with <nil>
I0624 21:50:08.401543       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=5.302374678 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-1fe37d46-f100-4e02-be1c-6e8fdce64f41" result_code="succeeded"
I0624 21:50:08.401556       1 utils.go:85] GRPC response: {}
I0624 21:50:09.584373       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
... skipping 5 lines ...
I0624 21:50:13.401750       1 replica.go:156]  "msg"="Workflow completed with success." "disk.csi.azure.com/request-id"="870c0679-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileReplica).triggerGarbageCollection" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "latency"=42471395305 
I0624 21:50:13.402178       1 utils.go:78] GRPC call: /csi.v1.Controller/DeleteVolume
I0624 21:50:13.402190       1 utils.go:79] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-702936c8-510b-416e-ae83-ef3b3dd48539"}
I0624 21:50:13.402307       1 controllerserver_v2.go:200] deleting disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-702936c8-510b-416e-ae83-ef3b3dd48539)
I0624 21:50:13.402360       1 conditionwatcher.go:113] Adding a condition function for azvolume (pvc-702936c8-510b-416e-ae83-ef3b3dd48539)
I0624 21:50:13.412361       1 conditionwatcher.go:171] found a wait entry for object (pvc-702936c8-510b-416e-ae83-ef3b3dd48539)
I0624 21:50:13.412378       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:50:13.415148       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "disk.csi.azure.com/request-id"="a05cbd78-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "latency"=12735591 
I0624 21:50:13.421836       1 conditionwatcher.go:171] found a wait entry for object (pvc-702936c8-510b-416e-ae83-ef3b3dd48539)
I0624 21:50:13.421896       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:50:13.430934       1 conditionwatcher.go:171] found a wait entry for object (pvc-702936c8-510b-416e-ae83-ef3b3dd48539)
I0624 21:50:13.430957       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:50:13.431735       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "disk.csi.azure.com/request-id"="73e8e09d-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "latency"=10847362 
I0624 21:50:13.431785       1 azvolume.go:249]  "msg"="Deleting Volume..." "csi.storage.k8s.io/pv/name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "disk.csi.azure.com/request-id"="73e8e09d-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" 
I0624 21:50:13.431839       1 common.go:1683]  "msg"="AzVolumeAttachment clean up requested by azvolume-controller for AzVolume (pvc-702936c8-510b-416e-ae83-ef3b3dd48539)" "csi.storage.k8s.io/pv/name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "disk.csi.azure.com/request-id"="73e8e09d-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" 
I0624 21:50:13.431856       1 common.go:1788]  "msg"="Getting AzVolumeAttachment list for volume (pvc-702936c8-510b-416e-ae83-ef3b3dd48539)" "csi.storage.k8s.io/pv/name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "disk.csi.azure.com/request-id"="73e8e09d-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" 
I0624 21:50:13.431891       1 common.go:1817]  "msg"="Label selector is: disk.csi.azure.com/volume-name=pvc-702936c8-510b-416e-ae83-ef3b3dd48539." "csi.storage.k8s.io/pv/name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "disk.csi.azure.com/request-id"="73e8e09d-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" 
I0624 21:50:13.431937       1 common.go:1681]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "disk.csi.azure.com/request-id"="73e8e09d-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*SharedState).cleanUpAzVolumeAttachmentByVolume" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "latency"=98102 
I0624 21:50:13.600914       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:50:15.609928       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:50:17.618444       1 leaderelection.go:278] successfully renewed lease kube-system/csi-azuredisk-controller
I0624 21:50:18.666610       1 azure_managedDiskController.go:303] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-702936c8-510b-416e-ae83-ef3b3dd48539
I0624 21:50:18.666740       1 cloudprovisioner.go:328]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "disk.csi.azure.com/request-id"="73e8e09d-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).DeleteVolume" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "latency"=5234687122 
I0624 21:50:18.682573       1 conditionwatcher.go:171] found a wait entry for object (pvc-702936c8-510b-416e-ae83-ef3b3dd48539)
I0624 21:50:18.682598       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:50:18.682637       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "disk.csi.azure.com/request-id"="a05cbd78-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "latency"=5260227047 
I0624 21:50:18.682686       1 crdprovisioner.go:306]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "disk.csi.azure.com/request-id"="a05cbd78-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).DeleteVolume" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "latency"=5280308946 
I0624 21:50:18.682699       1 controllerserver_v2.go:202] delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-702936c8-510b-416e-ae83-ef3b3dd48539) returned with <nil>
I0624 21:50:18.682729       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=5.280401848 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-702936c8-510b-416e-ae83-ef3b3dd48539" result_code="succeeded"
I0624 21:50:18.682742       1 utils.go:85] GRPC response: {}
I0624 21:50:18.683431       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "disk.csi.azure.com/request-id"="73e8e09d-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-702936c8-510b-416e-ae83-ef3b3dd48539" "latency"=16647289 
... skipping 19 lines ...
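
Each completed operation above ends with an "Observed Request Latency ... result_code" line. The real driver reports these through the cloud-provider-azure metrics package, so the sketch below is only an assumed, simplified equivalent showing the time-then-observe pattern with a Prometheus histogram; the metric name and label set are placeholders.

package metrics

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// Illustrative histogram; names and labels are assumptions, not the driver's real metric.
var requestLatency = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name: "azuredisk_csi_driver_operation_duration_seconds",
		Help: "Latency of controller operations such as create/delete/publish volume.",
	},
	[]string{"request", "resource_group", "result_code"},
)

func init() { prometheus.MustRegister(requestLatency) }

// ObserveOperation times an operation and records its outcome, mirroring the
// "Observed Request Latency ... result_code" log lines above.
func ObserveOperation(request, resourceGroup string, op func() error) error {
	start := time.Now()
	err := op()
	result := "succeeded"
	if err != nil {
		result = "failed"
	}
	requestLatency.WithLabelValues(request, resourceGroup, result).Observe(time.Since(start).Seconds())
	return err
}
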
I0624 21:50:24.683334       1 crdprovisioner.go:234]  "msg"="Creating AzVolume CRI" "csi.storage.k8s.io/pv/name"="pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b" "disk.csi.azure.com/request-id"="a7160eb3-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b" 
I0624 21:50:24.693525       1 crdprovisioner.go:242]  "msg"="Successfully created AzVolume CRI" "csi.storage.k8s.io/pv/name"="pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28" "disk.csi.azure.com/request-id"="a7129af2-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28" 
I0624 21:50:24.693628       1 conditionwatcher.go:113] Adding a condition function for azvolume (pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28)
I0624 21:50:24.695592       1 crdprovisioner.go:242]  "msg"="Successfully created AzVolume CRI" "csi.storage.k8s.io/pv/name"="pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b" "disk.csi.azure.com/request-id"="a7160eb3-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b" 
I0624 21:50:24.695828       1 conditionwatcher.go:113] Adding a condition function for azvolume (pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b)
I0624 21:50:24.700235       1 conditionwatcher.go:171] found a wait entry for object (pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b)
I0624 21:50:24.700345       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:50:24.704607       1 conditionwatcher.go:171] found a wait entry for object (pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28)
I0624 21:50:24.704700       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:50:24.708126       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28" "disk.csi.azure.com/request-id"="a7129af2-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28" "latency"=15149504 
I0624 21:50:24.708271       1 azvolume.go:157]  "msg"="Creating Volume..." "csi.storage.k8s.io/pv/name"="pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28" "disk.csi.azure.com/request-id"="a7129af2-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28" 
I0624 21:50:24.710156       1 conditionwatcher.go:171] found a wait entry for object (pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b)
I0624 21:50:24.710174       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
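
The "Adding a condition function", "found a wait entry", and "condition result: succeeded: ..." lines trace an add-condition-then-wait pattern: the provisioner registers a condition for a CRI object, the watcher re-evaluates it on every informer event, and a waiter blocks until it succeeds. The sketch below is a simplified assumption of that plumbing, not the driver's actual watcher package.

package watcher

import (
	"context"
	"fmt"
	"sync"
)

type ConditionFunc func(obj interface{}) (succeeded bool, err error)

type Watcher struct {
	mu      sync.Mutex
	waiters map[string]chan struct{}
	conds   map[string]ConditionFunc
}

func NewWatcher() *Watcher {
	return &Watcher{waiters: map[string]chan struct{}{}, conds: map[string]ConditionFunc{}}
}

// AddCondition registers a condition for an object ("Adding a condition function for azvolume ...").
func (w *Watcher) AddCondition(name string, cond ConditionFunc) <-chan struct{} {
	w.mu.Lock()
	defer w.mu.Unlock()
	done := make(chan struct{})
	w.waiters[name] = done
	w.conds[name] = cond
	return done
}

// HandleEvent is called from an informer handler; it re-runs the condition
// ("found a wait entry ... condition result: succeeded: ...").
func (w *Watcher) HandleEvent(name string, obj interface{}) {
	w.mu.Lock()
	defer w.mu.Unlock()
	cond, ok := w.conds[name]
	if !ok {
		return
	}
	if succeeded, err := cond(obj); err == nil && succeeded {
		close(w.waiters[name])
		delete(w.conds, name)
		delete(w.waiters, name)
	}
}

// Wait blocks until the condition succeeds or the context is cancelled.
func (w *Watcher) Wait(ctx context.Context, name string, done <-chan struct{}) error {
	select {
	case <-done:
		return nil
	case <-ctx.Done():
		return fmt.Errorf("timed out waiting for %s: %w", name, ctx.Err())
	}
}
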
I0624 21:50:24.712113       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b" "disk.csi.azure.com/request-id"="a7160eb3-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b" "latency"=15678394 
I0624 21:50:24.712181       1 azvolume.go:157]  "msg"="Creating Volume..." "csi.storage.k8s.io/pv/name"="pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b" "disk.csi.azure.com/request-id"="a7160eb3-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b" 
I0624 21:50:24.725047       1 azure_diskclient.go:139] Received error in disk.get.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28, error: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 404, RawError: {"error":{"code":"ResourceNotFound","message":"The Resource 'Microsoft.Compute/disks/pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28' under resource group 'kubetest-ybmpahy2' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"}}
I0624 21:50:24.725119       1 cloudprovisioner.go:246] begin to create disk(pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28) account type(Premium_LRS) rg(kubetest-ybmpahy2) location() size(10) selectedAvailabilityZone() maxShares(0)
I0624 21:50:24.726197       1 azure_diskclient.go:139] Received error in disk.get.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b, error: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 404, RawError: {"error":{"code":"ResourceNotFound","message":"The Resource 'Microsoft.Compute/disks/pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b' under resource group 'kubetest-ybmpahy2' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"}}
I0624 21:50:24.726396       1 cloudprovisioner.go:246] begin to create disk(pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b) account type(Premium_LRS) rg(kubetest-ybmpahy2) location() size(10) selectedAvailabilityZone() maxShares(0)
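
The two 404 "ResourceNotFound" responses above are expected: the provisioner GETs the disk first and treats "not found" as permission to create it. A simplified sketch of that check-then-create flow follows; DiskClient and EnsureDisk are assumed names for illustration, not the driver's real client types.

package provisioner

import (
	"context"
	"errors"
	"net/http"
)

// DiskClient is an assumed interface standing in for the Azure disks client.
type DiskClient interface {
	Get(ctx context.Context, resourceGroup, diskName string) (exists bool, statusCode int, err error)
	Create(ctx context.Context, resourceGroup, diskName string, sizeGiB int, sku string) error
}

var ErrDiskExists = errors.New("managed disk already exists")

// EnsureDisk creates the managed disk only when the GET reports 404.
func EnsureDisk(ctx context.Context, c DiskClient, rg, name string, sizeGiB int, sku string) error {
	exists, status, err := c.Get(ctx, rg, name)
	switch {
	case err != nil && status != http.StatusNotFound:
		return err // a real failure; surface it instead of creating blindly
	case exists:
		return ErrDiskExists // idempotent callers may treat this as success
	}
	// 404 means the disk does not exist yet; create it with the requested SKU and size.
	return c.Create(ctx, rg, name, sizeGiB, sku)
}
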
I0624 21:50:24.771747       1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b StorageAccountType:Premium_LRS Size:10
I0624 21:50:24.772176       1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28 StorageAccountType:Premium_LRS Size:10
I0624 21:50:25.380827       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:50:25.380828       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
I0624 21:50:25.380852       1 reflector.go:382] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:117: forcing resync
... skipping 2 lines ...
I0624 21:50:27.246639       1 cloudprovisioner.go:311]  "msg"="create disk(pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28) account type(Premium_LRS) rg(kubetest-ybmpahy2) location() size(10) tags(map[kubernetes.io-created-for-pv-name:pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28 kubernetes.io-created-for-pvc-name:pvc-cvtm8 kubernetes.io-created-for-pvc-namespace:azuredisk-5351]) successfully" "csi.storage.k8s.io/pv/name"="pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28" "disk.csi.azure.com/request-id"="a7129af2-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28" 
I0624 21:50:27.246673       1 cloudprovisioner.go:145]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28" "disk.csi.azure.com/request-id"="a7129af2-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).CreateVolume" "disk.csi.azure.com/volume-name"="pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28" "latency"=2538334849 
I0624 21:50:27.249511       1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b StorageAccountType:Premium_LRS Size:10
I0624 21:50:27.249775       1 cloudprovisioner.go:311]  "msg"="create disk(pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b) account type(Premium_LRS) rg(kubetest-ybmpahy2) location() size(10) tags(map[kubernetes.io-created-for-pv-name:pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b kubernetes.io-created-for-pvc-name:pvc-4d5k5 kubernetes.io-created-for-pvc-namespace:azuredisk-5351]) successfully" "csi.storage.k8s.io/pv/name"="pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b" "disk.csi.azure.com/request-id"="a7160eb3-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/volume-name"="pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b" 
I0624 21:50:27.249977       1 cloudprovisioner.go:145]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b" "disk.csi.azure.com/request-id"="a7160eb3-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CloudProvisioner).CreateVolume" "disk.csi.azure.com/volume-name"="pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b" "latency"=2537590871 
I0624 21:50:27.258051       1 conditionwatcher.go:171] found a wait entry for object (pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b)
I0624 21:50:27.258094       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:50:27.258923       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b" "disk.csi.azure.com/request-id"="a7160eb3-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b" "latency"=8765617 
I0624 21:50:27.259174       1 azvolume.go:165]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b" "disk.csi.azure.com/request-id"="a7160eb3-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerCreate.func3" "disk.csi.azure.com/volume-name"="pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b" "latency"=2546853193 
I0624 21:50:27.259211       1 workflow.go:149]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b" "disk.csi.azure.com/request-id"="a7160eb3-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerCreate" "disk.csi.azure.com/volume-name"="pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b" "latency"=2562784884 
I0624 21:50:27.258975       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b" "disk.csi.azure.com/request-id"="a7160eb3-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b" "latency"=2562046990 
I0624 21:50:27.259259       1 crdprovisioner.go:159]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b" "disk.csi.azure.com/request-id"="a7160eb3-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).CreateVolume" "disk.csi.azure.com/volume-name"="pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b" "latency"=2576047226 
I0624 21:50:27.259292       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=2.576110426 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b" result_code="succeeded"
I0624 21:50:27.259336       1 utils.go:85] GRPC response: {"volume":{"accessible_topology":[{"segments":{"topology.disk.csi.azure.com/zone":""}}],"capacity_bytes":10737418240,"content_source":{"Type":{"Volume":{}}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b","csi.storage.k8s.io/pvc/name":"pvc-4d5k5","csi.storage.k8s.io/pvc/namespace":"azuredisk-5351","requestedsizegib":"10","skuname":"Premium_LRS"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b"}}
I0624 21:50:27.263985       1 conditionwatcher.go:171] found a wait entry for object (pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28)
I0624 21:50:27.264135       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:50:27.264463       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28" "disk.csi.azure.com/request-id"="a7129af2-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28" "latency"=2570368608 
I0624 21:50:27.264876       1 crdprovisioner.go:159]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28" "disk.csi.azure.com/request-id"="a7129af2-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).CreateVolume" "disk.csi.azure.com/volume-name"="pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28" "latency"=2604375358 
I0624 21:50:27.265139       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=2.60463576 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28" result_code="succeeded"
I0624 21:50:27.265299       1 utils.go:85] GRPC response: {"volume":{"accessible_topology":[{"segments":{"topology.disk.csi.azure.com/zone":""}}],"capacity_bytes":10737418240,"content_source":{"Type":{"Volume":{}}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28","csi.storage.k8s.io/pvc/name":"pvc-cvtm8","csi.storage.k8s.io/pvc/namespace":"azuredisk-5351","requestedsizegib":"10","skuname":"Premium_LRS"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28"}}
I0624 21:50:27.264829       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28" "disk.csi.azure.com/request-id"="a7129af2-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28" "latency"=18113842 
I0624 21:50:27.265717       1 azvolume.go:165]  "msg"="Workflow completed with success." "csi.storage.k8s.io/pv/name"="pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28" "disk.csi.azure.com/request-id"="a7129af2-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcileAzVolume).triggerCreate.func3" "disk.csi.azure.com/volume-name"="pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28" "latency"=2557393903 
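
The GRPC responses above carry the volume ID, capacity, topology, and context that the external-provisioner turns into a PersistentVolume. A minimal sketch of assembling that response with the CSI spec Go bindings is shown below; the parameters are placeholders, not the values from this run.

package controller

import (
	"github.com/container-storage-interface/spec/lib/go/csi"
)

// newCreateVolumeResponse mirrors the shape of the GRPC response logged above.
func newCreateVolumeResponse(diskURI string, sizeBytes int64, params map[string]string, zone string) *csi.CreateVolumeResponse {
	return &csi.CreateVolumeResponse{
		Volume: &csi.Volume{
			VolumeId:      diskURI,   // full ARM resource ID of the managed disk
			CapacityBytes: sizeBytes, // e.g. 10 GiB -> 10737418240
			VolumeContext: params,    // skuname, requestedsizegib, csi.storage.k8s.io/* keys
			ContentSource: &csi.VolumeContentSource{Type: &csi.VolumeContentSource_Volume{}},
			AccessibleTopology: []*csi.Topology{
				{Segments: map[string]string{"topology.disk.csi.azure.com/zone": zone}},
			},
		},
	}
}
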
... skipping 3 lines ...
I0624 21:50:27.732671       1 utils.go:79] GRPC request: {"node_id":"k8s-agentpool1-11903559-1","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"cachingMode":"ReadWrite","fsType":"","kind":"Managed"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28"}
I0624 21:50:27.733298       1 utils.go:78] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I0624 21:50:27.733991       1 utils.go:79] GRPC request: {"node_id":"k8s-agentpool1-11903559-1","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":7}},"volume_context":{"cachingMode":"ReadWrite","fsType":"","kind":"Managed"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b"}
I0624 21:50:27.740576       1 conditionwatcher.go:113] Adding a condition function for azvolumeattachments (pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28-k8s-agentpool1-11903559-1-attachment)
I0624 21:50:27.745391       1 conditionwatcher.go:113] Adding a condition function for azvolumeattachments (pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b-k8s-agentpool1-11903559-1-attachment)
I0624 21:50:27.746458       1 conditionwatcher.go:171] found a wait entry for object (pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28-k8s-agentpool1-11903559-1-attachment)
I0624 21:50:27.746801       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:50:27.746743       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="a8e77db0-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28" "latency"=5652375 
I0624 21:50:27.747121       1 attach_detach.go:171]  "msg"="Attaching volume" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="a8e77db0-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28" 
I0624 21:50:27.753485       1 conditionwatcher.go:171] found a wait entry for object (pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b-k8s-agentpool1-11903559-1-attachment)
I0624 21:50:27.753506       1 conditionwatcher.go:179] condition result: succeeded: false, error: <nil>
I0624 21:50:27.753884       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="a8e79b3b-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b" "latency"=5970179 
I0624 21:50:27.753916       1 attach_detach.go:171]  "msg"="Attaching volume" "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="a8e79b3b-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b" 
I0624 21:50:27.868287       1 cloudprovisioner.go:397]  "msg"="GetDiskLun returned: -1. Initiating attaching volume \"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28\" to node \"k8s-agentpool1-11903559-1\"." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="a8e77db0-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28" 
I0624 21:50:27.868607       1 cloudprovisioner.go:411]  "msg"="Trying to attach volume \"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28\" to node \"k8s-agentpool1-11903559-1\"." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="a8e77db0-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28" 
I0624 21:50:27.868399       1 cloudprovisioner.go:397]  "msg"="GetDiskLun returned: -1. Initiating attaching volume \"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b\" to node \"k8s-agentpool1-11903559-1\"." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="a8e79b3b-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b" 
I0624 21:50:27.868828       1 cloudprovisioner.go:411]  "msg"="Trying to attach volume \"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b\" to node \"k8s-agentpool1-11903559-1\"." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="a8e79b3b-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/volume-name"="pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b" 
I0624 21:50:28.869182       1 batch.go:224] "cloud-provider-azure: Delayed processing of batch due to start delay" type="batch" operation="attach_disk" key="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e|kubetest-ybmpahy2|k8s-agentpool1-11903559-1" delay="1s"
I0624 21:50:28.869247       1 azure_controller_common.go:306] azuredisk - trying to attach disks to node k8s-agentpool1-11903559-1: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28:AttachDiskOptions{diskName: "pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28", lun: 1} /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b:AttachDiskOptions{diskName: "pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b", lun: 0}]
I0624 21:50:28.869298       1 azure_controller_standard.go:97] azureDisk - update(kubetest-ybmpahy2): vm(k8s-agentpool1-11903559-1) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28:AttachDiskOptions{diskName: "pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28", lun: 1} /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b:AttachDiskOptions{diskName: "pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b", lun: 0}])
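
The "Delayed processing of batch due to start delay ... delay=1s" line shows why both disks land in a single VM update: attach requests for the same node arriving within a short window are coalesced and sent as one disk list, each entry carrying its LUN. The sketch below is a simplified assumption of that per-node batching; the types and the attach callback are not the cloud provider's real API.

package attach

import (
	"context"
	"sync"
	"time"
)

type AttachDiskOptions struct {
	DiskName string
	Lun      int32
}

type Batcher struct {
	mu         sync.Mutex
	pending    map[string]map[string]*AttachDiskOptions // node -> diskURI -> options
	startDelay time.Duration
	attachFn   func(ctx context.Context, node string, disks map[string]*AttachDiskOptions) error
}

func NewBatcher(delay time.Duration, attach func(context.Context, string, map[string]*AttachDiskOptions) error) *Batcher {
	return &Batcher{pending: map[string]map[string]*AttachDiskOptions{}, startDelay: delay, attachFn: attach}
}

// Queue records one disk for a node; the first request for a node schedules a
// flush after the start delay so concurrent requests share one VM update.
func (b *Batcher) Queue(ctx context.Context, node, diskURI string, opts *AttachDiskOptions) {
	b.mu.Lock()
	defer b.mu.Unlock()
	if _, ok := b.pending[node]; !ok {
		b.pending[node] = map[string]*AttachDiskOptions{}
		time.AfterFunc(b.startDelay, func() { b.flush(ctx, node) })
	}
	b.pending[node][diskURI] = opts
}

func (b *Batcher) flush(ctx context.Context, node string) {
	b.mu.Lock()
	disks := b.pending[node]
	delete(b.pending, node)
	b.mu.Unlock()
	if len(disks) > 0 {
		_ = b.attachFn(ctx, node, disks) // one update attaches the whole batch to the VM
	}
}
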
I0624 21:50:28.878512       1 conditionwatcher.go:171] found a wait entry for object (pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28-k8s-agentpool1-11903559-1-attachment)
I0624 21:50:28.878529       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:50:28.878569       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="a8e77db0-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28" "latency"=1137779805 
I0624 21:50:28.878595       1 crdprovisioner.go:574]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="a8e77db0-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLun" "disk.csi.azure.com/volume-name"="pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28" "latency"=1138026007 
I0624 21:50:28.878650       1 crdprovisioner.go:410]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="a8e77db0-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).PublishVolume" "disk.csi.azure.com/volume-name"="pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28" "latency"=1145157603 
I0624 21:50:28.878679       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=1.145266604 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28" node="k8s-agentpool1-11903559-1" result_code="succeeded"
I0624 21:50:28.878690       1 utils.go:85] GRPC response: {"publish_context":{"LUN":"1"}}
I0624 21:50:28.879452       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="a8e77db0-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-3b32dc59-9f52-460d-bdcb-93d99e1f3f28" "latency"=9542227 
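
The {"publish_context":{"LUN":"1"}} response above is what the node plugin later uses during NodeStageVolume to locate the attached disk on the VM. A minimal sketch of building that response with the CSI bindings follows; the "LUN" key matches the log, everything else is illustrative.

package controller

import (
	"strconv"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// newPublishResponse returns the LUN assigned during ControllerPublishVolume.
func newPublishResponse(lun int32) *csi.ControllerPublishVolumeResponse {
	return &csi.ControllerPublishVolumeResponse{
		PublishContext: map[string]string{
			"LUN": strconv.Itoa(int(lun)), // node side uses this to find the device for the disk
		},
	}
}
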
I0624 21:50:28.883555       1 azure_disk_utils.go:905]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="a8e79b3b-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azureutils.UpdateCRIWithRetry" "disk.csi.azure.com/volume-name"="pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b" "latency"=13941386 
I0624 21:50:28.898977       1 conditionwatcher.go:171] found a wait entry for object (pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b-k8s-agentpool1-11903559-1-attachment)
I0624 21:50:28.898999       1 conditionwatcher.go:179] condition result: succeeded: true, error: <nil>
I0624 21:50:28.899032       1 conditionwaiter.go:49]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="a8e79b3b-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b" "latency"=1152463800 
I0624 21:50:28.899073       1 crdprovisioner.go:574]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="a8e79b3b-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLun" "disk.csi.azure.com/volume-name"="pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b" "latency"=1153692817 
I0624 21:50:28.899099       1 crdprovisioner.go:410]  "msg"="Workflow completed with success." "disk.csi.azure.com/node-name"="k8s-agentpool1-11903559-1" "disk.csi.azure.com/request-id"="a8e79b3b-f407-11ec-88aa-0022483e7c98" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).PublishVolume" "disk.csi.azure.com/volume-name"="pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b" "latency"=1164878566 
I0624 21:50:28.899141       1 azure_metrics.go:114] "Observed Request Latency" latency_seconds=1.164937268 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-ybmpahy2" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-ybmpahy2/providers/Microsoft.Compute/disks/pvc-8085d26d-96b2-4cbb-ac4c-6fe92afb496b" node="k8s-agentpool1-11903559-1" result_