PR: andyzhangx: feat: add fsGroupChangePolicy for nfs protocol
Result: Not Finished
Started: 2022-05-15 03:54
Revision:
Refs: 1013

Build Still Running!


31 Passed Tests

3 Skipped Tests

Error lines from build-log.txt

... skipping 673 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
# Deploy CAPI
curl --retry 3 -sSL https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.1.2/cluster-api-components.yaml | /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/envsubst-v2.0.0-20210730161058-179042472c46 | kubectl apply -f -
namespace/capi-system created
customresourcedefinition.apiextensions.k8s.io/clusterclasses.cluster.x-k8s.io created
... skipping 132 lines ...
# Wait for the kubeconfig to become available.
timeout --foreground 300 bash -c "while ! kubectl get secrets | grep capz-gxfhvh-kubeconfig; do sleep 1; done"
capz-gxfhvh-kubeconfig                 cluster.x-k8s.io/secret               1      1s
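The wait above uses a common poll-until-ready pattern: `timeout --foreground` bounds a `bash -c` loop that retries once per second until the polled command succeeds. A minimal, self-contained sketch of the same shape, polling a local file instead of a kubectl resource so it runs anywhere:

```shell
# Poll-until-ready pattern; in the job above the polled command is
# `kubectl get secrets | grep capz-gxfhvh-kubeconfig`. Here a temp file
# stands in for the resource, so the loop exits on its first check.
flag=$(mktemp)
timeout --foreground 10 bash -c "while ! ls '$flag' >/dev/null 2>&1; do sleep 1; done" \
  && echo "condition met"
```

If the resource never appears, `timeout` kills the loop after the deadline and returns a non-zero status, which fails the make target instead of hanging the job.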
# Get kubeconfig and store it locally.
kubectl get secrets capz-gxfhvh-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
timeout --foreground 600 bash -c "while ! kubectl --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
error: the server doesn't have a resource type "nodes"
capz-gxfhvh-control-plane-465sh   NotReady   control-plane,master   1s    v1.22.1
run "kubectl --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
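The kubeconfig retrieval above works because Kubernetes stores all secret data base64-encoded: `jq -r .data.value` pulls the encoded kubeconfig out of the secret JSON, and `base64 --decode` recovers the original text. The decode step in isolation, with a stand-in payload since no cluster is available here:

```shell
# Secret data round-trip: encode a stand-in kubeconfig the way the API
# server stores it, then decode it the way the script above does.
encoded=$(printf 'apiVersion: v1\nkind: Config\n' | base64 | tr -d '\n')
printf '%s' "$encoded" | base64 --decode
```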
Waiting for 1 control plane machine(s), 2 worker machine(s), and  windows machine(s) to become Ready
node/capz-gxfhvh-control-plane-465sh condition met
node/capz-gxfhvh-md-0-2wtxt condition met
... skipping 35 lines ...

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 11156  100 11156    0     0   198k      0 --:--:-- --:--:-- --:--:--  198k
Downloading https://get.helm.sh/helm-v3.8.2-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
docker pull capzci.azurecr.io/azurefile-csi:e2e-f3af306c6b7eaacabc95cb898421921e264dc1de || make container-all push-manifest
Error response from daemon: manifest for capzci.azurecr.io/azurefile-csi:e2e-f3af306c6b7eaacabc95cb898421921e264dc1de not found: manifest unknown: manifest tagged by "e2e-f3af306c6b7eaacabc95cb898421921e264dc1de" is not found
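The `docker pull … || make container-all push-manifest` line is a pull-or-build fallback: if the commit-tagged image is not yet in the registry (the `manifest unknown` error above), the `||` branch builds and pushes it. The shape of the pattern, with stand-in functions so it runs without Docker:

```shell
# Stand-ins: try_pull mimics a failed `docker pull` (tag not in the registry
# yet); build_and_push mimics the `make container-all push-manifest` fallback.
try_pull() { return 1; }
build_and_push() { echo "building and pushing image"; }
try_pull || build_and_push
```

On a later run for the same commit the pull succeeds, the `||` branch is skipped, and the rebuild is avoided.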
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver'
CGO_ENABLED=0 GOOS=windows go build -a -ldflags "-X sigs.k8s.io/azurefile-csi-driver/pkg/azurefile.driverVersion=e2e-f3af306c6b7eaacabc95cb898421921e264dc1de -X sigs.k8s.io/azurefile-csi-driver/pkg/azurefile.gitCommit=f3af306c6b7eaacabc95cb898421921e264dc1de -X sigs.k8s.io/azurefile-csi-driver/pkg/azurefile.buildDate=2022-05-15T04:07:12Z -s -w -extldflags '-static'" -mod vendor -o _output/amd64/azurefileplugin.exe ./pkg/azurefileplugin
docker buildx rm container-builder || true
error: no builder "container-builder" found
docker buildx create --use --name=container-builder
container-builder
# enable qemu for arm64 build
# https://github.com/docker/buildx/issues/464#issuecomment-741507760
docker run --privileged --rm tonistiigi/binfmt --uninstall qemu-aarch64
Unable to find image 'tonistiigi/binfmt:latest' locally
... skipping 1798 lines ...
                    type: string
                type: object
                oneOf:
                - required: ["persistentVolumeClaimName"]
                - required: ["volumeSnapshotContentName"]
              volumeSnapshotClassName:
                description: 'VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default Volume SnapshotClasses: one default per CSI Driver. If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exist for a given CSI Driver and more than one have been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field.'
                type: string
            required:
            - source
            type: object
          status:
            description: status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.
... skipping 2 lines ...
                description: 'boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.'
                type: string
              creationTime:
                description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown.
                format: date-time
                type: string
              error:
                description: error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers(i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurs during the snapshot creation. Upon success, this error field will be cleared.
                properties:
                  message:
                    description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.'
                    type: string
                  time:
                    description: time is the timestamp when the error was encountered.
                    format: date-time
                    type: string
                type: object
              readyToUse:
                description: readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown.
                type: boolean
              restoreSize:
                type: string
                description: restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown.
                pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
                x-kubernetes-int-or-string: true
            type: object
        required:
        - spec
        type: object
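Per the `oneOf` constraint in the CRD above, a VolumeSnapshot's `source` must set exactly one of `persistentVolumeClaimName` or `volumeSnapshotContentName`. A minimal manifest satisfying that validation (all resource names are hypothetical); in practice it would be piped to `kubectl apply -f -` rather than just printed:

```shell
# Hypothetical VolumeSnapshot manifest: exactly one source field is set,
# which satisfies the CRD's oneOf check. Printed here, not applied.
cat <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: demo-snapshot
spec:
  volumeSnapshotClassName: demo-snapclass
  source:
    persistentVolumeClaimName: demo-pvc
EOF
```

Setting both source fields, or neither, would be rejected by the API server's schema validation before the snapshot controller ever sees the object.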
... skipping 60 lines ...
                    type: string
                  volumeSnapshotContentName:
                    description: volumeSnapshotContentName specifies the name of a pre-existing VolumeSnapshotContent object representing an existing volume snapshot. This field should be set if the snapshot already exists and only needs a representation in Kubernetes. This field is immutable.
                    type: string
                type: object
              volumeSnapshotClassName:
                description: 'VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default Volume SnapshotClasses: one default per CSI Driver. If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exist for a given CSI Driver and more than one have been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field.'
                type: string
            required:
            - source
            type: object
          status:
            description: status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.
... skipping 2 lines ...
                description: 'boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.'
                type: string
              creationTime:
                description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown.
                format: date-time
                type: string
              error:
                description: error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers(i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurs during the snapshot creation. Upon success, this error field will be cleared.
                properties:
                  message:
                    description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.'
                    type: string
                  time:
                    description: time is the timestamp when the error was encountered.
                    format: date-time
                    type: string
                type: object
              readyToUse:
                description: readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown.
                type: boolean
              restoreSize:
                type: string
                description: restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown.
                pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
                x-kubernetes-int-or-string: true
            type: object
        required:
        - spec
        type: object
... skipping 254 lines ...
            description: status represents the current information of a snapshot.
            properties:
              creationTime:
                description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. On Unix, the command `date +%s%N` returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC.
                format: int64
                type: integer
              error:
                description: error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared.
                properties:
                  message:
                    description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.'
                    type: string
                  time:
                    description: time is the timestamp when the error was encountered.
                    format: date-time
                    type: string
                type: object
              readyToUse:
                description: readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown.
                type: boolean
              restoreSize:
                description: restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown.
                format: int64
                minimum: 0
                type: integer
              snapshotHandle:
                description: snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress.
                type: string
            type: object
        required:
        - spec
        type: object
    served: true
... skipping 108 lines ...
            description: status represents the current information of a snapshot.
            properties:
              creationTime:
                description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. On Unix, the command `date +%s%N` returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC.
                format: int64
                type: integer
              error:
                description: error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared.
                properties:
                  message:
                    description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.'
                    type: string
                  time:
                    description: time is the timestamp when the error was encountered.
                    format: date-time
                    type: string
                type: object
              readyToUse:
                description: readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown.
                type: boolean
              restoreSize:
                description: restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown.
                format: int64
                minimum: 0
                type: integer
              snapshotHandle:
                description: snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress.
                type: string
            type: object
        required:
        - spec
        type: object
    served: true
... skipping 938 lines ...
          image: "mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.4.0"
          args:
            - "-csi-address=$(ADDRESS)"
            - "-v=2"
            - "-leader-election"
            - "--leader-election-namespace=kube-system"
            - '-handle-volume-inuse-error=false'
            - '-timeout=120s'
            - '-feature-gates=RecoverVolumeExpansionFailure=true'
          env:
            - name: ADDRESS
              value: /csi/csi.sock
          imagePullPolicy: IfNotPresent
... skipping 209 lines ...
Pre-Provisioned 
  should use a pre-provisioned volume and mount it as readOnly in a pod [file.csi.azure.com] [Windows]
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:77
STEP: Creating a kubernetes client
May 15 04:18:43.796: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azurefile
W0515 04:18:43.798103   36490 azure.go:78] InitializeCloudFromSecret: failed to get cloud config from secret /: failed to get secret /: resource name may not be empty
I0515 04:18:43.799137   36490 driver.go:93] Enabling controller service capability: CREATE_DELETE_VOLUME
I0515 04:18:43.799161   36490 driver.go:93] Enabling controller service capability: PUBLISH_UNPUBLISH_VOLUME
I0515 04:18:43.799166   36490 driver.go:93] Enabling controller service capability: CREATE_DELETE_SNAPSHOT
I0515 04:18:43.799170   36490 driver.go:93] Enabling controller service capability: EXPAND_VOLUME
I0515 04:18:43.799174   36490 driver.go:93] Enabling controller service capability: SINGLE_NODE_MULTI_WRITER
I0515 04:18:43.799179   36490 driver.go:112] Enabling volume access mode: SINGLE_NODE_WRITER
... skipping 23 lines ...
May 15 04:19:11.022: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-6pm8f] to have phase Bound
May 15 04:19:11.129: INFO: PersistentVolumeClaim pvc-6pm8f found and phase=Bound (106.944434ms)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with an error
May 15 04:19:11.452: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-msxh9" in namespace "azurefile-8081" to be "Error status code"
May 15 04:19:11.559: INFO: Pod "azurefile-volume-tester-msxh9": Phase="Pending", Reason="", readiness=false. Elapsed: 107.170388ms
May 15 04:19:13.676: INFO: Pod "azurefile-volume-tester-msxh9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223360399s
May 15 04:19:15.793: INFO: Pod "azurefile-volume-tester-msxh9": Phase="Failed", Reason="", readiness=false. Elapsed: 4.340869196s
STEP: Saw pod failure
May 15 04:19:15.793: INFO: Pod "azurefile-volume-tester-msxh9" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
May 15 04:19:15.924: INFO: deleting Pod "azurefile-8081"/"azurefile-volume-tester-msxh9"
May 15 04:19:16.034: INFO: Pod azurefile-volume-tester-msxh9 has the following logs: /bin/sh: can't create /mnt/test-1/data: Read-only file system

STEP: Deleting pod azurefile-volume-tester-msxh9 in namespace azurefile-8081
May 15 04:19:16.155: INFO: deleting PVC "azurefile-8081"/"pvc-6pm8f"
... skipping 37 lines ...
May 15 04:19:20.788: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-sbzns] to have phase Bound
May 15 04:19:20.895: INFO: PersistentVolumeClaim pvc-sbzns found and phase=Bound (106.860576ms)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
May 15 04:19:21.224: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-2mx6r" in namespace "azurefile-2540" to be "Succeeded or Failed"
May 15 04:19:21.331: INFO: Pod "azurefile-volume-tester-2mx6r": Phase="Pending", Reason="", readiness=false. Elapsed: 106.920641ms
May 15 04:19:23.448: INFO: Pod "azurefile-volume-tester-2mx6r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.223838979s
STEP: Saw pod success
May 15 04:19:23.448: INFO: Pod "azurefile-volume-tester-2mx6r" satisfied condition "Succeeded or Failed"
STEP: setting up the PV
STEP: creating a PV
STEP: setting up the PVC
STEP: creating a PVC
STEP: waiting for PVC to be in phase "Bound"
May 15 04:19:23.664: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-tzzb7] to have phase Bound
May 15 04:19:23.771: INFO: PersistentVolumeClaim pvc-tzzb7 found and phase=Bound (106.889847ms)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
May 15 04:19:24.094: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-klhbh" in namespace "azurefile-2540" to be "Succeeded or Failed"
May 15 04:19:24.202: INFO: Pod "azurefile-volume-tester-klhbh": Phase="Pending", Reason="", readiness=false. Elapsed: 107.694027ms
May 15 04:19:26.316: INFO: Pod "azurefile-volume-tester-klhbh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.221587028s
STEP: Saw pod success
May 15 04:19:26.316: INFO: Pod "azurefile-volume-tester-klhbh" satisfied condition "Succeeded or Failed"
STEP: setting up the PV
STEP: creating a PV
STEP: setting up the PVC
STEP: creating a PVC
STEP: waiting for PVC to be in phase "Bound"
May 15 04:19:26.531: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-j59hh] to have phase Bound
May 15 04:19:26.638: INFO: PersistentVolumeClaim pvc-j59hh found and phase=Bound (107.028697ms)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
May 15 04:19:26.962: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-bhf6p" in namespace "azurefile-2540" to be "Succeeded or Failed"
May 15 04:19:27.068: INFO: Pod "azurefile-volume-tester-bhf6p": Phase="Pending", Reason="", readiness=false. Elapsed: 106.611872ms
May 15 04:19:29.182: INFO: Pod "azurefile-volume-tester-bhf6p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.22005041s
STEP: Saw pod success
May 15 04:19:29.182: INFO: Pod "azurefile-volume-tester-bhf6p" satisfied condition "Succeeded or Failed"
STEP: setting up the PV
STEP: creating a PV
STEP: setting up the PVC
STEP: creating a PVC
STEP: waiting for PVC to be in phase "Bound"
May 15 04:19:29.398: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-bd69t] to have phase Bound
May 15 04:19:29.504: INFO: PersistentVolumeClaim pvc-bd69t found and phase=Bound (106.959976ms)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
May 15 04:19:29.827: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-qz6mj" in namespace "azurefile-2540" to be "Succeeded or Failed"
May 15 04:19:29.935: INFO: Pod "azurefile-volume-tester-qz6mj": Phase="Pending", Reason="", readiness=false. Elapsed: 107.199523ms
May 15 04:19:32.049: INFO: Pod "azurefile-volume-tester-qz6mj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.222008158s
STEP: Saw pod success
May 15 04:19:32.050: INFO: Pod "azurefile-volume-tester-qz6mj" satisfied condition "Succeeded or Failed"
STEP: setting up the PV
STEP: creating a PV
STEP: setting up the PVC
STEP: creating a PVC
STEP: waiting for PVC to be in phase "Bound"
May 15 04:19:32.266: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-r9lhk] to have phase Bound
May 15 04:19:32.373: INFO: PersistentVolumeClaim pvc-r9lhk found and phase=Bound (106.656802ms)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
May 15 04:19:32.696: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-s94zb" in namespace "azurefile-2540" to be "Succeeded or Failed"
May 15 04:19:32.804: INFO: Pod "azurefile-volume-tester-s94zb": Phase="Pending", Reason="", readiness=false. Elapsed: 108.458239ms
May 15 04:19:34.919: INFO: Pod "azurefile-volume-tester-s94zb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.223555061s
STEP: Saw pod success
May 15 04:19:34.920: INFO: Pod "azurefile-volume-tester-s94zb" satisfied condition "Succeeded or Failed"
STEP: setting up the PV
STEP: creating a PV
STEP: setting up the PVC
STEP: creating a PVC
STEP: waiting for PVC to be in phase "Bound"
May 15 04:19:35.138: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-xmj4b] to have phase Bound
May 15 04:19:35.245: INFO: PersistentVolumeClaim pvc-xmj4b found and phase=Bound (107.201066ms)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
May 15 04:19:35.573: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-rw79s" in namespace "azurefile-2540" to be "Succeeded or Failed"
May 15 04:19:35.700: INFO: Pod "azurefile-volume-tester-rw79s": Phase="Pending", Reason="", readiness=false. Elapsed: 127.023299ms
May 15 04:19:37.814: INFO: Pod "azurefile-volume-tester-rw79s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.241231388s
STEP: Saw pod success
May 15 04:19:37.814: INFO: Pod "azurefile-volume-tester-rw79s" satisfied condition "Succeeded or Failed"
May 15 04:19:37.814: INFO: deleting Pod "azurefile-2540"/"azurefile-volume-tester-rw79s"
May 15 04:19:37.924: INFO: Pod azurefile-volume-tester-rw79s has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-rw79s in namespace azurefile-2540
May 15 04:19:38.154: INFO: deleting PVC "azurefile-2540"/"pvc-xmj4b"
May 15 04:19:38.154: INFO: Deleting PersistentVolumeClaim "pvc-xmj4b"
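The pre-provisioned cases above repeat a static-provisioning cycle: create a PV that points the `file.csi.azure.com` driver at an existing file share, bind a PVC to it, then run a test pod against the claim. A hedged sketch of such a PV (all names, the `volumeHandle` value, and the secret are hypothetical placeholders, and the `resource-group#account#share` handle format is an assumption, not taken from this log):

```shell
# Hypothetical static PV for an existing Azure Files share; printed, not applied.
cat <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-static-pv
spec:
  capacity:
    storage: 10Gi
  accessModes: [ReadWriteMany]
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: file.csi.azure.com
    volumeHandle: demo-rg#demoaccount#demoshare   # assumed resource-group#account#share format
    nodeStageSecretRef:
      name: azure-storage-secret                  # hypothetical secret with account name/key
      namespace: default
EOF
```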
... skipping 143 lines ...
May 15 04:19:50.592: INFO: PersistentVolumeClaim pvc-v9p7k found but phase is Pending instead of Bound.
May 15 04:19:52.701: INFO: PersistentVolumeClaim pvc-v9p7k found and phase=Bound (2.21530026s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
May 15 04:19:53.028: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-57ct5" in namespace "azurefile-5466" to be "Succeeded or Failed"
May 15 04:19:53.135: INFO: Pod "azurefile-volume-tester-57ct5": Phase="Pending", Reason="", readiness=false. Elapsed: 107.48763ms
May 15 04:19:55.249: INFO: Pod "azurefile-volume-tester-57ct5": Phase="Running", Reason="", readiness=true. Elapsed: 2.2212206s
May 15 04:19:57.364: INFO: Pod "azurefile-volume-tester-57ct5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.336237764s
STEP: Saw pod success
May 15 04:19:57.364: INFO: Pod "azurefile-volume-tester-57ct5" satisfied condition "Succeeded or Failed"
May 15 04:19:57.364: INFO: deleting Pod "azurefile-5466"/"azurefile-volume-tester-57ct5"
May 15 04:19:57.475: INFO: Pod azurefile-volume-tester-57ct5 has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-57ct5 in namespace azurefile-5466
May 15 04:19:57.905: INFO: deleting PVC "azurefile-5466"/"pvc-v9p7k"
May 15 04:19:57.905: INFO: Deleting PersistentVolumeClaim "pvc-v9p7k"
... skipping 33 lines ...
May 15 04:20:02.162: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-vc8d7] to have phase Bound
May 15 04:20:02.268: INFO: PersistentVolumeClaim pvc-vc8d7 found and phase=Bound (106.674616ms)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
May 15 04:20:02.591: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-lznxh" in namespace "azurefile-2790" to be "Succeeded or Failed"
May 15 04:20:02.993: INFO: Pod "azurefile-volume-tester-lznxh": Phase="Pending", Reason="", readiness=false. Elapsed: 401.677709ms
May 15 04:20:05.106: INFO: Pod "azurefile-volume-tester-lznxh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.514586301s
May 15 04:20:07.219: INFO: Pod "azurefile-volume-tester-lznxh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.628068835s
STEP: Saw pod success
May 15 04:20:07.219: INFO: Pod "azurefile-volume-tester-lznxh" satisfied condition "Succeeded or Failed"
May 15 04:20:07.219: INFO: deleting Pod "azurefile-2790"/"azurefile-volume-tester-lznxh"
May 15 04:20:07.343: INFO: Pod azurefile-volume-tester-lznxh has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-lznxh in namespace azurefile-2790
May 15 04:20:07.461: INFO: deleting PVC "azurefile-2790"/"pvc-vc8d7"
May 15 04:20:07.461: INFO: Deleting PersistentVolumeClaim "pvc-vc8d7"
... skipping 134 lines ...
May 15 04:22:19.891: INFO: PersistentVolumeClaim pvc-2xllp found but phase is Pending instead of Bound.
May 15 04:22:21.999: INFO: PersistentVolumeClaim pvc-2xllp found and phase=Bound (1m41.359743007s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
May 15 04:22:22.324: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-twtj6" in namespace "azurefile-5194" to be "Succeeded or Failed"
May 15 04:22:22.432: INFO: Pod "azurefile-volume-tester-twtj6": Phase="Pending", Reason="", readiness=false. Elapsed: 107.494301ms
May 15 04:22:24.547: INFO: Pod "azurefile-volume-tester-twtj6": Phase="Running", Reason="", readiness=true. Elapsed: 2.222269604s
May 15 04:22:26.662: INFO: Pod "azurefile-volume-tester-twtj6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.337802134s
STEP: Saw pod success
May 15 04:22:26.662: INFO: Pod "azurefile-volume-tester-twtj6" satisfied condition "Succeeded or Failed"
May 15 04:22:26.662: INFO: deleting Pod "azurefile-5194"/"azurefile-volume-tester-twtj6"
May 15 04:22:26.782: INFO: Pod azurefile-volume-tester-twtj6 has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-twtj6 in namespace azurefile-5194
May 15 04:22:26.903: INFO: deleting PVC "azurefile-5194"/"pvc-2xllp"
May 15 04:22:26.903: INFO: Deleting PersistentVolumeClaim "pvc-2xllp"
... skipping 34 lines ...
May 15 04:22:29.648: INFO: PersistentVolumeClaim pvc-qwrks found but phase is Pending instead of Bound.
May 15 04:22:31.757: INFO: PersistentVolumeClaim pvc-qwrks found and phase=Bound (2.216590392s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
May 15 04:22:32.082: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-wrh6h" in namespace "azurefile-1353" to be "Succeeded or Failed"
May 15 04:22:32.190: INFO: Pod "azurefile-volume-tester-wrh6h": Phase="Pending", Reason="", readiness=false. Elapsed: 108.075158ms
May 15 04:22:34.304: INFO: Pod "azurefile-volume-tester-wrh6h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.222398536s
STEP: Saw pod success
May 15 04:22:34.304: INFO: Pod "azurefile-volume-tester-wrh6h" satisfied condition "Succeeded or Failed"
May 15 04:22:34.304: INFO: deleting Pod "azurefile-1353"/"azurefile-volume-tester-wrh6h"
May 15 04:22:34.418: INFO: Pod azurefile-volume-tester-wrh6h has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-wrh6h in namespace azurefile-1353
May 15 04:22:34.539: INFO: deleting PVC "azurefile-1353"/"pvc-qwrks"
May 15 04:22:34.539: INFO: Deleting PersistentVolumeClaim "pvc-qwrks"
... skipping 128 lines ...
May 15 04:24:26.445: INFO: PersistentVolumeClaim pvc-nz44v found but phase is Pending instead of Bound.
May 15 04:24:28.554: INFO: PersistentVolumeClaim pvc-nz44v found and phase=Bound (21.206213642s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pod's command exits with an error
May 15 04:24:28.881: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-fpf46" in namespace "azurefile-156" to be "Error status code"
May 15 04:24:28.989: INFO: Pod "azurefile-volume-tester-fpf46": Phase="Pending", Reason="", readiness=false. Elapsed: 108.08016ms
May 15 04:24:31.104: INFO: Pod "azurefile-volume-tester-fpf46": Phase="Failed", Reason="", readiness=false. Elapsed: 2.222286246s
STEP: Saw pod failure
May 15 04:24:31.104: INFO: Pod "azurefile-volume-tester-fpf46" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
May 15 04:24:31.218: INFO: deleting Pod "azurefile-156"/"azurefile-volume-tester-fpf46"
May 15 04:24:31.347: INFO: Pod azurefile-volume-tester-fpf46 has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azurefile-volume-tester-fpf46 in namespace azurefile-156
May 15 04:24:31.466: INFO: deleting PVC "azurefile-156"/"pvc-nz44v"
... skipping 195 lines ...
May 15 04:26:25.222: INFO: PersistentVolumeClaim pvc-8qrv9 found but phase is Pending instead of Bound.
May 15 04:26:27.330: INFO: PersistentVolumeClaim pvc-8qrv9 found and phase=Bound (2.21887357s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
May 15 04:26:27.657: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-6hcxn" in namespace "azurefile-2546" to be "Succeeded or Failed"
May 15 04:26:27.766: INFO: Pod "azurefile-volume-tester-6hcxn": Phase="Pending", Reason="", readiness=false. Elapsed: 108.618937ms
May 15 04:26:29.880: INFO: Pod "azurefile-volume-tester-6hcxn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.223357097s
STEP: Saw pod success
May 15 04:26:29.880: INFO: Pod "azurefile-volume-tester-6hcxn" satisfied condition "Succeeded or Failed"
STEP: resizing the pvc
STEP: sleep 30s waiting for resize complete
STEP: checking the resizing result
STEP: checking the resizing PV result
STEP: checking the resizing azurefile result
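The resize steps above expand the PVC, wait ~30s, then verify that the PVC, the PV, and the Azure file share all reflect the new size. Azure file share quotas are whole GiB, so the final check amounts to rounding the requested bytes up and comparing against the quota; a hedged sketch with assumed names:

```go
package main

import "fmt"

const giB = int64(1) << 30

// resizeSatisfied reports whether an expanded file share's quota (in GiB)
// covers the newly requested capacity in bytes. Azure file share quotas are
// whole GiB, so the request is rounded up. Illustrative helper, not driver code.
func resizeSatisfied(requestedBytes, quotaGiB int64) bool {
	neededGiB := (requestedBytes + giB - 1) / giB
	return quotaGiB >= neededGiB
}

func main() {
	// 15 GiB plus one byte rounds up to a 16 GiB requirement.
	fmt.Println(resizeSatisfied(15*giB+1, 16))
}
```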
May 15 04:27:00.719: INFO: deleting Pod "azurefile-2546"/"azurefile-volume-tester-6hcxn"
... skipping 39 lines ...
May 15 04:27:03.708: INFO: PersistentVolumeClaim pvc-bh9z4 found but phase is Pending instead of Bound.
May 15 04:27:05.816: INFO: PersistentVolumeClaim pvc-bh9z4 found and phase=Bound (2.216387349s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
May 15 04:27:06.143: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-46v5m" in namespace "azurefile-1598" to be "Succeeded or Failed"
May 15 04:27:06.252: INFO: Pod "azurefile-volume-tester-46v5m": Phase="Pending", Reason="", readiness=false. Elapsed: 108.954108ms
May 15 04:27:08.361: INFO: Pod "azurefile-volume-tester-46v5m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217701763s
May 15 04:27:10.472: INFO: Pod "azurefile-volume-tester-46v5m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329675945s
May 15 04:27:12.581: INFO: Pod "azurefile-volume-tester-46v5m": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438409817s
May 15 04:27:14.690: INFO: Pod "azurefile-volume-tester-46v5m": Phase="Pending", Reason="", readiness=false. Elapsed: 8.547112461s
May 15 04:27:16.799: INFO: Pod "azurefile-volume-tester-46v5m": Phase="Pending", Reason="", readiness=false. Elapsed: 10.655735824s
May 15 04:27:18.914: INFO: Pod "azurefile-volume-tester-46v5m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.770829571s
STEP: Saw pod success
May 15 04:27:18.914: INFO: Pod "azurefile-volume-tester-46v5m" satisfied condition "Succeeded or Failed"
May 15 04:27:18.914: INFO: deleting Pod "azurefile-1598"/"azurefile-volume-tester-46v5m"
May 15 04:27:19.027: INFO: Pod azurefile-volume-tester-46v5m has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-46v5m in namespace azurefile-1598
May 15 04:27:19.147: INFO: deleting PVC "azurefile-1598"/"pvc-bh9z4"
May 15 04:27:19.147: INFO: Deleting PersistentVolumeClaim "pvc-bh9z4"
... skipping 36 lines ...
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pod has 'FailedMount' event
May 15 04:27:26.576: INFO: deleting Pod "azurefile-3410"/"azurefile-volume-tester-l26mx"
May 15 04:27:26.686: INFO: Error getting logs for pod azurefile-volume-tester-l26mx: the server rejected our request for an unknown reason (get pods azurefile-volume-tester-l26mx)
STEP: Deleting pod azurefile-volume-tester-l26mx in namespace azurefile-3410
May 15 04:27:26.799: INFO: deleting PVC "azurefile-3410"/"pvc-bstmx"
May 15 04:27:26.799: INFO: Deleting PersistentVolumeClaim "pvc-bstmx"
STEP: waiting for claim's PV "pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf" to be deleted
May 15 04:27:27.128: INFO: Waiting up to 10m0s for PersistentVolume pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf to get deleted
May 15 04:27:27.236: INFO: PersistentVolume pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf found and phase=Bound (108.04752ms)
... skipping 57 lines ...
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pod has 'FailedMount' event
May 15 04:29:37.262: INFO: deleting Pod "azurefile-8582"/"azurefile-volume-tester-9mth5"
May 15 04:29:37.386: INFO: Error getting logs for pod azurefile-volume-tester-9mth5: the server rejected our request for an unknown reason (get pods azurefile-volume-tester-9mth5)
STEP: Deleting pod azurefile-volume-tester-9mth5 in namespace azurefile-8582
May 15 04:29:37.496: INFO: deleting PVC "azurefile-8582"/"pvc-78zsr"
May 15 04:29:37.496: INFO: Deleting PersistentVolumeClaim "pvc-78zsr"
STEP: waiting for claim's PV "pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4" to be deleted
May 15 04:29:37.824: INFO: Waiting up to 10m0s for PersistentVolume pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4 to get deleted
May 15 04:29:37.932: INFO: PersistentVolume pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4 found and phase=Bound (108.249094ms)
... skipping 138 lines ...
May 15 04:33:06.922: INFO: PersistentVolumeClaim pvc-b45ch found but phase is Pending instead of Bound.
May 15 04:33:09.031: INFO: PersistentVolumeClaim pvc-b45ch found and phase=Bound (2.217848134s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pod's command exits with an error
May 15 04:33:09.358: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-h5mwb" in namespace "azurefile-3086" to be "Error status code"
May 15 04:33:09.466: INFO: Pod "azurefile-volume-tester-h5mwb": Phase="Pending", Reason="", readiness=false. Elapsed: 108.30749ms
May 15 04:33:11.577: INFO: Pod "azurefile-volume-tester-h5mwb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218513146s
May 15 04:33:13.692: INFO: Pod "azurefile-volume-tester-h5mwb": Phase="Running", Reason="", readiness=true. Elapsed: 4.333478198s
May 15 04:33:15.809: INFO: Pod "azurefile-volume-tester-h5mwb": Phase="Failed", Reason="", readiness=false. Elapsed: 6.450486802s
STEP: Saw pod failure
May 15 04:33:15.809: INFO: Pod "azurefile-volume-tester-h5mwb" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
May 15 04:33:15.922: INFO: deleting Pod "azurefile-3086"/"azurefile-volume-tester-h5mwb"
May 15 04:33:16.033: INFO: Pod azurefile-volume-tester-h5mwb has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azurefile-volume-tester-h5mwb in namespace azurefile-3086
May 15 04:33:16.153: INFO: deleting PVC "azurefile-3086"/"pvc-b45ch"
... skipping 196 lines ...
May 15 04:33:45.304: INFO: PersistentVolumeClaim pvc-j7f6h found but phase is Pending instead of Bound.
May 15 04:33:47.413: INFO: PersistentVolumeClaim pvc-j7f6h found and phase=Bound (2.217798595s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
May 15 04:33:47.741: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-d6hsc" in namespace "azurefile-9183" to be "Succeeded or Failed"
May 15 04:33:47.855: INFO: Pod "azurefile-volume-tester-d6hsc": Phase="Pending", Reason="", readiness=false. Elapsed: 113.945506ms
May 15 04:33:49.970: INFO: Pod "azurefile-volume-tester-d6hsc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.228844527s
May 15 04:33:52.085: INFO: Pod "azurefile-volume-tester-d6hsc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.344159817s
STEP: Saw pod success
May 15 04:33:52.086: INFO: Pod "azurefile-volume-tester-d6hsc" satisfied condition "Succeeded or Failed"
May 15 04:33:52.086: INFO: deleting Pod "azurefile-9183"/"azurefile-volume-tester-d6hsc"
May 15 04:33:52.197: INFO: Pod azurefile-volume-tester-d6hsc has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-d6hsc in namespace azurefile-9183
May 15 04:33:52.319: INFO: deleting PVC "azurefile-9183"/"pvc-j7f6h"
May 15 04:33:52.319: INFO: Deleting PersistentVolumeClaim "pvc-j7f6h"
... skipping 74 lines ...
May 15 04:33:58.379: INFO: PersistentVolumeClaim pvc-lftdr found but phase is Pending instead of Bound.
May 15 04:34:00.489: INFO: PersistentVolumeClaim pvc-lftdr found and phase=Bound (2.217876226s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
May 15 04:34:00.815: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-6t99j" in namespace "azurefile-7578" to be "Succeeded or Failed"
May 15 04:34:00.923: INFO: Pod "azurefile-volume-tester-6t99j": Phase="Pending", Reason="", readiness=false. Elapsed: 108.101118ms
May 15 04:34:03.038: INFO: Pod "azurefile-volume-tester-6t99j": Phase="Running", Reason="", readiness=true. Elapsed: 2.223217044s
May 15 04:34:05.153: INFO: Pod "azurefile-volume-tester-6t99j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.338260128s
STEP: Saw pod success
May 15 04:34:05.153: INFO: Pod "azurefile-volume-tester-6t99j" satisfied condition "Succeeded or Failed"
STEP: creating volume snapshot class
STEP: setting up the VolumeSnapshotClass
STEP: creating a VolumeSnapshotClass
STEP: taking snapshots
STEP: creating a VolumeSnapshot for pvc-lftdr
STEP: waiting for VolumeSnapshot to be ready to use - volume-snapshot-zcxw2
... skipping 32 lines ...
checking whether any driver pods restarted ...
======================================================================================
2022/05/15 04:34:23 Check succeeded
May 15 04:34:23.287: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
2022/05/15 04:34:23 run script: test/utils/get_storage_account_secret_name.sh
2022/05/15 04:34:23 got output: azure-storage-account-fc0766d5d43dd4201a264f7-secret
, error: <nil>
2022/05/15 04:34:23 got storage account secret name: azure-storage-account-fc0766d5d43dd4201a264f7-secret
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: waiting for PVC to be in phase "Bound"
May 15 04:34:23.960: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-bsbq6] to have phase Bound
May 15 04:34:24.074: INFO: PersistentVolumeClaim pvc-bsbq6 found but phase is Pending instead of Bound.
May 15 04:34:26.183: INFO: PersistentVolumeClaim pvc-bsbq6 found and phase=Bound (2.22303669s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
May 15 04:34:26.510: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-k99g5" in namespace "azurefile-1968" to be "Succeeded or Failed"
May 15 04:34:26.619: INFO: Pod "azurefile-volume-tester-k99g5": Phase="Pending", Reason="", readiness=false. Elapsed: 108.530524ms
May 15 04:34:28.734: INFO: Pod "azurefile-volume-tester-k99g5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.223972381s
STEP: Saw pod success
May 15 04:34:28.734: INFO: Pod "azurefile-volume-tester-k99g5" satisfied condition "Succeeded or Failed"
May 15 04:34:28.734: INFO: deleting Pod "azurefile-1968"/"azurefile-volume-tester-k99g5"
May 15 04:34:28.846: INFO: Pod azurefile-volume-tester-k99g5 has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-k99g5 in namespace azurefile-1968
May 15 04:34:28.968: INFO: deleting PVC "azurefile-1968"/"pvc-bsbq6"
May 15 04:34:28.968: INFO: Deleting PersistentVolumeClaim "pvc-bsbq6"
... skipping 43 lines ...
May 15 04:34:50.700: INFO: PersistentVolumeClaim pvc-nkpqj found but phase is Pending instead of Bound.
May 15 04:34:52.809: INFO: PersistentVolumeClaim pvc-nkpqj found and phase=Bound (21.198518273s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
May 15 04:34:53.146: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-dwrqv" in namespace "azurefile-4657" to be "Succeeded or Failed"
May 15 04:34:53.254: INFO: Pod "azurefile-volume-tester-dwrqv": Phase="Pending", Reason="", readiness=false. Elapsed: 108.240645ms
May 15 04:34:55.370: INFO: Pod "azurefile-volume-tester-dwrqv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224310284s
May 15 04:34:57.486: INFO: Pod "azurefile-volume-tester-dwrqv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.340156458s
STEP: Saw pod success
May 15 04:34:57.486: INFO: Pod "azurefile-volume-tester-dwrqv" satisfied condition "Succeeded or Failed"
May 15 04:34:57.486: INFO: deleting Pod "azurefile-4657"/"azurefile-volume-tester-dwrqv"
May 15 04:34:57.603: INFO: Pod azurefile-volume-tester-dwrqv has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-dwrqv in namespace azurefile-4657
May 15 04:34:57.726: INFO: deleting PVC "azurefile-4657"/"pvc-nkpqj"
May 15 04:34:57.726: INFO: Deleting PersistentVolumeClaim "pvc-nkpqj"
... skipping 69 lines ...
checking whether any driver pods restarted ...
======================================================================================
2022/05/15 04:36:15 Check succeeded
May 15 04:36:15.551: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
2022/05/15 04:36:15 run script: test/utils/get_storage_account_secret_name.sh
2022/05/15 04:36:15 got output: azure-storage-account-fc0766d5d43dd4201a264f7-secret
, error: <nil>
2022/05/15 04:36:15 got storage account secret name: azure-storage-account-fc0766d5d43dd4201a264f7-secret
STEP: Successfully provisioned AzureFile volume: "capz-gxfhvh#fc0766d5d43dd4201a264f7#csi-inline-smb-volume##csi-inline-smb-volume#azurefile-4162"
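The volume handle logged above is a '#'-separated string; from the log it evidently begins with the resource group, storage account, and file share name, while the meaning of the trailing fields is a driver implementation detail not shown here. A sketch that parses only the evident fields and keeps the rest opaque (function name and field names are assumptions):

```go
package main

import (
	"fmt"
	"strings"
)

// parseVolumeID splits an azurefile volume handle such as
// "capz-gxfhvh#fc0766d5d43dd4201a264f7#csi-inline-smb-volume##csi-inline-smb-volume#azurefile-4162".
// Only the first three fields (resource group, storage account, file share)
// are named; trailing fields are returned verbatim because their meaning is
// not evident from the log. Illustrative sketch only.
func parseVolumeID(id string) (rg, account, share string, rest []string, err error) {
	fields := strings.Split(id, "#")
	if len(fields) < 3 {
		return "", "", "", nil, fmt.Errorf("invalid volume id: %q", id)
	}
	return fields[0], fields[1], fields[2], fields[3:], nil
}

func main() {
	rg, account, share, rest, err := parseVolumeID(
		"capz-gxfhvh#fc0766d5d43dd4201a264f7#csi-inline-smb-volume##csi-inline-smb-volume#azurefile-4162")
	fmt.Println(rg, account, share, len(rest), err)
}
```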

STEP: deploying the pod
STEP: checking that the pod's command exits with no error
May 15 04:36:17.691: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-88lw9" in namespace "azurefile-4162" to be "Succeeded or Failed"
May 15 04:36:17.799: INFO: Pod "azurefile-volume-tester-88lw9": Phase="Pending", Reason="", readiness=false. Elapsed: 108.432876ms
May 15 04:36:19.915: INFO: Pod "azurefile-volume-tester-88lw9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.224246114s
STEP: Saw pod success
May 15 04:36:19.915: INFO: Pod "azurefile-volume-tester-88lw9" satisfied condition "Succeeded or Failed"
May 15 04:36:19.915: INFO: deleting Pod "azurefile-4162"/"azurefile-volume-tester-88lw9"
May 15 04:36:20.032: INFO: Pod azurefile-volume-tester-88lw9 has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-88lw9 in namespace azurefile-4162
May 15 04:36:20.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azurefile-4162" for this suite.
... skipping 42 lines ...
checking whether any driver pods restarted ...
======================================================================================
2022/05/15 04:36:23 Check succeeded
May 15 04:36:23.686: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: creating secret smbcreds in namespace azurefile-5320
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
May 15 04:36:23.908: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-dg22z" in namespace "azurefile-5320" to be "Succeeded or Failed"
May 15 04:36:24.017: INFO: Pod "azurefile-volume-tester-dg22z": Phase="Pending", Reason="", readiness=false. Elapsed: 108.356438ms
May 15 04:36:26.131: INFO: Pod "azurefile-volume-tester-dg22z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222807798s
May 15 04:36:28.246: INFO: Pod "azurefile-volume-tester-dg22z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.337942201s
STEP: Saw pod success
May 15 04:36:28.246: INFO: Pod "azurefile-volume-tester-dg22z" satisfied condition "Succeeded or Failed"
May 15 04:36:28.247: INFO: deleting Pod "azurefile-5320"/"azurefile-volume-tester-dg22z"
May 15 04:36:28.371: INFO: Pod azurefile-volume-tester-dg22z has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-dg22z in namespace azurefile-5320
May 15 04:36:28.494: INFO: deleting Secret smbcreds
May 15 04:36:28.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 52 lines ...
May 15 04:37:23.403: INFO: PersistentVolumeClaim pvc-v2dwr found but phase is Pending instead of Bound.
May 15 04:37:25.530: INFO: PersistentVolumeClaim pvc-v2dwr found and phase=Bound (54.964621345s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
May 15 04:37:25.856: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-w6q8d" in namespace "azurefile-9103" to be "Succeeded or Failed"
May 15 04:37:25.964: INFO: Pod "azurefile-volume-tester-w6q8d": Phase="Pending", Reason="", readiness=false. Elapsed: 108.389658ms
May 15 04:37:28.079: INFO: Pod "azurefile-volume-tester-w6q8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222855983s
May 15 04:37:30.193: INFO: Pod "azurefile-volume-tester-w6q8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.337664321s
STEP: Saw pod success
May 15 04:37:30.193: INFO: Pod "azurefile-volume-tester-w6q8d" satisfied condition "Succeeded or Failed"
May 15 04:37:30.193: INFO: deleting Pod "azurefile-9103"/"azurefile-volume-tester-w6q8d"
May 15 04:37:30.304: INFO: Pod azurefile-volume-tester-w6q8d has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-w6q8d in namespace azurefile-9103
May 15 04:37:30.423: INFO: deleting PVC "azurefile-9103"/"pvc-v2dwr"
May 15 04:37:30.423: INFO: Deleting PersistentVolumeClaim "pvc-v2dwr"
... skipping 78 lines ...
May 15 04:39:06.066: INFO: PersistentVolumeClaim pvc-j4qxz found but phase is Pending instead of Bound.
May 15 04:39:08.175: INFO: PersistentVolumeClaim pvc-j4qxz found and phase=Bound (1m35.125308696s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
May 15 04:39:08.512: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-kbfzk" in namespace "azurefile-8652" to be "Succeeded or Failed"
May 15 04:39:08.620: INFO: Pod "azurefile-volume-tester-kbfzk": Phase="Pending", Reason="", readiness=false. Elapsed: 108.325243ms
May 15 04:39:10.730: INFO: Pod "azurefile-volume-tester-kbfzk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218175387s
May 15 04:39:12.840: INFO: Pod "azurefile-volume-tester-kbfzk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327644247s
May 15 04:39:14.949: INFO: Pod "azurefile-volume-tester-kbfzk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.436836039s
May 15 04:39:17.059: INFO: Pod "azurefile-volume-tester-kbfzk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.546664898s
May 15 04:39:19.174: INFO: Pod "azurefile-volume-tester-kbfzk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.661840252s
STEP: Saw pod success
May 15 04:39:19.174: INFO: Pod "azurefile-volume-tester-kbfzk" satisfied condition "Succeeded or Failed"
May 15 04:39:19.174: INFO: deleting Pod "azurefile-8652"/"azurefile-volume-tester-kbfzk"
May 15 04:39:19.293: INFO: Pod azurefile-volume-tester-kbfzk has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-kbfzk in namespace azurefile-8652
May 15 04:39:19.418: INFO: deleting PVC "azurefile-8652"/"pvc-j4qxz"
May 15 04:39:19.418: INFO: Deleting PersistentVolumeClaim "pvc-j4qxz"
... skipping 89 lines ...
May 15 04:39:35.454: INFO: PersistentVolumeClaim pvc-c2fsj found but phase is Pending instead of Bound.
May 15 04:39:37.563: INFO: PersistentVolumeClaim pvc-c2fsj found and phase=Bound (2.218106497s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
May 15 04:39:37.891: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-dqhbs" in namespace "azurefile-8470" to be "Succeeded or Failed"
May 15 04:39:38.005: INFO: Pod "azurefile-volume-tester-dqhbs": Phase="Pending", Reason="", readiness=false. Elapsed: 114.060164ms
May 15 04:39:40.119: INFO: Pod "azurefile-volume-tester-dqhbs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.228025664s
May 15 04:39:42.235: INFO: Pod "azurefile-volume-tester-dqhbs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.343882686s
STEP: Saw pod success
May 15 04:39:42.235: INFO: Pod "azurefile-volume-tester-dqhbs" satisfied condition "Succeeded or Failed"
May 15 04:39:42.235: INFO: deleting Pod "azurefile-8470"/"azurefile-volume-tester-dqhbs"
May 15 04:39:42.346: INFO: Pod azurefile-volume-tester-dqhbs has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-dqhbs in namespace azurefile-8470
May 15 04:39:42.468: INFO: deleting PVC "azurefile-8470"/"pvc-c2fsj"
May 15 04:39:42.468: INFO: Deleting PersistentVolumeClaim "pvc-c2fsj"
... skipping 156 lines ...
Go Version: go1.18.1
Platform: linux/amd64

Streaming logs below:
I0515 04:18:38.902193       1 azurefile.go:274] driver userAgent: file.csi.azure.com/e2e-f3af306c6b7eaacabc95cb898421921e264dc1de gc/go1.18.1 (amd64-linux) e2e-test
I0515 04:18:38.902622       1 azure.go:71] reading cloud config from secret kube-system/azure-cloud-provider
W0515 04:18:38.919074       1 azure.go:78] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found
I0515 04:18:38.919104       1 azure.go:83] could not read cloud config from secret kube-system/azure-cloud-provider
I0515 04:18:38.919231       1 azure.go:93] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json
I0515 04:18:38.919339       1 azure.go:101] read cloud config from file: /etc/kubernetes/azure.json successfully
I0515 04:18:38.919895       1 azure_auth.go:245] Using AzurePublicCloud environment
I0515 04:18:38.919998       1 azure_auth.go:130] azure: using client_id+client_secret to retrieve access token
I0515 04:18:38.920115       1 azure_diskclient.go:67] Azure DisksClient using API version: 2021-04-01
... skipping 72 lines ...
Go Version: go1.18.1
Platform: linux/amd64

Streaming logs below:
I0515 04:18:38.544862       1 azurefile.go:274] driver userAgent: file.csi.azure.com/e2e-f3af306c6b7eaacabc95cb898421921e264dc1de gc/go1.18.1 (amd64-linux) e2e-test
I0515 04:18:38.545258       1 azure.go:71] reading cloud config from secret kube-system/azure-cloud-provider
W0515 04:18:38.594515       1 azure.go:78] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found
I0515 04:18:38.594536       1 azure.go:83] could not read cloud config from secret kube-system/azure-cloud-provider
I0515 04:18:38.594548       1 azure.go:93] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json
I0515 04:18:38.594592       1 azure.go:101] read cloud config from file: /etc/kubernetes/azure.json successfully
I0515 04:18:38.595106       1 azure_auth.go:245] Using AzurePublicCloud environment
I0515 04:18:38.595161       1 azure_auth.go:130] azure: using client_id+client_secret to retrieve access token
I0515 04:18:38.595222       1 azure_diskclient.go:67] Azure DisksClient using API version: 2021-04-01
... skipping 514 lines ...
I0515 04:37:30.652103       1 azurefile.go:794] remove tag(skip-matching) on account(f0fbca06a82134096a7597a) resourceGroup(capz-gxfhvh)
I0515 04:37:30.718061       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=0.21602152 request="azurefile_csi_driver_controller_delete_volume" resource_group="capz-gxfhvh" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="file.csi.azure.com" volumeid="capz-gxfhvh#f0fbca06a82134096a7597a#pvcn-bb785061-8062-4eb8-8335-c701b4e6d575###azurefile-9103" result_code="succeeded"
I0515 04:37:30.718165       1 utils.go:83] GRPC response: {}
I0515 04:37:33.008459       1 utils.go:76] GRPC call: /csi.v1.Controller/CreateVolume
I0515 04:37:33.008485       1 utils.go:77] GRPC request: {"capacity_range":{"required_bytes":107374182400},"name":"pvc-4c8fd4b2-cf06-4d42-a732-8a3b51ae3cec","parameters":{"csi.storage.k8s.io/pv/name":"pvc-4c8fd4b2-cf06-4d42-a732-8a3b51ae3cec","csi.storage.k8s.io/pvc/name":"pvc-j4qxz","csi.storage.k8s.io/pvc/namespace":"azurefile-8652","fsGroupChangePolicy":"OnRootMismatch","mountPermissions":"0","networkEndpointType":"privateEndpoint","protocol":"nfs","rootSquashType":"AllSquash","skuName":"Premium_LRS"},"volume_capabilities":[{"AccessType":{"Mount":{"mount_flags":["nconnect=8","rsize=1048576","wsize=1048576"]}},"access_mode":{"mode":7}}]}
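This CreateVolume request carries the fsGroupChangePolicy parameter alongside protocol=nfs, which is the feature this PR adds. A minimal sketch of how a driver might validate such a storage-class parameter; the accepted values, function name, and defaulting are assumptions for illustration, not the driver's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// validateFSGroupChangePolicy normalizes the fsGroupChangePolicy parameter
// seen in the CreateVolume request above. Assumed accepted values:
// None, Always, OnRootMismatch; an empty value falls back to an assumed
// default. Illustrative sketch only.
func validateFSGroupChangePolicy(policy string) (string, error) {
	if policy == "" {
		return "None", nil // assumed default when the parameter is omitted
	}
	for _, allowed := range []string{"None", "Always", "OnRootMismatch"} {
		if strings.EqualFold(policy, allowed) {
			return allowed, nil
		}
	}
	return "", fmt.Errorf("fsGroupChangePolicy(%s) is not supported, supported values: [None Always OnRootMismatch]", policy)
}

func main() {
	p, err := validateFSGroupChangePolicy("OnRootMismatch")
	fmt.Println(p, err)
}
```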
I0515 04:37:33.125989       1 azure_storageaccount.go:360] Creating private dns zone(privatelink.file.core.windows.net) in resourceGroup (capz-gxfhvh)
I0515 04:38:04.302262       1 azure_privatednsclient.go:56] Received error while waiting for completion for privatedns.put.request, resourceGroup: capz-gxfhvh, error: Code="PreconditionFailed" Message="The Zone privatelink.file.core.windows.net exists already and hence cannot be created again."
I0515 04:38:04.302314       1 azure_storageaccount.go:365] private dns zone(privatelink.file.core.windows.net) in resourceGroup (capz-gxfhvh) already exists
I0515 04:38:04.302324       1 azure_storageaccount.go:374] Creating virtual link for vnet(fc080560101684098a88170-vnetlink) and DNS Zone(privatelink.file.core.windows.net) in resourceGroup(capz-gxfhvh)
I0515 04:38:05.454933       1 azure_storageaccount.go:252] azure - no matching account found, begin to create a new account fc080560101684098a88170 in resource group capz-gxfhvh, location: westeurope, accountType: Premium_LRS, accountKind: FileStorage, tags: map[k8s-azure-created-by:azure]
I0515 04:38:05.454970       1 azure_storageaccount.go:273] set AllowBlobPublicAccess(false) for storage account(fc080560101684098a88170)
I0515 04:38:25.013664       1 azure_storageaccount.go:330] Creating private endpoint(fc080560101684098a88170-pvtendpoint) for account (fc080560101684098a88170)
I0515 04:39:05.965287       1 azure_storageaccount.go:387] Creating private DNS zone group(fc080560101684098a88170-dnszonegroup) with privateEndpoint(fc080560101684098a88170-pvtendpoint), vNetName(capz-gxfhvh-vnet), resourceGroup(capz-gxfhvh)
... skipping 143 lines ...
Go Version: go1.18.1
Platform: linux/amd64

Streaming logs below:
I0515 04:18:34.075818       1 azurefile.go:274] driver userAgent: file.csi.azure.com/e2e-f3af306c6b7eaacabc95cb898421921e264dc1de gc/go1.18.1 (amd64-linux) e2e-test
I0515 04:18:34.076253       1 azure.go:71] reading cloud config from secret kube-system/azure-cloud-provider
W0515 04:18:34.089243       1 azure.go:78] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found
I0515 04:18:34.089262       1 azure.go:83] could not read cloud config from secret kube-system/azure-cloud-provider
I0515 04:18:34.089272       1 azure.go:93] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json
I0515 04:18:34.089300       1 azure.go:101] read cloud config from file: /etc/kubernetes/azure.json successfully
I0515 04:18:34.089852       1 azure_auth.go:245] Using AzurePublicCloud environment
I0515 04:18:34.089895       1 azure_auth.go:130] azure: using client_id+client_secret to retrieve access token
I0515 04:18:34.089964       1 azure_diskclient.go:67] Azure DisksClient using API version: 2021-04-01
... skipping 27 lines ...
I0515 04:18:35.662028       1 utils.go:76] GRPC call: /csi.v1.Node/NodeGetInfo
I0515 04:18:35.662049       1 utils.go:77] GRPC request: {}
I0515 04:18:35.662149       1 utils.go:83] GRPC response: {"node_id":"capz-gxfhvh-md-0-2wtxt"}
I0515 04:36:24.080529       1 utils.go:76] GRPC call: /csi.v1.Node/NodePublishVolume
I0515 04:36:24.080557       1 utils.go:77] GRPC request: {"target_path":"/var/lib/kubelet/pods/cc04dc75-3d17-4727-987f-90970d895fe4/volumes/kubernetes.io~csi/test-volume-1/mount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/ephemeral":"true","csi.storage.k8s.io/pod.name":"azurefile-volume-tester-dg22z","csi.storage.k8s.io/pod.namespace":"azurefile-5320","csi.storage.k8s.io/pod.uid":"cc04dc75-3d17-4727-987f-90970d895fe4","csi.storage.k8s.io/serviceAccount.name":"default","mountOptions":"cache=singleclient","secretName":"smbcreds","server":"smb-server.default.svc.cluster.local","shareName":"share"},"volume_id":"csi-d0cdffb48a8eb55be8b5a2693acbb64ab42b7bdb6e9e202993b0d493651fa0cc"}
I0515 04:36:24.080796       1 nodeserver.go:68] NodePublishVolume: ephemeral volume(csi-d0cdffb48a8eb55be8b5a2693acbb64ab42b7bdb6e9e202993b0d493651fa0cc) mount on /var/lib/kubelet/pods/cc04dc75-3d17-4727-987f-90970d895fe4/volumes/kubernetes.io~csi/test-volume-1/mount, VolumeContext: map[csi.storage.k8s.io/ephemeral:true csi.storage.k8s.io/pod.name:azurefile-volume-tester-dg22z csi.storage.k8s.io/pod.namespace:azurefile-5320 csi.storage.k8s.io/pod.uid:cc04dc75-3d17-4727-987f-90970d895fe4 csi.storage.k8s.io/serviceAccount.name:default getaccountkeyfromsecret:true mountOptions:cache=singleclient secretName:smbcreds secretnamespace:azurefile-5320 server:smb-server.default.svc.cluster.local shareName:share storageaccount:]
W0515 04:36:24.080826       1 azurefile.go:564] parsing volumeID(csi-d0cdffb48a8eb55be8b5a2693acbb64ab42b7bdb6e9e202993b0d493651fa0cc) return with error: error parsing volume id: "csi-d0cdffb48a8eb55be8b5a2693acbb64ab42b7bdb6e9e202993b0d493651fa0cc", should at least contain two #
I0515 04:36:24.092415       1 nodeserver.go:289] cifsMountPath(/var/lib/kubelet/pods/cc04dc75-3d17-4727-987f-90970d895fe4/volumes/kubernetes.io~csi/test-volume-1/mount) fstype() volumeID(csi-d0cdffb48a8eb55be8b5a2693acbb64ab42b7bdb6e9e202993b0d493651fa0cc) context(map[csi.storage.k8s.io/ephemeral:true csi.storage.k8s.io/pod.name:azurefile-volume-tester-dg22z csi.storage.k8s.io/pod.namespace:azurefile-5320 csi.storage.k8s.io/pod.uid:cc04dc75-3d17-4727-987f-90970d895fe4 csi.storage.k8s.io/serviceAccount.name:default getaccountkeyfromsecret:true mountOptions:cache=singleclient secretName:smbcreds secretnamespace:azurefile-5320 server:smb-server.default.svc.cluster.local shareName:share storageaccount:]) mountflags([]) mountOptions([actimeo=30 cache=singleclient dir_mode=0777 file_mode=0777 mfsymlinks]) volumeMountGroup()
I0515 04:36:24.092754       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t cifs -o actimeo=30,cache=singleclient,dir_mode=0777,file_mode=0777,mfsymlinks,<masked> //smb-server.default.svc.cluster.local/share /var/lib/kubelet/pods/cc04dc75-3d17-4727-987f-90970d895fe4/volumes/kubernetes.io~csi/test-volume-1/mount)
I0515 04:36:24.273934       1 nodeserver.go:319] volume(csi-d0cdffb48a8eb55be8b5a2693acbb64ab42b7bdb6e9e202993b0d493651fa0cc) mount //smb-server.default.svc.cluster.local/share on /var/lib/kubelet/pods/cc04dc75-3d17-4727-987f-90970d895fe4/volumes/kubernetes.io~csi/test-volume-1/mount succeeded
I0515 04:36:24.273973       1 utils.go:83] GRPC response: {}
I0515 04:36:28.809033       1 utils.go:76] GRPC call: /csi.v1.Node/NodeUnpublishVolume
I0515 04:36:28.809057       1 utils.go:77] GRPC request: {"target_path":"/var/lib/kubelet/pods/cc04dc75-3d17-4727-987f-90970d895fe4/volumes/kubernetes.io~csi/test-volume-1/mount","volume_id":"csi-d0cdffb48a8eb55be8b5a2693acbb64ab42b7bdb6e9e202993b0d493651fa0cc"}
... skipping 41 lines ...
Go Version: go1.18.1
Platform: linux/amd64

Streaming logs below:
I0515 04:18:30.658299       1 azurefile.go:274] driver userAgent: file.csi.azure.com/e2e-f3af306c6b7eaacabc95cb898421921e264dc1de gc/go1.18.1 (amd64-linux) e2e-test
I0515 04:18:30.658669       1 azure.go:71] reading cloud config from secret kube-system/azure-cloud-provider
W0515 04:18:30.667847       1 azure.go:78] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found
I0515 04:18:30.667876       1 azure.go:83] could not read cloud config from secret kube-system/azure-cloud-provider
I0515 04:18:30.667886       1 azure.go:93] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json
I0515 04:18:30.667911       1 azure.go:101] read cloud config from file: /etc/kubernetes/azure.json successfully
I0515 04:18:30.668317       1 azure_auth.go:245] Using AzurePublicCloud environment
I0515 04:18:30.668354       1 azure_auth.go:130] azure: using client_id+client_secret to retrieve access token
I0515 04:18:30.668409       1 azure_diskclient.go:67] Azure DisksClient using API version: 2021-04-01
... skipping 40 lines ...
Go Version: go1.18.1
Platform: linux/amd64

Streaming logs below:
I0515 04:18:34.514326       1 azurefile.go:274] driver userAgent: file.csi.azure.com/e2e-f3af306c6b7eaacabc95cb898421921e264dc1de gc/go1.18.1 (amd64-linux) e2e-test
I0515 04:18:34.514689       1 azure.go:71] reading cloud config from secret kube-system/azure-cloud-provider
W0515 04:18:34.527170       1 azure.go:78] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found
I0515 04:18:34.527190       1 azure.go:83] could not read cloud config from secret kube-system/azure-cloud-provider
I0515 04:18:34.527200       1 azure.go:93] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json
I0515 04:18:34.527228       1 azure.go:101] read cloud config from file: /etc/kubernetes/azure.json successfully
I0515 04:18:34.527708       1 azure_auth.go:245] Using AzurePublicCloud environment
I0515 04:18:34.527751       1 azure_auth.go:130] azure: using client_id+client_secret to retrieve access token
I0515 04:18:34.527818       1 azure_diskclient.go:67] Azure DisksClient using API version: 2021-04-01
... skipping 554 lines ...
I0515 04:27:24.704535       1 mount_linux.go:487] Attempting to determine if disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd])
I0515 04:27:24.804754       1 mount_linux.go:490] Output: ""
I0515 04:27:24.804784       1 mount_linux.go:449] Disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd" appears to be unformatted, attempting to format as type: "ext4" with options: [-F -m0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd]
I0515 04:27:25.518727       1 mount_linux.go:459] Disk successfully formatted (mkfs): ext4 - /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount
I0515 04:27:25.518764       1 mount_linux.go:477] Attempting to mount disk /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount
I0515 04:27:25.518835       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount)
E0515 04:27:25.559574       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error.

E0515 04:27:25.559632       1 utils.go:81] GRPC error: rpc error: code = Internal desc = could not format /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd
I0515 04:27:26.106401       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0515 04:27:26.106427       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf","csi.storage.k8s.io/pvc/name":"pvc-bstmx","csi.storage.k8s.io/pvc/namespace":"azurefile-3410","diskname":"pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd","fsType":"ext4","secretnamespace":"azurefile-3410","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652588318618-8081-file.csi.azure.com"},"volume_id":"capz-gxfhvh#f43e3e2af034745578d174e#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd##azurefile-3410"}
I0515 04:27:26.106645       1 nodeserver.go:289] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount) fstype(ext4) volumeID(capz-gxfhvh#f43e3e2af034745578d174e#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd##azurefile-3410) context(map[csi.storage.k8s.io/pv/name:pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf csi.storage.k8s.io/pvc/name:pvc-bstmx csi.storage.k8s.io/pvc/namespace:azurefile-3410 diskname:pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd fsType:ext4 secretnamespace:azurefile-3410 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652588318618-8081-file.csi.azure.com]) mountflags([invalid mount options]) mountOptions([dir_mode=0777,file_mode=0777,cache=strict,actimeo=30 nostrictsync file_mode=0777 actimeo=30 mfsymlinks]) volumeMountGroup()
I0515 04:27:26.117786       1 nodeserver.go:512] already mounted to target /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount
I0515 04:27:26.117829       1 nodeserver.go:296] NodeStageVolume: volume capz-gxfhvh#f43e3e2af034745578d174e#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd##azurefile-3410 is already mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount
I0515 04:27:26.118188       1 nodeserver.go:339] NodeStageVolume: volume capz-gxfhvh#f43e3e2af034745578d174e#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd##azurefile-3410 formatting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd with mount options([barrier=1 errors=remount-ro invalid loop mount noatime options])
I0515 04:27:26.118210       1 mount_linux.go:487] Attempting to determine if disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd])
I0515 04:27:26.216252       1 mount_linux.go:490] Output: "DEVNAME=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd\nTYPE=ext4\n"
I0515 04:27:26.216287       1 mount_linux.go:376] Checking for issues with fsck on disk: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd
I0515 04:27:26.352843       1 mount_linux.go:477] Attempting to mount disk /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount
I0515 04:27:26.352897       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount)
E0515 04:27:26.384199       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error.

E0515 04:27:26.384248       1 utils.go:81] GRPC error: rpc error: code = Internal desc = could not format /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd
I0515 04:27:27.416802       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0515 04:27:27.416834       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf","csi.storage.k8s.io/pvc/name":"pvc-bstmx","csi.storage.k8s.io/pvc/namespace":"azurefile-3410","diskname":"pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd","fsType":"ext4","secretnamespace":"azurefile-3410","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652588318618-8081-file.csi.azure.com"},"volume_id":"capz-gxfhvh#f43e3e2af034745578d174e#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd##azurefile-3410"}
I0515 04:27:27.417041       1 nodeserver.go:289] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount) fstype(ext4) volumeID(capz-gxfhvh#f43e3e2af034745578d174e#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd##azurefile-3410) context(map[csi.storage.k8s.io/pv/name:pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf csi.storage.k8s.io/pvc/name:pvc-bstmx csi.storage.k8s.io/pvc/namespace:azurefile-3410 diskname:pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd fsType:ext4 secretnamespace:azurefile-3410 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652588318618-8081-file.csi.azure.com]) mountflags([invalid mount options]) mountOptions([dir_mode=0777,file_mode=0777,cache=strict,actimeo=30 nostrictsync file_mode=0777 actimeo=30 mfsymlinks]) volumeMountGroup()
I0515 04:27:27.427903       1 nodeserver.go:512] already mounted to target /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount
I0515 04:27:27.427942       1 nodeserver.go:296] NodeStageVolume: volume capz-gxfhvh#f43e3e2af034745578d174e#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd##azurefile-3410 is already mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount
I0515 04:27:27.428356       1 nodeserver.go:339] NodeStageVolume: volume capz-gxfhvh#f43e3e2af034745578d174e#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd##azurefile-3410 formatting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd with mount options([barrier=1 errors=remount-ro invalid loop mount noatime options])
I0515 04:27:27.428377       1 mount_linux.go:487] Attempting to determine if disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd])
I0515 04:27:27.527424       1 mount_linux.go:490] Output: "DEVNAME=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd\nTYPE=ext4\n"
I0515 04:27:27.527462       1 mount_linux.go:376] Checking for issues with fsck on disk: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd
I0515 04:27:27.683758       1 mount_linux.go:477] Attempting to mount disk /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount
I0515 04:27:27.683801       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount)
E0515 04:27:27.715583       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error.

E0515 04:27:27.715631       1 utils.go:81] GRPC error: rpc error: code = Internal desc = could not format /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd
I0515 04:27:29.737784       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0515 04:27:29.737813       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf","csi.storage.k8s.io/pvc/name":"pvc-bstmx","csi.storage.k8s.io/pvc/namespace":"azurefile-3410","diskname":"pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd","fsType":"ext4","secretnamespace":"azurefile-3410","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652588318618-8081-file.csi.azure.com"},"volume_id":"capz-gxfhvh#f43e3e2af034745578d174e#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd##azurefile-3410"}
I0515 04:27:29.738022       1 nodeserver.go:289] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount) fstype(ext4) volumeID(capz-gxfhvh#f43e3e2af034745578d174e#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd##azurefile-3410) context(map[csi.storage.k8s.io/pv/name:pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf csi.storage.k8s.io/pvc/name:pvc-bstmx csi.storage.k8s.io/pvc/namespace:azurefile-3410 diskname:pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd fsType:ext4 secretnamespace:azurefile-3410 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652588318618-8081-file.csi.azure.com]) mountflags([invalid mount options]) mountOptions([dir_mode=0777,file_mode=0777,cache=strict,actimeo=30 nostrictsync file_mode=0777 actimeo=30 mfsymlinks]) volumeMountGroup()
I0515 04:27:29.749668       1 nodeserver.go:512] already mounted to target /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount
I0515 04:27:29.749710       1 nodeserver.go:296] NodeStageVolume: volume capz-gxfhvh#f43e3e2af034745578d174e#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd##azurefile-3410 is already mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount
I0515 04:27:29.750060       1 nodeserver.go:339] NodeStageVolume: volume capz-gxfhvh#f43e3e2af034745578d174e#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd##azurefile-3410 formatting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd with mount options([barrier=1 errors=remount-ro invalid loop mount noatime options])
I0515 04:27:29.750094       1 mount_linux.go:487] Attempting to determine if disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd])
I0515 04:27:29.849280       1 mount_linux.go:490] Output: "DEVNAME=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd\nTYPE=ext4\n"
I0515 04:27:29.849312       1 mount_linux.go:376] Checking for issues with fsck on disk: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd
I0515 04:27:29.991325       1 mount_linux.go:477] Attempting to mount disk /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount
I0515 04:27:29.991373       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount)
E0515 04:27:30.019965       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error.

E0515 04:27:30.020009       1 utils.go:81] GRPC error: rpc error: code = Internal desc = could not format /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd
I0515 04:27:34.076646       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0515 04:27:34.076675       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf","csi.storage.k8s.io/pvc/name":"pvc-bstmx","csi.storage.k8s.io/pvc/namespace":"azurefile-3410","diskname":"pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd","fsType":"ext4","secretnamespace":"azurefile-3410","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652588318618-8081-file.csi.azure.com"},"volume_id":"capz-gxfhvh#f43e3e2af034745578d174e#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd##azurefile-3410"}
I0515 04:27:34.076897       1 nodeserver.go:289] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount) fstype(ext4) volumeID(capz-gxfhvh#f43e3e2af034745578d174e#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd##azurefile-3410) context(map[csi.storage.k8s.io/pv/name:pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf csi.storage.k8s.io/pvc/name:pvc-bstmx csi.storage.k8s.io/pvc/namespace:azurefile-3410 diskname:pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd fsType:ext4 secretnamespace:azurefile-3410 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652588318618-8081-file.csi.azure.com]) mountflags([invalid mount options]) mountOptions([dir_mode=0777,file_mode=0777,cache=strict,actimeo=30 nostrictsync file_mode=0777 actimeo=30 mfsymlinks]) volumeMountGroup()
I0515 04:27:34.088154       1 nodeserver.go:512] already mounted to target /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount
I0515 04:27:34.088194       1 nodeserver.go:296] NodeStageVolume: volume capz-gxfhvh#f43e3e2af034745578d174e#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd##azurefile-3410 is already mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount
I0515 04:27:34.088554       1 nodeserver.go:339] NodeStageVolume: volume capz-gxfhvh#f43e3e2af034745578d174e#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd##azurefile-3410 formatting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd with mount options([barrier=1 errors=remount-ro invalid loop mount noatime options])
I0515 04:27:34.088576       1 mount_linux.go:487] Attempting to determine if disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd])
I0515 04:27:34.192202       1 mount_linux.go:490] Output: "DEVNAME=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd\nTYPE=ext4\n"
I0515 04:27:34.192229       1 mount_linux.go:376] Checking for issues with fsck on disk: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd
I0515 04:27:34.333188       1 mount_linux.go:477] Attempting to mount disk /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount
I0515 04:27:34.333236       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount)
E0515 04:27:34.371153       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error.

E0515 04:27:34.371209       1 utils.go:81] GRPC error: rpc error: code = Internal desc = could not format /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd
I0515 04:27:42.436953       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0515 04:27:42.436980       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf","csi.storage.k8s.io/pvc/name":"pvc-bstmx","csi.storage.k8s.io/pvc/namespace":"azurefile-3410","diskname":"pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd","fsType":"ext4","secretnamespace":"azurefile-3410","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652588318618-8081-file.csi.azure.com"},"volume_id":"capz-gxfhvh#f43e3e2af034745578d174e#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd##azurefile-3410"}
I0515 04:27:42.437182       1 nodeserver.go:289] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount) fstype(ext4) volumeID(capz-gxfhvh#f43e3e2af034745578d174e#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd##azurefile-3410) context(map[csi.storage.k8s.io/pv/name:pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf csi.storage.k8s.io/pvc/name:pvc-bstmx csi.storage.k8s.io/pvc/namespace:azurefile-3410 diskname:pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd fsType:ext4 secretnamespace:azurefile-3410 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652588318618-8081-file.csi.azure.com]) mountflags([invalid mount options]) mountOptions([dir_mode=0777,file_mode=0777,cache=strict,actimeo=30 nostrictsync file_mode=0777 actimeo=30 mfsymlinks]) volumeMountGroup()
I0515 04:27:42.448369       1 nodeserver.go:512] already mounted to target /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount
I0515 04:27:42.448489       1 nodeserver.go:296] NodeStageVolume: volume capz-gxfhvh#f43e3e2af034745578d174e#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd##azurefile-3410 is already mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount
I0515 04:27:42.449075       1 nodeserver.go:339] NodeStageVolume: volume capz-gxfhvh#f43e3e2af034745578d174e#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd##azurefile-3410 formatting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd with mount options([barrier=1 errors=remount-ro invalid loop mount noatime options])
I0515 04:27:42.449099       1 mount_linux.go:487] Attempting to determine if disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd])
I0515 04:27:42.549881       1 mount_linux.go:490] Output: "DEVNAME=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd\nTYPE=ext4\n"
I0515 04:27:42.549920       1 mount_linux.go:376] Checking for issues with fsck on disk: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd
I0515 04:27:42.688695       1 mount_linux.go:477] Attempting to mount disk /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount
I0515 04:27:42.688751       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount)
E0515 04:27:42.731171       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error.

E0515 04:27:42.731218       1 utils.go:81] GRPC error: rpc error: code = Internal desc = could not format /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd
I0515 04:27:58.766714       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0515 04:27:58.766742       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf","csi.storage.k8s.io/pvc/name":"pvc-bstmx","csi.storage.k8s.io/pvc/namespace":"azurefile-3410","diskname":"pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd","fsType":"ext4","secretnamespace":"azurefile-3410","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652588318618-8081-file.csi.azure.com"},"volume_id":"capz-gxfhvh#f43e3e2af034745578d174e#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd##azurefile-3410"}
I0515 04:27:58.767070       1 nodeserver.go:289] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount) fstype(ext4) volumeID(capz-gxfhvh#f43e3e2af034745578d174e#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd##azurefile-3410) context(map[csi.storage.k8s.io/pv/name:pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf csi.storage.k8s.io/pvc/name:pvc-bstmx csi.storage.k8s.io/pvc/namespace:azurefile-3410 diskname:pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd fsType:ext4 secretnamespace:azurefile-3410 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652588318618-8081-file.csi.azure.com]) mountflags([invalid mount options]) mountOptions([dir_mode=0777,file_mode=0777,cache=strict,actimeo=30 nostrictsync mfsymlinks file_mode=0777 actimeo=30]) volumeMountGroup()
I0515 04:27:58.785716       1 nodeserver.go:512] already mounted to target /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount
I0515 04:27:58.785780       1 nodeserver.go:296] NodeStageVolume: volume capz-gxfhvh#f43e3e2af034745578d174e#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd##azurefile-3410 is already mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount
I0515 04:27:58.786153       1 nodeserver.go:339] NodeStageVolume: volume capz-gxfhvh#f43e3e2af034745578d174e#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd##azurefile-3410 formatting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd with mount options([barrier=1 errors=remount-ro invalid loop mount noatime options])
I0515 04:27:58.786174       1 mount_linux.go:487] Attempting to determine if disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd])
I0515 04:27:58.887869       1 mount_linux.go:490] Output: "DEVNAME=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd\nTYPE=ext4\n"
I0515 04:27:58.887907       1 mount_linux.go:376] Checking for issues with fsck on disk: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd
I0515 04:27:59.023162       1 mount_linux.go:477] Attempting to mount disk /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount
I0515 04:27:59.023213       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount)
E0515 04:27:59.055511       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error.

E0515 04:27:59.055555       1 utils.go:81] GRPC error: rpc error: code = Internal desc = could not format /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd
I0515 04:28:31.148295       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0515 04:28:31.148324       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf","csi.storage.k8s.io/pvc/name":"pvc-bstmx","csi.storage.k8s.io/pvc/namespace":"azurefile-3410","diskname":"pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd","fsType":"ext4","secretnamespace":"azurefile-3410","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652588318618-8081-file.csi.azure.com"},"volume_id":"capz-gxfhvh#f43e3e2af034745578d174e#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd##azurefile-3410"}
I0515 04:28:31.148568       1 nodeserver.go:289] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount) fstype(ext4) volumeID(capz-gxfhvh#f43e3e2af034745578d174e#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd##azurefile-3410) context(map[csi.storage.k8s.io/pv/name:pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf csi.storage.k8s.io/pvc/name:pvc-bstmx csi.storage.k8s.io/pvc/namespace:azurefile-3410 diskname:pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd fsType:ext4 secretnamespace:azurefile-3410 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652588318618-8081-file.csi.azure.com]) mountflags([invalid mount options]) mountOptions([dir_mode=0777,file_mode=0777,cache=strict,actimeo=30 nostrictsync mfsymlinks file_mode=0777 actimeo=30]) volumeMountGroup()
I0515 04:28:31.168388       1 nodeserver.go:512] already mounted to target /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount
I0515 04:28:31.168435       1 nodeserver.go:296] NodeStageVolume: volume capz-gxfhvh#f43e3e2af034745578d174e#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd##azurefile-3410 is already mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount
I0515 04:28:31.168822       1 nodeserver.go:339] NodeStageVolume: volume capz-gxfhvh#f43e3e2af034745578d174e#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf#pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd##azurefile-3410 formatting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd with mount options([barrier=1 errors=remount-ro invalid loop mount noatime options])
I0515 04:28:31.168850       1 mount_linux.go:487] Attempting to determine if disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd])
I0515 04:28:31.269198       1 mount_linux.go:490] Output: "DEVNAME=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd\nTYPE=ext4\n"
I0515 04:28:31.269238       1 mount_linux.go:376] Checking for issues with fsck on disk: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd
I0515 04:28:31.411277       1 mount_linux.go:477] Attempting to mount disk /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount
I0515 04:28:31.411332       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount)
E0515 04:28:31.445407       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error.

E0515 04:28:31.445537       1 utils.go:81] GRPC error: rpc error: code = Internal desc = could not format /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/globalmount and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7e5edc51-1759-4df1-b18e-6ce440ee3bcf/proxy-mount/pvcd-7e5edc51-1759-4df1-b18e-6ce440ee3bcf.vhd
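Each failed cycle above passes the flags `invalid`, `mount`, and `options` straight through to the ext4 mount, which `mount` rejects with exit status 32 ("wrong fs type, bad option, ..."). As a minimal illustrative sketch (not part of the driver; the "known good" set below is an assumption drawn only from the options visible in these log lines), the offending flags can be pulled out of a `Mounting arguments:` line like so:

```python
import re

# Options that legitimately appear in this particular mount command;
# anything else on the log line is presumed injected by the test.
# (Illustrative set, NOT an exhaustive list of valid ext4 options.)
KNOWN_GOOD = {"barrier=1", "errors=remount-ro", "loop", "noatime", "defaults"}

def suspect_options(log_line: str) -> list:
    """Extract the -o option string from a 'Mounting arguments' log
    line and return the options outside the known-good set."""
    m = re.search(r"-o\s+(\S+)", log_line)
    if not m:
        return []
    return [opt for opt in m.group(1).split(",") if opt not in KNOWN_GOOD]

line = ("Mounting arguments: -t ext4 -o barrier=1,errors=remount-ro,"
        "invalid,loop,mount,noatime,options,defaults /src.vhd /dst")
print(suspect_options(line))  # → ['invalid', 'mount', 'options']
```

This matches the `mount_flags` seen in each NodeStageVolume request (`["invalid","mount","options"]`), which is consistent with a negative test exercising invalid mount options.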
I0515 04:29:35.148389       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0515 04:29:35.148418       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4","csi.storage.k8s.io/pvc/name":"pvc-78zsr","csi.storage.k8s.io/pvc/namespace":"azurefile-8582","diskname":"pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd","fsType":"ext4","secretnamespace":"azurefile-8582","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652588318618-8081-file.csi.azure.com"},"volume_id":"capz-gxfhvh#f43e3e2af034745578d174e#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd##azurefile-8582"}
I0515 04:29:35.148642       1 nodeserver.go:289] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount) fstype(ext4) volumeID(capz-gxfhvh#f43e3e2af034745578d174e#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd##azurefile-8582) context(map[csi.storage.k8s.io/pv/name:pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4 csi.storage.k8s.io/pvc/name:pvc-78zsr csi.storage.k8s.io/pvc/namespace:azurefile-8582 diskname:pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd fsType:ext4 secretnamespace:azurefile-8582 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652588318618-8081-file.csi.azure.com]) mountflags([invalid mount options]) mountOptions([dir_mode=0777,file_mode=0777,cache=strict,actimeo=30 nostrictsync mfsymlinks file_mode=0777 actimeo=30]) volumeMountGroup()
I0515 04:29:35.149095       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t cifs -o dir_mode=0777,file_mode=0777,cache=strict,actimeo=30,nostrictsync,mfsymlinks,file_mode=0777,actimeo=30,<masked> //f43e3e2af034745578d174e.file.core.windows.net/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount)
I0515 04:29:35.224330       1 nodeserver.go:319] volume(capz-gxfhvh#f43e3e2af034745578d174e#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd##azurefile-8582) mount //f43e3e2af034745578d174e.file.core.windows.net/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4 on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount succeeded
I0515 04:29:35.224752       1 nodeserver.go:339] NodeStageVolume: volume capz-gxfhvh#f43e3e2af034745578d174e#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd##azurefile-8582 formatting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd with mount options([barrier=1 errors=remount-ro invalid loop mount noatime options])
I0515 04:29:35.224781       1 mount_linux.go:487] Attempting to determine if disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd])
I0515 04:29:35.328779       1 mount_linux.go:490] Output: ""
I0515 04:29:35.328816       1 mount_linux.go:449] Disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd" appears to be unformatted, attempting to format as type: "ext4" with options: [-F -m0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd]
I0515 04:29:36.070862       1 mount_linux.go:459] Disk successfully formatted (mkfs): ext4 - /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount
I0515 04:29:36.070897       1 mount_linux.go:477] Attempting to mount disk /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount
I0515 04:29:36.070927       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount)
E0515 04:29:36.109353       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error.

E0515 04:29:36.109411       1 utils.go:81] GRPC error: rpc error: code = Internal desc = could not format /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd
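The cycle ending above differs from the earlier ones: `blkid` printed empty output (`Output: ""`), so the driver formatted the fresh VHD with `mkfs` before attempting the mount. A minimal sketch of that format-detection decision, assuming only that `blkid -o export` emits `KEY=VALUE` lines as shown in the log (the function names here are hypothetical, not the driver's own):

```python
def parse_blkid_export(output: str) -> dict:
    """Parse `blkid -o export` output (KEY=VALUE per line) into a dict."""
    pairs = (line.split("=", 1) for line in output.splitlines() if "=" in line)
    return dict(pairs)

def needs_format(output: str) -> bool:
    """Empty or TYPE-less blkid output (as for the fresh VHD in the log)
    means the device is unformatted and mkfs should run first."""
    return "TYPE" not in parse_blkid_export(output)

formatted = "DEVNAME=/var/lib/kubelet/.../pvcd.vhd\nTYPE=ext4\n"
print(needs_format(formatted))  # → False  (already ext4; skip mkfs)
print(needs_format(""))         # → True   (unformatted; run mkfs first)
```

Note that even after the successful `mkfs`, the mount still fails with exit status 32, confirming the failure comes from the invalid `-o` flags rather than from the filesystem itself.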
I0515 04:29:36.663186       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0515 04:29:36.663211       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4","csi.storage.k8s.io/pvc/name":"pvc-78zsr","csi.storage.k8s.io/pvc/namespace":"azurefile-8582","diskname":"pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd","fsType":"ext4","secretnamespace":"azurefile-8582","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652588318618-8081-file.csi.azure.com"},"volume_id":"capz-gxfhvh#f43e3e2af034745578d174e#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd##azurefile-8582"}
I0515 04:29:36.663432       1 nodeserver.go:289] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount) fstype(ext4) volumeID(capz-gxfhvh#f43e3e2af034745578d174e#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd##azurefile-8582) context(map[csi.storage.k8s.io/pv/name:pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4 csi.storage.k8s.io/pvc/name:pvc-78zsr csi.storage.k8s.io/pvc/namespace:azurefile-8582 diskname:pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd fsType:ext4 secretnamespace:azurefile-8582 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652588318618-8081-file.csi.azure.com]) mountflags([invalid mount options]) mountOptions([dir_mode=0777,file_mode=0777,cache=strict,actimeo=30 nostrictsync file_mode=0777 actimeo=30 mfsymlinks]) volumeMountGroup()
I0515 04:29:36.674866       1 nodeserver.go:512] already mounted to target /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount
I0515 04:29:36.675025       1 nodeserver.go:296] NodeStageVolume: volume capz-gxfhvh#f43e3e2af034745578d174e#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd##azurefile-8582 is already mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount
I0515 04:29:36.675480       1 nodeserver.go:339] NodeStageVolume: volume capz-gxfhvh#f43e3e2af034745578d174e#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd##azurefile-8582 formatting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd with mount options([barrier=1 errors=remount-ro invalid loop mount noatime options])
I0515 04:29:36.675504       1 mount_linux.go:487] Attempting to determine if disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd])
I0515 04:29:36.780461       1 mount_linux.go:490] Output: "DEVNAME=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd\nTYPE=ext4\n"
I0515 04:29:36.780487       1 mount_linux.go:376] Checking for issues with fsck on disk: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd
I0515 04:29:36.938365       1 mount_linux.go:477] Attempting to mount disk /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount
I0515 04:29:36.938426       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount)
E0515 04:29:36.977270       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error.

E0515 04:29:36.977315       1 utils.go:81] GRPC error: rpc error: code = Internal desc = could not format /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd
I0515 04:29:38.073864       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0515 04:29:38.073895       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4","csi.storage.k8s.io/pvc/name":"pvc-78zsr","csi.storage.k8s.io/pvc/namespace":"azurefile-8582","diskname":"pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd","fsType":"ext4","secretnamespace":"azurefile-8582","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652588318618-8081-file.csi.azure.com"},"volume_id":"capz-gxfhvh#f43e3e2af034745578d174e#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd##azurefile-8582"}
I0515 04:29:38.074132       1 nodeserver.go:289] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount) fstype(ext4) volumeID(capz-gxfhvh#f43e3e2af034745578d174e#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd##azurefile-8582) context(map[csi.storage.k8s.io/pv/name:pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4 csi.storage.k8s.io/pvc/name:pvc-78zsr csi.storage.k8s.io/pvc/namespace:azurefile-8582 diskname:pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd fsType:ext4 secretnamespace:azurefile-8582 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652588318618-8081-file.csi.azure.com]) mountflags([invalid mount options]) mountOptions([dir_mode=0777,file_mode=0777,cache=strict,actimeo=30 nostrictsync file_mode=0777 actimeo=30 mfsymlinks]) volumeMountGroup()
I0515 04:29:38.085836       1 nodeserver.go:512] already mounted to target /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount
I0515 04:29:38.085883       1 nodeserver.go:296] NodeStageVolume: volume capz-gxfhvh#f43e3e2af034745578d174e#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd##azurefile-8582 is already mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount
I0515 04:29:38.086308       1 nodeserver.go:339] NodeStageVolume: volume capz-gxfhvh#f43e3e2af034745578d174e#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd##azurefile-8582 formatting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd with mount options([barrier=1 errors=remount-ro invalid loop mount noatime options])
I0515 04:29:38.086339       1 mount_linux.go:487] Attempting to determine if disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd])
I0515 04:29:38.192262       1 mount_linux.go:490] Output: "DEVNAME=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd\nTYPE=ext4\n"
I0515 04:29:38.192304       1 mount_linux.go:376] Checking for issues with fsck on disk: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd
I0515 04:29:38.353208       1 mount_linux.go:477] Attempting to mount disk /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount
I0515 04:29:38.353260       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount)
E0515 04:29:38.392109       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error.

E0515 04:29:38.392381       1 utils.go:81] GRPC error: rpc error: code = Internal desc = could not format /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd
I0515 04:29:40.492459       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0515 04:29:40.492485       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4","csi.storage.k8s.io/pvc/name":"pvc-78zsr","csi.storage.k8s.io/pvc/namespace":"azurefile-8582","diskname":"pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd","fsType":"ext4","secretnamespace":"azurefile-8582","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652588318618-8081-file.csi.azure.com"},"volume_id":"capz-gxfhvh#f43e3e2af034745578d174e#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd##azurefile-8582"}
I0515 04:29:40.492712       1 nodeserver.go:289] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount) fstype(ext4) volumeID(capz-gxfhvh#f43e3e2af034745578d174e#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd##azurefile-8582) context(map[csi.storage.k8s.io/pv/name:pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4 csi.storage.k8s.io/pvc/name:pvc-78zsr csi.storage.k8s.io/pvc/namespace:azurefile-8582 diskname:pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd fsType:ext4 secretnamespace:azurefile-8582 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652588318618-8081-file.csi.azure.com]) mountflags([invalid mount options]) mountOptions([dir_mode=0777,file_mode=0777,cache=strict,actimeo=30 nostrictsync file_mode=0777 actimeo=30 mfsymlinks]) volumeMountGroup()
I0515 04:29:40.503830       1 nodeserver.go:512] already mounted to target /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount
I0515 04:29:40.503873       1 nodeserver.go:296] NodeStageVolume: volume capz-gxfhvh#f43e3e2af034745578d174e#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd##azurefile-8582 is already mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount
I0515 04:29:40.504230       1 nodeserver.go:339] NodeStageVolume: volume capz-gxfhvh#f43e3e2af034745578d174e#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd##azurefile-8582 formatting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd with mount options([barrier=1 errors=remount-ro invalid loop mount noatime options])
I0515 04:29:40.504251       1 mount_linux.go:487] Attempting to determine if disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd])
I0515 04:29:40.603362       1 mount_linux.go:490] Output: "DEVNAME=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd\nTYPE=ext4\n"
I0515 04:29:40.603390       1 mount_linux.go:376] Checking for issues with fsck on disk: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd
I0515 04:29:40.755475       1 mount_linux.go:477] Attempting to mount disk /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount
I0515 04:29:40.755534       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount)
E0515 04:29:40.785512       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error.

E0515 04:29:40.785556       1 utils.go:81] GRPC error: rpc error: code = Internal desc = could not format /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd
I0515 04:29:44.828525       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0515 04:29:44.828551       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4","csi.storage.k8s.io/pvc/name":"pvc-78zsr","csi.storage.k8s.io/pvc/namespace":"azurefile-8582","diskname":"pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd","fsType":"ext4","secretnamespace":"azurefile-8582","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652588318618-8081-file.csi.azure.com"},"volume_id":"capz-gxfhvh#f43e3e2af034745578d174e#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd##azurefile-8582"}
I0515 04:29:44.828769       1 nodeserver.go:289] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount) fstype(ext4) volumeID(capz-gxfhvh#f43e3e2af034745578d174e#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd##azurefile-8582) context(map[csi.storage.k8s.io/pv/name:pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4 csi.storage.k8s.io/pvc/name:pvc-78zsr csi.storage.k8s.io/pvc/namespace:azurefile-8582 diskname:pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd fsType:ext4 secretnamespace:azurefile-8582 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652588318618-8081-file.csi.azure.com]) mountflags([invalid mount options]) mountOptions([dir_mode=0777,file_mode=0777,cache=strict,actimeo=30 nostrictsync actimeo=30 mfsymlinks file_mode=0777]) volumeMountGroup()
I0515 04:29:44.839753       1 nodeserver.go:512] already mounted to target /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount
I0515 04:29:44.839795       1 nodeserver.go:296] NodeStageVolume: volume capz-gxfhvh#f43e3e2af034745578d174e#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd##azurefile-8582 is already mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount
I0515 04:29:44.840182       1 nodeserver.go:339] NodeStageVolume: volume capz-gxfhvh#f43e3e2af034745578d174e#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd##azurefile-8582 formatting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd with mount options([barrier=1 errors=remount-ro invalid loop mount noatime options])
I0515 04:29:44.840205       1 mount_linux.go:487] Attempting to determine if disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd])
I0515 04:29:44.939784       1 mount_linux.go:490] Output: "DEVNAME=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd\nTYPE=ext4\n"
I0515 04:29:44.939818       1 mount_linux.go:376] Checking for issues with fsck on disk: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd
I0515 04:29:45.092813       1 mount_linux.go:477] Attempting to mount disk /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount
I0515 04:29:45.092864       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount)
E0515 04:29:45.125413       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error.

E0515 04:29:45.125462       1 utils.go:81] GRPC error: rpc error: code = Internal desc = could not format /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd
I0515 04:29:53.189932       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0515 04:29:53.189964       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4","csi.storage.k8s.io/pvc/name":"pvc-78zsr","csi.storage.k8s.io/pvc/namespace":"azurefile-8582","diskname":"pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd","fsType":"ext4","secretnamespace":"azurefile-8582","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652588318618-8081-file.csi.azure.com"},"volume_id":"capz-gxfhvh#f43e3e2af034745578d174e#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd##azurefile-8582"}
I0515 04:29:53.190204       1 nodeserver.go:289] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount) fstype(ext4) volumeID(capz-gxfhvh#f43e3e2af034745578d174e#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd##azurefile-8582) context(map[csi.storage.k8s.io/pv/name:pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4 csi.storage.k8s.io/pvc/name:pvc-78zsr csi.storage.k8s.io/pvc/namespace:azurefile-8582 diskname:pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd fsType:ext4 secretnamespace:azurefile-8582 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652588318618-8081-file.csi.azure.com]) mountflags([invalid mount options]) mountOptions([dir_mode=0777,file_mode=0777,cache=strict,actimeo=30 nostrictsync file_mode=0777 actimeo=30 mfsymlinks]) volumeMountGroup()
I0515 04:29:53.202003       1 nodeserver.go:512] already mounted to target /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount
I0515 04:29:53.202048       1 nodeserver.go:296] NodeStageVolume: volume capz-gxfhvh#f43e3e2af034745578d174e#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd##azurefile-8582 is already mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount
I0515 04:29:53.202439       1 nodeserver.go:339] NodeStageVolume: volume capz-gxfhvh#f43e3e2af034745578d174e#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd##azurefile-8582 formatting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd with mount options([barrier=1 errors=remount-ro invalid loop mount noatime options])
I0515 04:29:53.202461       1 mount_linux.go:487] Attempting to determine if disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd])
I0515 04:29:53.301641       1 mount_linux.go:490] Output: "DEVNAME=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd\nTYPE=ext4\n"
I0515 04:29:53.301673       1 mount_linux.go:376] Checking for issues with fsck on disk: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd
I0515 04:29:53.455486       1 mount_linux.go:477] Attempting to mount disk /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount
I0515 04:29:53.455533       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount)
E0515 04:29:53.481428       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error.

E0515 04:29:53.481475       1 utils.go:81] GRPC error: rpc error: code = Internal desc = could not format /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd
I0515 04:30:09.519371       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0515 04:30:09.519399       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4","csi.storage.k8s.io/pvc/name":"pvc-78zsr","csi.storage.k8s.io/pvc/namespace":"azurefile-8582","diskname":"pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd","fsType":"ext4","secretnamespace":"azurefile-8582","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652588318618-8081-file.csi.azure.com"},"volume_id":"capz-gxfhvh#f43e3e2af034745578d174e#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd##azurefile-8582"}
I0515 04:30:09.519658       1 nodeserver.go:289] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount) fstype(ext4) volumeID(capz-gxfhvh#f43e3e2af034745578d174e#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd##azurefile-8582) context(map[csi.storage.k8s.io/pv/name:pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4 csi.storage.k8s.io/pvc/name:pvc-78zsr csi.storage.k8s.io/pvc/namespace:azurefile-8582 diskname:pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd fsType:ext4 secretnamespace:azurefile-8582 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652588318618-8081-file.csi.azure.com]) mountflags([invalid mount options]) mountOptions([dir_mode=0777,file_mode=0777,cache=strict,actimeo=30 nostrictsync actimeo=30 mfsymlinks file_mode=0777]) volumeMountGroup()
I0515 04:30:09.538407       1 nodeserver.go:512] already mounted to target /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount
I0515 04:30:09.538453       1 nodeserver.go:296] NodeStageVolume: volume capz-gxfhvh#f43e3e2af034745578d174e#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd##azurefile-8582 is already mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount
I0515 04:30:09.538917       1 nodeserver.go:339] NodeStageVolume: volume capz-gxfhvh#f43e3e2af034745578d174e#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd##azurefile-8582 formatting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd with mount options([barrier=1 errors=remount-ro invalid loop mount noatime options])
I0515 04:30:09.538944       1 mount_linux.go:487] Attempting to determine if disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd])
I0515 04:30:09.636849       1 mount_linux.go:490] Output: "DEVNAME=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd\nTYPE=ext4\n"
I0515 04:30:09.636888       1 mount_linux.go:376] Checking for issues with fsck on disk: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd
I0515 04:30:09.786777       1 mount_linux.go:477] Attempting to mount disk /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount
I0515 04:30:09.786962       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount)
E0515 04:30:09.813515       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error.

E0515 04:30:09.813564       1 utils.go:81] GRPC error: rpc error: code = Internal desc = could not format /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd
I0515 04:30:41.876968       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0515 04:30:41.876996       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4","csi.storage.k8s.io/pvc/name":"pvc-78zsr","csi.storage.k8s.io/pvc/namespace":"azurefile-8582","diskname":"pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd","fsType":"ext4","secretnamespace":"azurefile-8582","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652588318618-8081-file.csi.azure.com"},"volume_id":"capz-gxfhvh#f43e3e2af034745578d174e#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd##azurefile-8582"}
I0515 04:30:41.877204       1 nodeserver.go:289] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount) fstype(ext4) volumeID(capz-gxfhvh#f43e3e2af034745578d174e#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd##azurefile-8582) context(map[csi.storage.k8s.io/pv/name:pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4 csi.storage.k8s.io/pvc/name:pvc-78zsr csi.storage.k8s.io/pvc/namespace:azurefile-8582 diskname:pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd fsType:ext4 secretnamespace:azurefile-8582 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652588318618-8081-file.csi.azure.com]) mountflags([invalid mount options]) mountOptions([dir_mode=0777,file_mode=0777,cache=strict,actimeo=30 nostrictsync file_mode=0777 actimeo=30 mfsymlinks]) volumeMountGroup()
I0515 04:30:41.896064       1 nodeserver.go:512] already mounted to target /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount
I0515 04:30:41.896114       1 nodeserver.go:296] NodeStageVolume: volume capz-gxfhvh#f43e3e2af034745578d174e#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd##azurefile-8582 is already mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount
I0515 04:30:41.896486       1 nodeserver.go:339] NodeStageVolume: volume capz-gxfhvh#f43e3e2af034745578d174e#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4#pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd##azurefile-8582 formatting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd with mount options([barrier=1 errors=remount-ro invalid loop mount noatime options])
I0515 04:30:41.896510       1 mount_linux.go:487] Attempting to determine if disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd])
I0515 04:30:41.993602       1 mount_linux.go:490] Output: "DEVNAME=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd\nTYPE=ext4\n"
I0515 04:30:41.993643       1 mount_linux.go:376] Checking for issues with fsck on disk: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd
I0515 04:30:42.144950       1 mount_linux.go:477] Attempting to mount disk /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount
I0515 04:30:42.144996       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount)
E0515 04:30:42.187642       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error.

E0515 04:30:42.187691       1 utils.go:81] GRPC error: rpc error: code = Internal desc = could not format /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/globalmount and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9cf78b2-2073-4475-8b1c-771aeedd59f4/proxy-mount/pvcd-e9cf78b2-2073-4475-8b1c-771aeedd59f4.vhd
I0515 04:31:45.480133       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0515 04:31:45.480164       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-b4b4deb0-1455-4545-b9f6-56d66476d65c/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-b4b4deb0-1455-4545-b9f6-56d66476d65c","csi.storage.k8s.io/pvc/name":"pvc-gn2vf","csi.storage.k8s.io/pvc/namespace":"azurefile-7726","diskname":"pvcd-b4b4deb0-1455-4545-b9f6-56d66476d65c.vhd","fsType":"xfs","secretnamespace":"azurefile-7726","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652588318618-8081-file.csi.azure.com"},"volume_id":"capz-gxfhvh#f43e3e2af034745578d174e#pvcd-b4b4deb0-1455-4545-b9f6-56d66476d65c#pvcd-b4b4deb0-1455-4545-b9f6-56d66476d65c.vhd##azurefile-7726"}
I0515 04:31:45.480396       1 nodeserver.go:289] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-b4b4deb0-1455-4545-b9f6-56d66476d65c/proxy-mount) fstype(xfs) volumeID(capz-gxfhvh#f43e3e2af034745578d174e#pvcd-b4b4deb0-1455-4545-b9f6-56d66476d65c#pvcd-b4b4deb0-1455-4545-b9f6-56d66476d65c.vhd##azurefile-7726) context(map[csi.storage.k8s.io/pv/name:pvc-b4b4deb0-1455-4545-b9f6-56d66476d65c csi.storage.k8s.io/pvc/name:pvc-gn2vf csi.storage.k8s.io/pvc/namespace:azurefile-7726 diskname:pvcd-b4b4deb0-1455-4545-b9f6-56d66476d65c.vhd fsType:xfs secretnamespace:azurefile-7726 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652588318618-8081-file.csi.azure.com]) mountflags([]) mountOptions([dir_mode=0777,file_mode=0777,cache=strict,actimeo=30 nostrictsync file_mode=0777 actimeo=30 mfsymlinks]) volumeMountGroup()
I0515 04:31:45.480918       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t cifs -o dir_mode=0777,file_mode=0777,cache=strict,actimeo=30,nostrictsync,file_mode=0777,actimeo=30,mfsymlinks,<masked> //f43e3e2af034745578d174e.file.core.windows.net/pvcd-b4b4deb0-1455-4545-b9f6-56d66476d65c /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-b4b4deb0-1455-4545-b9f6-56d66476d65c/proxy-mount)
I0515 04:31:45.557672       1 nodeserver.go:319] volume(capz-gxfhvh#f43e3e2af034745578d174e#pvcd-b4b4deb0-1455-4545-b9f6-56d66476d65c#pvcd-b4b4deb0-1455-4545-b9f6-56d66476d65c.vhd##azurefile-7726) mount //f43e3e2af034745578d174e.file.core.windows.net/pvcd-b4b4deb0-1455-4545-b9f6-56d66476d65c on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-b4b4deb0-1455-4545-b9f6-56d66476d65c/proxy-mount succeeded
I0515 04:31:45.558100       1 nodeserver.go:339] NodeStageVolume: volume capz-gxfhvh#f43e3e2af034745578d174e#pvcd-b4b4deb0-1455-4545-b9f6-56d66476d65c#pvcd-b4b4deb0-1455-4545-b9f6-56d66476d65c.vhd##azurefile-7726 formatting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-b4b4deb0-1455-4545-b9f6-56d66476d65c/globalmount and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-b4b4deb0-1455-4545-b9f6-56d66476d65c/proxy-mount/pvcd-b4b4deb0-1455-4545-b9f6-56d66476d65c.vhd with mount options([loop])
... skipping 412 lines ...
I0515 04:35:41.211916       1 mount_linux.go:183] Mounting cmd (mount) with arguments ( -o bind,remount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01f98ea4-aa67-4de0-adeb-49cdb3d03654/globalmount /var/lib/kubelet/pods/faaeb77d-8774-4852-9222-2408d0358714/volumes/kubernetes.io~csi/pvc-01f98ea4-aa67-4de0-adeb-49cdb3d03654/mount)
I0515 04:35:41.217650       1 nodeserver.go:116] NodePublishVolume: mount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01f98ea4-aa67-4de0-adeb-49cdb3d03654/globalmount at /var/lib/kubelet/pods/faaeb77d-8774-4852-9222-2408d0358714/volumes/kubernetes.io~csi/pvc-01f98ea4-aa67-4de0-adeb-49cdb3d03654/mount successfully
I0515 04:35:41.217673       1 utils.go:83] GRPC response: {}
I0515 04:36:17.910051       1 utils.go:76] GRPC call: /csi.v1.Node/NodePublishVolume
I0515 04:36:17.910082       1 utils.go:77] GRPC request: {"target_path":"/var/lib/kubelet/pods/25829ad8-f1cc-4650-99a7-d6fe010eae96/volumes/kubernetes.io~csi/test-volume-1/mount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/ephemeral":"true","csi.storage.k8s.io/pod.name":"azurefile-volume-tester-88lw9","csi.storage.k8s.io/pod.namespace":"azurefile-4162","csi.storage.k8s.io/pod.uid":"25829ad8-f1cc-4650-99a7-d6fe010eae96","csi.storage.k8s.io/serviceAccount.name":"default","mountOptions":"cache=singleclient","secretName":"azure-storage-account-fc0766d5d43dd4201a264f7-secret","server":"","shareName":"csi-inline-smb-volume"},"volume_id":"csi-8089fb8a410779d7fe3cd68a5c039c57978e788325072077dba5eef2c604f73f"}
I0515 04:36:17.910239       1 nodeserver.go:68] NodePublishVolume: ephemeral volume(csi-8089fb8a410779d7fe3cd68a5c039c57978e788325072077dba5eef2c604f73f) mount on /var/lib/kubelet/pods/25829ad8-f1cc-4650-99a7-d6fe010eae96/volumes/kubernetes.io~csi/test-volume-1/mount, VolumeContext: map[csi.storage.k8s.io/ephemeral:true csi.storage.k8s.io/pod.name:azurefile-volume-tester-88lw9 csi.storage.k8s.io/pod.namespace:azurefile-4162 csi.storage.k8s.io/pod.uid:25829ad8-f1cc-4650-99a7-d6fe010eae96 csi.storage.k8s.io/serviceAccount.name:default getaccountkeyfromsecret:true mountOptions:cache=singleclient secretName:azure-storage-account-fc0766d5d43dd4201a264f7-secret secretnamespace:azurefile-4162 server: shareName:csi-inline-smb-volume storageaccount:]
W0515 04:36:17.910275       1 azurefile.go:564] parsing volumeID(csi-8089fb8a410779d7fe3cd68a5c039c57978e788325072077dba5eef2c604f73f) return with error: error parsing volume id: "csi-8089fb8a410779d7fe3cd68a5c039c57978e788325072077dba5eef2c604f73f", should at least contain two #
I0515 04:36:17.915925       1 nodeserver.go:289] cifsMountPath(/var/lib/kubelet/pods/25829ad8-f1cc-4650-99a7-d6fe010eae96/volumes/kubernetes.io~csi/test-volume-1/mount) fstype() volumeID(csi-8089fb8a410779d7fe3cd68a5c039c57978e788325072077dba5eef2c604f73f) context(map[csi.storage.k8s.io/ephemeral:true csi.storage.k8s.io/pod.name:azurefile-volume-tester-88lw9 csi.storage.k8s.io/pod.namespace:azurefile-4162 csi.storage.k8s.io/pod.uid:25829ad8-f1cc-4650-99a7-d6fe010eae96 csi.storage.k8s.io/serviceAccount.name:default getaccountkeyfromsecret:true mountOptions:cache=singleclient secretName:azure-storage-account-fc0766d5d43dd4201a264f7-secret secretnamespace:azurefile-4162 server: shareName:csi-inline-smb-volume storageaccount:]) mountflags([]) mountOptions([actimeo=30 cache=singleclient dir_mode=0777 file_mode=0777 mfsymlinks]) volumeMountGroup()
I0515 04:36:17.916466       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t cifs -o actimeo=30,cache=singleclient,dir_mode=0777,file_mode=0777,mfsymlinks,<masked> //fc0766d5d43dd4201a264f7.file.core.windows.net/csi-inline-smb-volume /var/lib/kubelet/pods/25829ad8-f1cc-4650-99a7-d6fe010eae96/volumes/kubernetes.io~csi/test-volume-1/mount)
I0515 04:36:18.034671       1 nodeserver.go:319] volume(csi-8089fb8a410779d7fe3cd68a5c039c57978e788325072077dba5eef2c604f73f) mount //fc0766d5d43dd4201a264f7.file.core.windows.net/csi-inline-smb-volume on /var/lib/kubelet/pods/25829ad8-f1cc-4650-99a7-d6fe010eae96/volumes/kubernetes.io~csi/test-volume-1/mount succeeded
I0515 04:36:18.034712       1 utils.go:83] GRPC response: {}
I0515 04:36:20.931428       1 utils.go:76] GRPC call: /csi.v1.Node/NodeUnpublishVolume
I0515 04:36:20.931458       1 utils.go:77] GRPC request: {"target_path":"/var/lib/kubelet/pods/25829ad8-f1cc-4650-99a7-d6fe010eae96/volumes/kubernetes.io~csi/test-volume-1/mount","volume_id":"csi-8089fb8a410779d7fe3cd68a5c039c57978e788325072077dba5eef2c604f73f"}
... skipping 20 lines ...
I0515 04:36:45.022574       1 utils.go:83] GRPC response: {}
I0515 04:37:26.046949       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0515 04:37:26.046978       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-bb785061-8062-4eb8-8335-c701b4e6d575/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["nconnect=8","rsize=1048576","wsize=1048576"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-bb785061-8062-4eb8-8335-c701b4e6d575","csi.storage.k8s.io/pvc/name":"pvc-v2dwr","csi.storage.k8s.io/pvc/namespace":"azurefile-9103","mountPermissions":"0755","protocol":"nfs","rootSquashType":"RootSquash","secretnamespace":"azurefile-9103","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652588318618-8081-file.csi.azure.com"},"volume_id":"capz-gxfhvh#f0fbca06a82134096a7597a#pvcn-bb785061-8062-4eb8-8335-c701b4e6d575###azurefile-9103"}
I0515 04:37:26.047169       1 nodeserver.go:289] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-bb785061-8062-4eb8-8335-c701b4e6d575/globalmount) fstype() volumeID(capz-gxfhvh#f0fbca06a82134096a7597a#pvcn-bb785061-8062-4eb8-8335-c701b4e6d575###azurefile-9103) context(map[csi.storage.k8s.io/pv/name:pvc-bb785061-8062-4eb8-8335-c701b4e6d575 csi.storage.k8s.io/pvc/name:pvc-v2dwr csi.storage.k8s.io/pvc/namespace:azurefile-9103 mountPermissions:0755 protocol:nfs rootSquashType:RootSquash secretnamespace:azurefile-9103 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652588318618-8081-file.csi.azure.com]) mountflags([nconnect=8 rsize=1048576 wsize=1048576]) mountOptions([nconnect=8 rsize=1048576 vers=4,minorversion=1,sec=sys wsize=1048576]) volumeMountGroup()
I0515 04:37:26.047680       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t nfs -o nconnect=8,rsize=1048576,vers=4,minorversion=1,sec=sys,wsize=1048576 f0fbca06a82134096a7597a.file.core.windows.net:/f0fbca06a82134096a7597a/pvcn-bb785061-8062-4eb8-8335-c701b4e6d575 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-bb785061-8062-4eb8-8335-c701b4e6d575/globalmount)
I0515 04:37:26.571849       1 utils.go:220] chmod targetPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-bb785061-8062-4eb8-8335-c701b4e6d575/globalmount, mode:020000000777) with permissions(0755)
E0515 04:37:26.574382       1 utils.go:81] GRPC error: rpc error: code = Internal desc = chmod /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-bb785061-8062-4eb8-8335-c701b4e6d575/globalmount: operation not permitted
I0515 04:37:27.152222       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0515 04:37:27.152249       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-bb785061-8062-4eb8-8335-c701b4e6d575/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["nconnect=8","rsize=1048576","wsize=1048576"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-bb785061-8062-4eb8-8335-c701b4e6d575","csi.storage.k8s.io/pvc/name":"pvc-v2dwr","csi.storage.k8s.io/pvc/namespace":"azurefile-9103","mountPermissions":"0755","protocol":"nfs","rootSquashType":"RootSquash","secretnamespace":"azurefile-9103","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652588318618-8081-file.csi.azure.com"},"volume_id":"capz-gxfhvh#f0fbca06a82134096a7597a#pvcn-bb785061-8062-4eb8-8335-c701b4e6d575###azurefile-9103"}
I0515 04:37:27.152453       1 nodeserver.go:289] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-bb785061-8062-4eb8-8335-c701b4e6d575/globalmount) fstype() volumeID(capz-gxfhvh#f0fbca06a82134096a7597a#pvcn-bb785061-8062-4eb8-8335-c701b4e6d575###azurefile-9103) context(map[csi.storage.k8s.io/pv/name:pvc-bb785061-8062-4eb8-8335-c701b4e6d575 csi.storage.k8s.io/pvc/name:pvc-v2dwr csi.storage.k8s.io/pvc/namespace:azurefile-9103 mountPermissions:0755 protocol:nfs rootSquashType:RootSquash secretnamespace:azurefile-9103 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652588318618-8081-file.csi.azure.com]) mountflags([nconnect=8 rsize=1048576 wsize=1048576]) mountOptions([nconnect=8 rsize=1048576 vers=4,minorversion=1,sec=sys wsize=1048576]) volumeMountGroup()
I0515 04:37:27.160230       1 nodeserver.go:512] already mounted to target /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-bb785061-8062-4eb8-8335-c701b4e6d575/globalmount
I0515 04:37:27.160272       1 nodeserver.go:296] NodeStageVolume: volume capz-gxfhvh#f0fbca06a82134096a7597a#pvcn-bb785061-8062-4eb8-8335-c701b4e6d575###azurefile-9103 is already mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-bb785061-8062-4eb8-8335-c701b4e6d575/globalmount
I0515 04:37:27.160289       1 utils.go:83] GRPC response: {}
... skipping 459 lines ...
2022/05/15 04:41:14 ===================================================
STEP: GetAccountNumByResourceGroup(capz-gxfhvh) returns 8 accounts

JUnit report was created: /logs/artifacts/junit_01.xml

Ran 31 of 34 Specs in 1946.290 seconds
SUCCESS! -- 31 Passed | 0 Failed | 0 Pending | 3 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 35 lines ...
No journal files were found.
No journal files were found.
No journal files were found.
No journal files were found.
./scripts/../hack/log/log-dump.sh: line 93: TEST_WINDOWS: unbound variable
daemonset.apps "log-dump-node" deleted
Error from server (NotFound): error when deleting "./scripts/../hack/log/../../hack/log/log-dump-daemonset-windows.yaml": daemonsets.apps "log-dump-node-windows" not found
================ REDACTING LOGS ================
All sensitive variables are redacted
cluster.cluster.x-k8s.io "capz-gxfhvh" deleted
kind delete cluster --name=capz || true
Deleting cluster "capz" ...
kind delete cluster --name=capz-e2e || true
... skipping 12 lines ...