PR andyzhangx: fix: bypass chmod if mounting point permissions are correct
Result Not Finished
Started 2022-05-13 13:30
Revision
Refs 1019

Build Still Running!
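
The PR under test, "fix: bypass chmod if mounting point permissions are correct", amounts to checking the mount point's current permission bits and skipping the chmod when they already match the requested mode. A minimal Go sketch of that pattern is below; the function and path names are illustrative, not the driver's actual code.

```go
// Hypothetical sketch of the "bypass chmod" idea under test: only call
// os.Chmod when the target's current permission bits differ from the
// requested ones. Names here are illustrative, not the driver's real code.
package main

import (
	"fmt"
	"os"
)

// chmodIfPermissionMismatch changes target's mode only when needed.
func chmodIfPermissionMismatch(target string, mode os.FileMode) error {
	info, err := os.Lstat(target)
	if err != nil {
		return err
	}
	if info.Mode().Perm() == mode.Perm() {
		// Permissions already correct: skip the chmod call entirely.
		return nil
	}
	fmt.Printf("chmod targetPath(%s) with permissions(%o)\n", target, mode)
	return os.Chmod(target, mode)
}

func main() {
	// Placeholder mount path; a CSI node driver would pass the real targetPath.
	if err := chmodIfPermissionMismatch("/tmp", 0777); err != nil {
		fmt.Println("chmod failed:", err)
	}
}
```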


31 Passed Tests

3 Skipped Tests

Error lines from build-log.txt

... skipping 672 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
# Deploy CAPI
curl --retry 3 -sSL https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.1.2/cluster-api-components.yaml | /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/envsubst-v2.0.0-20210730161058-179042472c46 | kubectl apply -f -
namespace/capi-system created
customresourcedefinition.apiextensions.k8s.io/clusterclasses.cluster.x-k8s.io created
... skipping 132 lines ...
# Wait for the kubeconfig to become available.
timeout --foreground 300 bash -c "while ! kubectl get secrets | grep capz-fyjq3n-kubeconfig; do sleep 1; done"
capz-fyjq3n-kubeconfig                 cluster.x-k8s.io/secret               1      0s
# Get kubeconfig and store it locally.
kubectl get secrets capz-fyjq3n-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
timeout --foreground 600 bash -c "while ! kubectl --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
error: the server doesn't have a resource type "nodes"
capz-fyjq3n-control-plane-lcd48   NotReady   control-plane,master   2s    v1.22.1
run "kubectl --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Waiting for 1 control plane machine(s), 2 worker machine(s), and  windows machine(s) to become Ready
node/capz-fyjq3n-control-plane-lcd48 condition met
node/capz-fyjq3n-md-0-9trgq condition met
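
The kubectl | jq | base64 pipeline a few lines above extracts the workload cluster's kubeconfig from the capz-fyjq3n-kubeconfig secret. A hedged client-go equivalent is sketched below; the management-cluster kubeconfig path and the "default" namespace are assumptions, only the secret name and key come from the log.

```go
// Hedged client-go equivalent of the kubectl|jq|base64 pipeline above:
// read the CAPI-generated kubeconfig secret and write its "value" key to a
// local file. Namespace and kubeconfig path are placeholders.
package main

import (
	"context"
	"log"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path to the management cluster's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "management.kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	secret, err := client.CoreV1().Secrets("default").Get(
		context.TODO(), "capz-fyjq3n-kubeconfig", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// client-go already base64-decodes Secret.Data, so no explicit decode step.
	if err := os.WriteFile("./kubeconfig", secret.Data["value"], 0600); err != nil {
		log.Fatal(err)
	}
}
```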
... skipping 35 lines ...

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 11156  100 11156    0     0   222k      0 --:--:-- --:--:-- --:--:--  222k
Downloading https://get.helm.sh/helm-v3.8.2-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
docker pull capzci.azurecr.io/azurefile-csi:e2e-fa940354ff091c41e803444a996ce8f5dad21646 || make container-all push-manifest
Error response from daemon: manifest for capzci.azurecr.io/azurefile-csi:e2e-fa940354ff091c41e803444a996ce8f5dad21646 not found: manifest unknown: manifest tagged by "e2e-fa940354ff091c41e803444a996ce8f5dad21646" is not found
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver'
CGO_ENABLED=0 GOOS=windows go build -a -ldflags "-X sigs.k8s.io/azurefile-csi-driver/pkg/azurefile.driverVersion=e2e-fa940354ff091c41e803444a996ce8f5dad21646 -X sigs.k8s.io/azurefile-csi-driver/pkg/azurefile.gitCommit=fa940354ff091c41e803444a996ce8f5dad21646 -X sigs.k8s.io/azurefile-csi-driver/pkg/azurefile.buildDate=2022-05-13T13:48:41Z -s -w -extldflags '-static'" -mod vendor -o _output/amd64/azurefileplugin.exe ./pkg/azurefileplugin
docker buildx rm container-builder || true
error: no builder "container-builder" found
docker buildx create --use --name=container-builder
container-builder
# enable qemu for arm64 build
# https://github.com/docker/buildx/issues/464#issuecomment-741507760
docker run --privileged --rm tonistiigi/binfmt --uninstall qemu-aarch64
Unable to find image 'tonistiigi/binfmt:latest' locally
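
The `go build -ldflags "-X ..."` step above stamps the driver version, git commit, and build date into package-level string variables at link time. A minimal sketch of that pattern, with placeholder package and variable names rather than the driver's real ones:

```go
// Minimal sketch of link-time version stamping, the pattern used by the
// -ldflags "-X ..." flags in the go build command above. The package and
// variable names here are placeholders, not the driver's actual source.
package main

import "fmt"

// These defaults are overwritten at link time, e.g.:
//   go build -ldflags "-X main.driverVersion=v1.2.3 -X main.gitCommit=abc123"
var (
	driverVersion = "N/A"
	gitCommit     = "N/A"
	buildDate     = "N/A"
)

func main() {
	fmt.Printf("Driver Version: %s\nGit Commit: %s\nBuild Date: %s\n",
		driverVersion, gitCommit, buildDate)
}
```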
... skipping 1801 lines ...
                    type: string
                type: object
                oneOf:
                - required: ["persistentVolumeClaimName"]
                - required: ["volumeSnapshotContentName"]
              volumeSnapshotClassName:
                description: 'VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default Volume SnapshotClasses: one default per CSI Driver. If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exist for a given CSI Driver and more than one have been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field.'
                type: string
            required:
            - source
            type: object
          status:
            description: status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.
... skipping 2 lines ...
                description: 'boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.'
                type: string
              creationTime:
                description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown.
                format: date-time
                type: string
              error:
                description: error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers (i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurs during the snapshot creation. Upon success, this error field will be cleared.
                properties:
                  message:
                    description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.'
                    type: string
                  time:
                    description: time is the timestamp when the error was encountered.
                    format: date-time
                    type: string
                type: object
              readyToUse:
                description: readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown.
                type: boolean
              restoreSize:
                type: string
                description: restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown.
                pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
                x-kubernetes-int-or-string: true
            type: object
        required:
        - spec
        type: object
... skipping 60 lines ...
                    type: string
                  volumeSnapshotContentName:
                    description: volumeSnapshotContentName specifies the name of a pre-existing VolumeSnapshotContent object representing an existing volume snapshot. This field should be set if the snapshot already exists and only needs a representation in Kubernetes. This field is immutable.
                    type: string
                type: object
              volumeSnapshotClassName:
                description: 'VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default Volume SnapshotClasses: one default per CSI Driver. If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exist for a given CSI Driver and more than one have been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field.'
                type: string
            required:
            - source
            type: object
          status:
            description: status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.
... skipping 2 lines ...
                description: 'boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.'
                type: string
              creationTime:
                description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown.
                format: date-time
                type: string
              error:
                description: error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers (i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurs during the snapshot creation. Upon success, this error field will be cleared.
                properties:
                  message:
                    description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.'
                    type: string
                  time:
                    description: time is the timestamp when the error was encountered.
                    format: date-time
                    type: string
                type: object
              readyToUse:
                description: readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown.
                type: boolean
              restoreSize:
                type: string
                description: restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown.
                pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
                x-kubernetes-int-or-string: true
            type: object
        required:
        - spec
        type: object
... skipping 254 lines ...
            description: status represents the current information of a snapshot.
            properties:
              creationTime:
                description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. On Unix, the command `date +%s%N` returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC.
                format: int64
                type: integer
              error:
                description: error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared.
                properties:
                  message:
                    description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.'
                    type: string
                  time:
                    description: time is the timestamp when the error was encountered.
                    format: date-time
                    type: string
                type: object
              readyToUse:
                description: readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown.
                type: boolean
              restoreSize:
                description: restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown.
                format: int64
                minimum: 0
                type: integer
              snapshotHandle:
                description: snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress.
                type: string
            type: object
        required:
        - spec
        type: object
    served: true
... skipping 108 lines ...
            description: status represents the current information of a snapshot.
            properties:
              creationTime:
                description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. On Unix, the command `date +%s%N` returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC.
                format: int64
                type: integer
              error:
                description: error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared.
                properties:
                  message:
                    description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.'
                    type: string
                  time:
                    description: time is the timestamp when the error was encountered.
                    format: date-time
                    type: string
                type: object
              readyToUse:
                description: readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown.
                type: boolean
              restoreSize:
                description: restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown.
                format: int64
                minimum: 0
                type: integer
              snapshotHandle:
                description: snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress.
                type: string
            type: object
        required:
        - spec
        type: object
    served: true
... skipping 938 lines ...
          image: "mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.4.0"
          args:
            - "-csi-address=$(ADDRESS)"
            - "-v=2"
            - "-leader-election"
            - "--leader-election-namespace=kube-system"
            - '-handle-volume-inuse-error=false'
            - '-timeout=120s'
            - '-feature-gates=RecoverVolumeExpansionFailure=true'
          env:
            - name: ADDRESS
              value: /csi/csi.sock
          imagePullPolicy: IfNotPresent
... skipping 209 lines ...
Git Commit: N/A
Go Version: go1.18.1
Platform: linux/amd64

Streaming logs below:
STEP: Building a namespace api object, basename azurefile
W0513 14:00:51.972759   37918 azure.go:78] InitializeCloudFromSecret: failed to get cloud config from secret /: failed to get secret /: resource name may not be empty
I0513 14:00:51.974284   37918 driver.go:93] Enabling controller service capability: CREATE_DELETE_VOLUME
I0513 14:00:51.974313   37918 driver.go:93] Enabling controller service capability: PUBLISH_UNPUBLISH_VOLUME
I0513 14:00:51.974330   37918 driver.go:93] Enabling controller service capability: CREATE_DELETE_SNAPSHOT
I0513 14:00:51.974337   37918 driver.go:93] Enabling controller service capability: EXPAND_VOLUME
I0513 14:00:51.974342   37918 driver.go:93] Enabling controller service capability: SINGLE_NODE_MULTI_WRITER
I0513 14:00:51.974351   37918 driver.go:112] Enabling volume access mode: SINGLE_NODE_WRITER
... skipping 120 lines ...
May 13 14:03:00.089: INFO: PersistentVolumeClaim pvc-b84mm found but phase is Pending instead of Bound.
May 13 14:03:02.193: INFO: PersistentVolumeClaim pvc-b84mm found and phase=Bound (1m39.051100972s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
May 13 14:03:02.518: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-46v8g" in namespace "azurefile-2540" to be "Succeeded or Failed"
May 13 14:03:02.624: INFO: Pod "azurefile-volume-tester-46v8g": Phase="Pending", Reason="", readiness=false. Elapsed: 106.678713ms
May 13 14:03:04.731: INFO: Pod "azurefile-volume-tester-46v8g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212913649s
May 13 14:03:06.836: INFO: Pod "azurefile-volume-tester-46v8g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.318172286s
May 13 14:03:08.941: INFO: Pod "azurefile-volume-tester-46v8g": Phase="Pending", Reason="", readiness=false. Elapsed: 6.423144253s
May 13 14:03:11.047: INFO: Pod "azurefile-volume-tester-46v8g": Phase="Pending", Reason="", readiness=false. Elapsed: 8.529649893s
May 13 14:03:13.152: INFO: Pod "azurefile-volume-tester-46v8g": Phase="Pending", Reason="", readiness=false. Elapsed: 10.633980193s
May 13 14:03:15.256: INFO: Pod "azurefile-volume-tester-46v8g": Phase="Pending", Reason="", readiness=false. Elapsed: 12.73818376s
May 13 14:03:17.362: INFO: Pod "azurefile-volume-tester-46v8g": Phase="Pending", Reason="", readiness=false. Elapsed: 14.844391823s
May 13 14:03:19.472: INFO: Pod "azurefile-volume-tester-46v8g": Phase="Pending", Reason="", readiness=false. Elapsed: 16.954633576s
May 13 14:03:21.584: INFO: Pod "azurefile-volume-tester-46v8g": Phase="Pending", Reason="", readiness=false. Elapsed: 19.065786762s
May 13 14:03:23.694: INFO: Pod "azurefile-volume-tester-46v8g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.176499549s
STEP: Saw pod success
May 13 14:03:23.694: INFO: Pod "azurefile-volume-tester-46v8g" satisfied condition "Succeeded or Failed"
May 13 14:03:23.694: INFO: deleting Pod "azurefile-2540"/"azurefile-volume-tester-46v8g"
May 13 14:03:23.818: INFO: Pod azurefile-volume-tester-46v8g has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-46v8g in namespace azurefile-2540
May 13 14:03:23.950: INFO: deleting PVC "azurefile-2540"/"pvc-b84mm"
May 13 14:03:23.950: INFO: Deleting PersistentVolumeClaim "pvc-b84mm"
... skipping 43 lines ...
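
Each test case above deploys a tester pod and polls it roughly every 2 seconds, for up to 15 minutes, until it reaches the "Succeeded or Failed" condition. A hedged client-go sketch of such a polling loop (not the e2e framework's actual helper) is shown below.

```go
// Hedged sketch of the "wait up to 15m for a pod to be Succeeded or Failed"
// polling seen in the e2e output above; not the test framework's real helper.
package e2eutil

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodCompletion polls every 2s, up to 15m, until the pod's phase is
// terminal, mirroring the ~2-second sampling interval in the log.
func waitForPodCompletion(client kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 15*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q\n", name, pod.Status.Phase)
		switch pod.Status.Phase {
		case corev1.PodSucceeded, corev1.PodFailed:
			return true, nil
		default:
			return false, nil
		}
	})
}
```

A caller would build a clientset from the workload-cluster kubeconfig extracted earlier and invoke, for example, waitForPodCompletion(clientset, "azurefile-2540", "azurefile-volume-tester-46v8g").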
May 13 14:03:45.589: INFO: PersistentVolumeClaim pvc-ch6b6 found but phase is Pending instead of Bound.
May 13 14:03:47.694: INFO: PersistentVolumeClaim pvc-ch6b6 found and phase=Bound (21.161853566s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
May 13 14:03:48.007: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-xgb8t" in namespace "azurefile-4728" to be "Succeeded or Failed"
May 13 14:03:48.111: INFO: Pod "azurefile-volume-tester-xgb8t": Phase="Pending", Reason="", readiness=false. Elapsed: 104.048833ms
May 13 14:03:50.221: INFO: Pod "azurefile-volume-tester-xgb8t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.214279819s
STEP: Saw pod success
May 13 14:03:50.221: INFO: Pod "azurefile-volume-tester-xgb8t" satisfied condition "Succeeded or Failed"
May 13 14:03:50.221: INFO: deleting Pod "azurefile-4728"/"azurefile-volume-tester-xgb8t"
May 13 14:03:50.329: INFO: Pod azurefile-volume-tester-xgb8t has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-xgb8t in namespace azurefile-4728
May 13 14:03:50.446: INFO: deleting PVC "azurefile-4728"/"pvc-ch6b6"
May 13 14:03:50.446: INFO: Deleting PersistentVolumeClaim "pvc-ch6b6"
... skipping 126 lines ...
May 13 14:05:31.769: INFO: PersistentVolumeClaim pvc-jr7hj found but phase is Pending instead of Bound.
May 13 14:05:33.875: INFO: PersistentVolumeClaim pvc-jr7hj found and phase=Bound (21.164507686s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with an error
May 13 14:05:34.199: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-vcw75" in namespace "azurefile-2790" to be "Error status code"
May 13 14:05:34.304: INFO: Pod "azurefile-volume-tester-vcw75": Phase="Pending", Reason="", readiness=false. Elapsed: 105.595831ms
May 13 14:05:36.416: INFO: Pod "azurefile-volume-tester-vcw75": Phase="Failed", Reason="", readiness=false. Elapsed: 2.216862528s
STEP: Saw pod failure
May 13 14:05:36.416: INFO: Pod "azurefile-volume-tester-vcw75" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
May 13 14:05:36.522: INFO: deleting Pod "azurefile-2790"/"azurefile-volume-tester-vcw75"
May 13 14:05:36.632: INFO: Pod azurefile-volume-tester-vcw75 has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azurefile-volume-tester-vcw75 in namespace azurefile-2790
May 13 14:05:36.752: INFO: deleting PVC "azurefile-2790"/"pvc-jr7hj"
... skipping 196 lines ...
May 13 14:07:32.150: INFO: PersistentVolumeClaim pvc-hbcrr found but phase is Pending instead of Bound.
May 13 14:07:34.256: INFO: PersistentVolumeClaim pvc-hbcrr found and phase=Bound (2.209939786s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
May 13 14:07:34.572: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-7hl7c" in namespace "azurefile-4538" to be "Succeeded or Failed"
May 13 14:07:34.676: INFO: Pod "azurefile-volume-tester-7hl7c": Phase="Pending", Reason="", readiness=false. Elapsed: 103.907969ms
May 13 14:07:36.787: INFO: Pod "azurefile-volume-tester-7hl7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.215164185s
STEP: Saw pod success
May 13 14:07:36.787: INFO: Pod "azurefile-volume-tester-7hl7c" satisfied condition "Succeeded or Failed"
STEP: resizing the pvc
STEP: sleep 30s waiting for resize complete
STEP: checking the resizing result
STEP: checking the resizing PV result
STEP: checking the resizing azurefile result
May 13 14:08:07.521: INFO: deleting Pod "azurefile-4538"/"azurefile-volume-tester-7hl7c"
... skipping 39 lines ...
May 13 14:08:10.423: INFO: PersistentVolumeClaim pvc-tcsgg found but phase is Pending instead of Bound.
May 13 14:08:12.528: INFO: PersistentVolumeClaim pvc-tcsgg found and phase=Bound (2.208832507s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
May 13 14:08:12.843: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-8l7n4" in namespace "azurefile-8266" to be "Succeeded or Failed"
May 13 14:08:12.947: INFO: Pod "azurefile-volume-tester-8l7n4": Phase="Pending", Reason="", readiness=false. Elapsed: 104.432419ms
May 13 14:08:15.053: INFO: Pod "azurefile-volume-tester-8l7n4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209911618s
May 13 14:08:17.159: INFO: Pod "azurefile-volume-tester-8l7n4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.316289219s
May 13 14:08:19.264: INFO: Pod "azurefile-volume-tester-8l7n4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.42131873s
May 13 14:08:21.370: INFO: Pod "azurefile-volume-tester-8l7n4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.526742684s
May 13 14:08:23.476: INFO: Pod "azurefile-volume-tester-8l7n4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.633516769s
May 13 14:08:25.588: INFO: Pod "azurefile-volume-tester-8l7n4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.745213856s
STEP: Saw pod success
May 13 14:08:25.588: INFO: Pod "azurefile-volume-tester-8l7n4" satisfied condition "Succeeded or Failed"
May 13 14:08:25.588: INFO: deleting Pod "azurefile-8266"/"azurefile-volume-tester-8l7n4"
May 13 14:08:26.019: INFO: Pod azurefile-volume-tester-8l7n4 has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-8l7n4 in namespace azurefile-8266
May 13 14:08:26.138: INFO: deleting PVC "azurefile-8266"/"pvc-tcsgg"
May 13 14:08:26.138: INFO: Deleting PersistentVolumeClaim "pvc-tcsgg"
... skipping 36 lines ...
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pod has 'FailedMount' event
May 13 14:08:33.463: INFO: deleting Pod "azurefile-4376"/"azurefile-volume-tester-ndlgq"
May 13 14:08:33.569: INFO: Error getting logs for pod azurefile-volume-tester-ndlgq: the server rejected our request for an unknown reason (get pods azurefile-volume-tester-ndlgq)
STEP: Deleting pod azurefile-volume-tester-ndlgq in namespace azurefile-4376
May 13 14:08:33.675: INFO: deleting PVC "azurefile-4376"/"pvc-57kfg"
May 13 14:08:33.675: INFO: Deleting PersistentVolumeClaim "pvc-57kfg"
STEP: waiting for claim's PV "pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d" to be deleted
May 13 14:08:33.999: INFO: Waiting up to 10m0s for PersistentVolume pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d to get deleted
May 13 14:08:34.105: INFO: PersistentVolume pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d found and phase=Bound (105.716347ms)
... skipping 57 lines ...
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pod has 'FailedMount' event
May 13 14:10:44.011: INFO: deleting Pod "azurefile-7996"/"azurefile-volume-tester-5h4lc"
May 13 14:10:44.128: INFO: Error getting logs for pod azurefile-volume-tester-5h4lc: the server rejected our request for an unknown reason (get pods azurefile-volume-tester-5h4lc)
STEP: Deleting pod azurefile-volume-tester-5h4lc in namespace azurefile-7996
May 13 14:10:44.233: INFO: deleting PVC "azurefile-7996"/"pvc-lh457"
May 13 14:10:44.233: INFO: Deleting PersistentVolumeClaim "pvc-lh457"
STEP: waiting for claim's PV "pvc-c725d72e-105b-4258-bd6c-bd7bda73a905" to be deleted
May 13 14:10:44.549: INFO: Waiting up to 10m0s for PersistentVolume pvc-c725d72e-105b-4258-bd6c-bd7bda73a905 to get deleted
May 13 14:10:44.651: INFO: PersistentVolume pvc-c725d72e-105b-4258-bd6c-bd7bda73a905 found and phase=Bound (102.922381ms)
... skipping 138 lines ...
May 13 14:14:13.177: INFO: PersistentVolumeClaim pvc-w9d78 found but phase is Pending instead of Bound.
May 13 14:14:15.282: INFO: PersistentVolumeClaim pvc-w9d78 found and phase=Bound (2.209337724s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with an error
May 13 14:14:15.596: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-hgxtk" in namespace "azurefile-2546" to be "Error status code"
May 13 14:14:15.699: INFO: Pod "azurefile-volume-tester-hgxtk": Phase="Pending", Reason="", readiness=false. Elapsed: 102.991885ms
May 13 14:14:17.803: INFO: Pod "azurefile-volume-tester-hgxtk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206902857s
May 13 14:14:19.915: INFO: Pod "azurefile-volume-tester-hgxtk": Phase="Failed", Reason="", readiness=false. Elapsed: 4.318878224s
STEP: Saw pod failure
May 13 14:14:19.915: INFO: Pod "azurefile-volume-tester-hgxtk" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
May 13 14:14:20.022: INFO: deleting Pod "azurefile-2546"/"azurefile-volume-tester-hgxtk"
May 13 14:14:20.127: INFO: Pod azurefile-volume-tester-hgxtk has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azurefile-volume-tester-hgxtk in namespace azurefile-2546
May 13 14:14:20.247: INFO: deleting PVC "azurefile-2546"/"pvc-w9d78"
... skipping 196 lines ...
May 13 14:14:48.640: INFO: PersistentVolumeClaim pvc-xskn4 found but phase is Pending instead of Bound.
May 13 14:14:50.745: INFO: PersistentVolumeClaim pvc-xskn4 found and phase=Bound (2.221100066s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
May 13 14:14:51.057: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-rcvhl" in namespace "azurefile-7726" to be "Succeeded or Failed"
May 13 14:14:51.165: INFO: Pod "azurefile-volume-tester-rcvhl": Phase="Pending", Reason="", readiness=false. Elapsed: 108.246636ms
May 13 14:14:53.274: INFO: Pod "azurefile-volume-tester-rcvhl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217576126s
May 13 14:14:55.385: INFO: Pod "azurefile-volume-tester-rcvhl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.327967072s
STEP: Saw pod success
May 13 14:14:55.385: INFO: Pod "azurefile-volume-tester-rcvhl" satisfied condition "Succeeded or Failed"
May 13 14:14:55.385: INFO: deleting Pod "azurefile-7726"/"azurefile-volume-tester-rcvhl"
May 13 14:14:55.490: INFO: Pod azurefile-volume-tester-rcvhl has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-rcvhl in namespace azurefile-7726
May 13 14:14:55.612: INFO: deleting PVC "azurefile-7726"/"pvc-xskn4"
May 13 14:14:55.612: INFO: Deleting PersistentVolumeClaim "pvc-xskn4"
... skipping 74 lines ...
May 13 14:15:01.363: INFO: PersistentVolumeClaim pvc-m9jmd found but phase is Pending instead of Bound.
May 13 14:15:03.471: INFO: PersistentVolumeClaim pvc-m9jmd found and phase=Bound (2.211954739s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
May 13 14:15:03.787: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-w976w" in namespace "azurefile-3086" to be "Succeeded or Failed"
May 13 14:15:03.890: INFO: Pod "azurefile-volume-tester-w976w": Phase="Pending", Reason="", readiness=false. Elapsed: 103.148541ms
May 13 14:15:06.000: INFO: Pod "azurefile-volume-tester-w976w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.21329812s
STEP: Saw pod success
May 13 14:15:06.000: INFO: Pod "azurefile-volume-tester-w976w" satisfied condition "Succeeded or Failed"
STEP: creating volume snapshot class
STEP: setting up the VolumeSnapshotClass
STEP: creating a VolumeSnapshotClass
STEP: taking snapshots
STEP: creating a VolumeSnapshot for pvc-m9jmd
STEP: waiting for VolumeSnapshot to be ready to use - volume-snapshot-27wsm
... skipping 32 lines ...
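
The snapshot test above creates a VolumeSnapshotClass, takes a VolumeSnapshot, and waits for its status.readyToUse field (documented in the CRDs earlier in this log) to become true. A hedged sketch of that readiness poll follows; the external-snapshotter client module version (v4) and the helper name are assumptions, not the test's actual code.

```go
// Hedged sketch of waiting for a VolumeSnapshot's status.readyToUse, as the
// snapshot test above does. The external-snapshotter client import path
// (client/v4 here) and the helper name are assumptions.
package e2eutil

import (
	"context"
	"time"

	snapclient "github.com/kubernetes-csi/external-snapshotter/client/v4/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForSnapshotReady polls until status.readyToUse is true, e.g. for
// "volume-snapshot-27wsm" in the log above.
func waitForSnapshotReady(client snapclient.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		vs, err := client.SnapshotV1().VolumeSnapshots(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return vs.Status != nil && vs.Status.ReadyToUse != nil && *vs.Status.ReadyToUse, nil
	})
}
```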
check the driver pods if restarts ...
======================================================================================
2022/05/13 14:15:23 Check successfully
May 13 14:15:23.991: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
2022/05/13 14:15:23 run script: test/utils/get_storage_account_secret_name.sh
2022/05/13 14:15:24 got output: azure-storage-account-f15605429459a41f7b2569a-secret
, error: <nil>
2022/05/13 14:15:24 got storage account secret name: azure-storage-account-f15605429459a41f7b2569a-secret
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: waiting for PVC to be in phase "Bound"
May 13 14:15:24.632: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-5czf5] to have phase Bound
May 13 14:15:24.736: INFO: PersistentVolumeClaim pvc-5czf5 found but phase is Pending instead of Bound.
May 13 14:15:26.840: INFO: PersistentVolumeClaim pvc-5czf5 found and phase=Bound (2.207400659s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
May 13 14:15:27.153: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-rsgxh" in namespace "azurefile-1387" to be "Succeeded or Failed"
May 13 14:15:27.256: INFO: Pod "azurefile-volume-tester-rsgxh": Phase="Pending", Reason="", readiness=false. Elapsed: 102.807872ms
May 13 14:15:29.367: INFO: Pod "azurefile-volume-tester-rsgxh": Phase="Running", Reason="", readiness=true. Elapsed: 2.213921525s
May 13 14:15:31.480: INFO: Pod "azurefile-volume-tester-rsgxh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.326660843s
STEP: Saw pod success
May 13 14:15:31.480: INFO: Pod "azurefile-volume-tester-rsgxh" satisfied condition "Succeeded or Failed"
May 13 14:15:31.480: INFO: deleting Pod "azurefile-1387"/"azurefile-volume-tester-rsgxh"
May 13 14:15:31.585: INFO: Pod azurefile-volume-tester-rsgxh has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-rsgxh in namespace azurefile-1387
May 13 14:15:31.703: INFO: deleting PVC "azurefile-1387"/"pvc-5czf5"
May 13 14:15:31.703: INFO: Deleting PersistentVolumeClaim "pvc-5czf5"
... skipping 43 lines ...
May 13 14:15:53.335: INFO: PersistentVolumeClaim pvc-rvfng found but phase is Pending instead of Bound.
May 13 14:15:55.439: INFO: PersistentVolumeClaim pvc-rvfng found and phase=Bound (21.152360935s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
May 13 14:15:55.753: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-c56pq" in namespace "azurefile-4547" to be "Succeeded or Failed"
May 13 14:15:55.857: INFO: Pod "azurefile-volume-tester-c56pq": Phase="Pending", Reason="", readiness=false. Elapsed: 103.5238ms
May 13 14:15:57.968: INFO: Pod "azurefile-volume-tester-c56pq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.21472582s
STEP: Saw pod success
May 13 14:15:57.968: INFO: Pod "azurefile-volume-tester-c56pq" satisfied condition "Succeeded or Failed"
May 13 14:15:57.968: INFO: deleting Pod "azurefile-4547"/"azurefile-volume-tester-c56pq"
May 13 14:15:58.080: INFO: Pod azurefile-volume-tester-c56pq has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-c56pq in namespace azurefile-4547
May 13 14:15:58.231: INFO: deleting PVC "azurefile-4547"/"pvc-rvfng"
May 13 14:15:58.231: INFO: Deleting PersistentVolumeClaim "pvc-rvfng"
... skipping 69 lines ...
check the driver pods if restarts ...
======================================================================================
2022/05/13 14:17:15 Check successfully
May 13 14:17:15.829: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
2022/05/13 14:17:15 run script: test/utils/get_storage_account_secret_name.sh
2022/05/13 14:17:16 got output: azure-storage-account-f15605429459a41f7b2569a-secret
, error: <nil>
2022/05/13 14:17:16 got storage account secret name: azure-storage-account-f15605429459a41f7b2569a-secret
STEP: Successfully provisioned AzureFile volume: "capz-fyjq3n#f15605429459a41f7b2569a#csi-inline-smb-volume##csi-inline-smb-volume#azurefile-4801"

STEP: deploying the pod
STEP: checking that the pods command exits with no error
May 13 14:17:18.123: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-2pvsb" in namespace "azurefile-4801" to be "Succeeded or Failed"
May 13 14:17:18.226: INFO: Pod "azurefile-volume-tester-2pvsb": Phase="Pending", Reason="", readiness=false. Elapsed: 103.139007ms
May 13 14:17:20.338: INFO: Pod "azurefile-volume-tester-2pvsb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.214986656s
STEP: Saw pod success
May 13 14:17:20.338: INFO: Pod "azurefile-volume-tester-2pvsb" satisfied condition "Succeeded or Failed"
May 13 14:17:20.338: INFO: deleting Pod "azurefile-4801"/"azurefile-volume-tester-2pvsb"
May 13 14:17:20.446: INFO: Pod azurefile-volume-tester-2pvsb has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-2pvsb in namespace azurefile-4801
May 13 14:17:20.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azurefile-4801" for this suite.
... skipping 42 lines ...
check the driver pods if restarts ...
======================================================================================
2022/05/13 14:17:23 Check successfully
May 13 14:17:23.861: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: creating secret smbcreds in namespace azurefile-1166
STEP: deploying the pod
STEP: checking that the pods command exits with no error
May 13 14:17:24.076: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-pgwmk" in namespace "azurefile-1166" to be "Succeeded or Failed"
May 13 14:17:24.179: INFO: Pod "azurefile-volume-tester-pgwmk": Phase="Pending", Reason="", readiness=false. Elapsed: 103.168647ms
May 13 14:17:26.289: INFO: Pod "azurefile-volume-tester-pgwmk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.212592014s
STEP: Saw pod success
May 13 14:17:26.289: INFO: Pod "azurefile-volume-tester-pgwmk" satisfied condition "Succeeded or Failed"
May 13 14:17:26.289: INFO: deleting Pod "azurefile-1166"/"azurefile-volume-tester-pgwmk"
May 13 14:17:26.407: INFO: Pod azurefile-volume-tester-pgwmk has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-pgwmk in namespace azurefile-1166
May 13 14:17:26.523: INFO: deleting Secret smbcreds
May 13 14:17:26.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 47 lines ...
May 13 14:18:21.268: INFO: PersistentVolumeClaim pvc-cttsh found but phase is Pending instead of Bound.
May 13 14:18:23.372: INFO: PersistentVolumeClaim pvc-cttsh found and phase=Bound (54.827132512s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
May 13 14:18:23.688: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-4pk8j" in namespace "azurefile-4415" to be "Succeeded or Failed"
May 13 14:18:23.790: INFO: Pod "azurefile-volume-tester-4pk8j": Phase="Pending", Reason="", readiness=false. Elapsed: 102.632442ms
May 13 14:18:25.901: INFO: Pod "azurefile-volume-tester-4pk8j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213101503s
May 13 14:18:28.010: INFO: Pod "azurefile-volume-tester-4pk8j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.322494323s
STEP: Saw pod success
May 13 14:18:28.010: INFO: Pod "azurefile-volume-tester-4pk8j" satisfied condition "Succeeded or Failed"
May 13 14:18:28.010: INFO: deleting Pod "azurefile-4415"/"azurefile-volume-tester-4pk8j"
May 13 14:18:28.118: INFO: Pod azurefile-volume-tester-4pk8j has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-4pk8j in namespace azurefile-4415
May 13 14:18:28.232: INFO: deleting PVC "azurefile-4415"/"pvc-cttsh"
May 13 14:18:28.232: INFO: Deleting PersistentVolumeClaim "pvc-cttsh"
... skipping 78 lines ...
May 13 14:20:03.530: INFO: PersistentVolumeClaim pvc-fkqr9 found but phase is Pending instead of Bound.
May 13 14:20:05.635: INFO: PersistentVolumeClaim pvc-fkqr9 found and phase=Bound (1m34.84395569s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
May 13 14:20:05.952: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-h2npt" in namespace "azurefile-6720" to be "Succeeded or Failed"
May 13 14:20:06.054: INFO: Pod "azurefile-volume-tester-h2npt": Phase="Pending", Reason="", readiness=false. Elapsed: 102.631982ms
May 13 14:20:08.160: INFO: Pod "azurefile-volume-tester-h2npt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207930165s
May 13 14:20:10.264: INFO: Pod "azurefile-volume-tester-h2npt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.312695275s
May 13 14:20:12.368: INFO: Pod "azurefile-volume-tester-h2npt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.416652223s
May 13 14:20:14.475: INFO: Pod "azurefile-volume-tester-h2npt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.522722381s
May 13 14:20:16.578: INFO: Pod "azurefile-volume-tester-h2npt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.626353893s
May 13 14:20:18.684: INFO: Pod "azurefile-volume-tester-h2npt": Phase="Pending", Reason="", readiness=false. Elapsed: 12.732249773s
May 13 14:20:20.788: INFO: Pod "azurefile-volume-tester-h2npt": Phase="Pending", Reason="", readiness=false. Elapsed: 14.835820593s
May 13 14:20:22.897: INFO: Pod "azurefile-volume-tester-h2npt": Phase="Pending", Reason="", readiness=false. Elapsed: 16.944989929s
May 13 14:20:25.011: INFO: Pod "azurefile-volume-tester-h2npt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.059558658s
STEP: Saw pod success
May 13 14:20:25.011: INFO: Pod "azurefile-volume-tester-h2npt" satisfied condition "Succeeded or Failed"
May 13 14:20:25.011: INFO: deleting Pod "azurefile-6720"/"azurefile-volume-tester-h2npt"
May 13 14:20:25.193: INFO: Pod azurefile-volume-tester-h2npt has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-h2npt in namespace azurefile-6720
May 13 14:20:25.320: INFO: deleting PVC "azurefile-6720"/"pvc-fkqr9"
May 13 14:20:25.320: INFO: Deleting PersistentVolumeClaim "pvc-fkqr9"
... skipping 98 lines ...
May 13 14:21:00.091: INFO: PersistentVolumeClaim pvc-kjbfx found but phase is Pending instead of Bound.
May 13 14:21:02.195: INFO: PersistentVolumeClaim pvc-kjbfx found and phase=Bound (2.208190612s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
May 13 14:21:02.508: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-k2fsf" in namespace "azurefile-4162" to be "Succeeded or Failed"
May 13 14:21:02.618: INFO: Pod "azurefile-volume-tester-k2fsf": Phase="Pending", Reason="", readiness=false. Elapsed: 110.2799ms
May 13 14:21:04.727: INFO: Pod "azurefile-volume-tester-k2fsf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219362828s
May 13 14:21:06.837: INFO: Pod "azurefile-volume-tester-k2fsf": Phase="Running", Reason="", readiness=true. Elapsed: 4.329535952s
May 13 14:21:08.947: INFO: Pod "azurefile-volume-tester-k2fsf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.438765134s
STEP: Saw pod success
May 13 14:21:08.947: INFO: Pod "azurefile-volume-tester-k2fsf" satisfied condition "Succeeded or Failed"
May 13 14:21:08.947: INFO: deleting Pod "azurefile-4162"/"azurefile-volume-tester-k2fsf"
May 13 14:21:09.060: INFO: Pod azurefile-volume-tester-k2fsf has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-k2fsf in namespace azurefile-4162
May 13 14:21:09.177: INFO: deleting PVC "azurefile-4162"/"pvc-kjbfx"
May 13 14:21:09.177: INFO: Deleting PersistentVolumeClaim "pvc-kjbfx"
... skipping 101 lines ...
May 13 14:21:19.148: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-6ggnq] to have phase Bound
May 13 14:21:19.251: INFO: PersistentVolumeClaim pvc-6ggnq found and phase=Bound (102.873282ms)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with an error
May 13 14:21:19.563: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-nvtd5" in namespace "azurefile-5320" to be "Error status code"
May 13 14:21:19.666: INFO: Pod "azurefile-volume-tester-nvtd5": Phase="Pending", Reason="", readiness=false. Elapsed: 102.96544ms
May 13 14:21:21.776: INFO: Pod "azurefile-volume-tester-nvtd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213461986s
May 13 14:21:23.887: INFO: Pod "azurefile-volume-tester-nvtd5": Phase="Failed", Reason="", readiness=false. Elapsed: 4.324156537s
STEP: Saw pod failure
May 13 14:21:23.887: INFO: Pod "azurefile-volume-tester-nvtd5" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
May 13 14:21:23.994: INFO: deleting Pod "azurefile-5320"/"azurefile-volume-tester-nvtd5"
May 13 14:21:24.111: INFO: Pod azurefile-volume-tester-nvtd5 has the following logs: /bin/sh: can't create /mnt/test-1/data: Read-only file system

STEP: Deleting pod azurefile-volume-tester-nvtd5 in namespace azurefile-5320
May 13 14:21:24.230: INFO: deleting PVC "azurefile-5320"/"pvc-6ggnq"
... skipping 37 lines ...
May 13 14:21:28.825: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-6x8nd] to have phase Bound
May 13 14:21:28.928: INFO: PersistentVolumeClaim pvc-6x8nd found and phase=Bound (103.632806ms)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
May 13 14:21:29.242: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-l9t6b" in namespace "azurefile-9103" to be "Succeeded or Failed"
May 13 14:21:29.345: INFO: Pod "azurefile-volume-tester-l9t6b": Phase="Pending", Reason="", readiness=false. Elapsed: 103.752619ms
May 13 14:21:31.455: INFO: Pod "azurefile-volume-tester-l9t6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213706377s
May 13 14:21:33.566: INFO: Pod "azurefile-volume-tester-l9t6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.324393049s
STEP: Saw pod success
May 13 14:21:33.566: INFO: Pod "azurefile-volume-tester-l9t6b" satisfied condition "Succeeded or Failed"
STEP: setting up the PV
STEP: creating a PV
STEP: setting up the PVC
STEP: creating a PVC
STEP: waiting for PVC to be in phase "Bound"
May 13 14:21:33.775: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-vcwnf] to have phase Bound
May 13 14:21:33.878: INFO: PersistentVolumeClaim pvc-vcwnf found and phase=Bound (102.885781ms)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
May 13 14:21:34.191: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-xjfpc" in namespace "azurefile-9103" to be "Succeeded or Failed"
May 13 14:21:34.294: INFO: Pod "azurefile-volume-tester-xjfpc": Phase="Pending", Reason="", readiness=false. Elapsed: 102.941762ms
May 13 14:21:36.405: INFO: Pod "azurefile-volume-tester-xjfpc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.214081293s
STEP: Saw pod success
May 13 14:21:36.406: INFO: Pod "azurefile-volume-tester-xjfpc" satisfied condition "Succeeded or Failed"
STEP: setting up the PV
STEP: creating a PV
STEP: setting up the PVC
STEP: creating a PVC
STEP: waiting for PVC to be in phase "Bound"
May 13 14:21:36.615: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-4f246] to have phase Bound
May 13 14:21:36.718: INFO: PersistentVolumeClaim pvc-4f246 found and phase=Bound (102.750676ms)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
May 13 14:21:37.030: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-p6jzv" in namespace "azurefile-9103" to be "Succeeded or Failed"
May 13 14:21:37.135: INFO: Pod "azurefile-volume-tester-p6jzv": Phase="Pending", Reason="", readiness=false. Elapsed: 105.311405ms
May 13 14:21:39.250: INFO: Pod "azurefile-volume-tester-p6jzv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.219572451s
STEP: Saw pod success
May 13 14:21:39.250: INFO: Pod "azurefile-volume-tester-p6jzv" satisfied condition "Succeeded or Failed"
STEP: setting up the PV
STEP: creating a PV
STEP: setting up the PVC
STEP: creating a PVC
STEP: waiting for PVC to be in phase "Bound"
May 13 14:21:39.460: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-scq2v] to have phase Bound
May 13 14:21:39.563: INFO: PersistentVolumeClaim pvc-scq2v found and phase=Bound (103.104248ms)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
May 13 14:21:39.873: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-nnhqq" in namespace "azurefile-9103" to be "Succeeded or Failed"
May 13 14:21:39.976: INFO: Pod "azurefile-volume-tester-nnhqq": Phase="Pending", Reason="", readiness=false. Elapsed: 102.945045ms
May 13 14:21:42.085: INFO: Pod "azurefile-volume-tester-nnhqq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.211801585s
STEP: Saw pod success
May 13 14:21:42.085: INFO: Pod "azurefile-volume-tester-nnhqq" satisfied condition "Succeeded or Failed"
STEP: setting up the PV
STEP: creating a PV
STEP: setting up the PVC
STEP: creating a PVC
STEP: waiting for PVC to be in phase "Bound"
May 13 14:21:42.294: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-6zkm9] to have phase Bound
May 13 14:21:42.399: INFO: PersistentVolumeClaim pvc-6zkm9 found and phase=Bound (104.639639ms)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
May 13 14:21:42.710: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-kgntg" in namespace "azurefile-9103" to be "Succeeded or Failed"
May 13 14:21:42.813: INFO: Pod "azurefile-volume-tester-kgntg": Phase="Pending", Reason="", readiness=false. Elapsed: 102.602608ms
May 13 14:21:44.923: INFO: Pod "azurefile-volume-tester-kgntg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.212821855s
STEP: Saw pod success
May 13 14:21:44.923: INFO: Pod "azurefile-volume-tester-kgntg" satisfied condition "Succeeded or Failed"
STEP: setting up the PV
STEP: creating a PV
STEP: setting up the PVC
STEP: creating a PVC
STEP: waiting for PVC to be in phase "Bound"
May 13 14:21:45.131: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-64nvd] to have phase Bound
May 13 14:21:45.234: INFO: PersistentVolumeClaim pvc-64nvd found and phase=Bound (103.54748ms)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
May 13 14:21:45.545: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-bz7nb" in namespace "azurefile-9103" to be "Succeeded or Failed"
May 13 14:21:45.648: INFO: Pod "azurefile-volume-tester-bz7nb": Phase="Pending", Reason="", readiness=false. Elapsed: 102.719505ms
May 13 14:21:47.759: INFO: Pod "azurefile-volume-tester-bz7nb": Phase="Running", Reason="", readiness=true. Elapsed: 2.213959115s
May 13 14:21:49.868: INFO: Pod "azurefile-volume-tester-bz7nb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.323017304s
STEP: Saw pod success
May 13 14:21:49.868: INFO: Pod "azurefile-volume-tester-bz7nb" satisfied condition "Succeeded or Failed"
May 13 14:21:49.868: INFO: deleting Pod "azurefile-9103"/"azurefile-volume-tester-bz7nb"
May 13 14:21:49.974: INFO: Pod azurefile-volume-tester-bz7nb has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-bz7nb in namespace azurefile-9103
May 13 14:21:50.093: INFO: deleting PVC "azurefile-9103"/"pvc-64nvd"
May 13 14:21:50.093: INFO: Deleting PersistentVolumeClaim "pvc-64nvd"
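
The repeated "Waiting up to timeout=5m0s for PersistentVolumeClaims [...] to have phase Bound" and "Waiting up to 15m0s for pod ... to be 'Succeeded or Failed'" lines above are the e2e framework polling object phases until the test volume is bound and the tester pod finishes. A minimal client-go sketch of those two wait loops, assuming nothing about the suite's actual helpers (the kubeconfig path, namespace, and object names below are illustrative placeholders taken from the log):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForPVCBound polls until the claim reports phase Bound, matching the
    // "found but phase is Pending instead of Bound" messages in the log.
    func waitForPVCBound(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if pvc.Status.Phase == corev1.ClaimBound {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("timed out waiting for PVC %s/%s to be Bound", ns, name)
    }

    // waitForPodSuccess polls the pod phase until Succeeded, Failed, or timeout.
    func waitForPodSuccess(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            switch pod.Status.Phase {
            case corev1.PodSucceeded:
                return nil
            case corev1.PodFailed:
                return fmt.Errorf("pod %s/%s failed", ns, name)
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("timed out waiting for pod %s/%s", ns, name)
    }

    func main() {
        // Illustrative wiring only; names are placeholders from the log above.
        cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.Background()
        if err := waitForPVCBound(ctx, cs, "azurefile-9103", "pvc-scq2v", 5*time.Minute); err != nil {
            panic(err)
        }
        if err := waitForPodSuccess(ctx, cs, "azurefile-9103", "azurefile-volume-tester-bz7nb", 15*time.Minute); err != nil {
            panic(err)
        }
    }
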
... skipping 143 lines ...
May 13 14:22:01.875: INFO: PersistentVolumeClaim pvc-jmlqc found but phase is Pending instead of Bound.
May 13 14:22:03.982: INFO: PersistentVolumeClaim pvc-jmlqc found and phase=Bound (2.209913617s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
May 13 14:22:04.296: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-n265k" in namespace "azurefile-8470" to be "Succeeded or Failed"
May 13 14:22:04.399: INFO: Pod "azurefile-volume-tester-n265k": Phase="Pending", Reason="", readiness=false. Elapsed: 103.414227ms
May 13 14:22:06.510: INFO: Pod "azurefile-volume-tester-n265k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.214342419s
STEP: Saw pod success
May 13 14:22:06.510: INFO: Pod "azurefile-volume-tester-n265k" satisfied condition "Succeeded or Failed"
May 13 14:22:06.510: INFO: deleting Pod "azurefile-8470"/"azurefile-volume-tester-n265k"
May 13 14:22:06.617: INFO: Pod azurefile-volume-tester-n265k has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-n265k in namespace azurefile-8470
May 13 14:22:06.740: INFO: deleting PVC "azurefile-8470"/"pvc-jmlqc"
May 13 14:22:06.740: INFO: Deleting PersistentVolumeClaim "pvc-jmlqc"
... skipping 33 lines ...
May 13 14:22:10.456: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-4w8c4] to have phase Bound
May 13 14:22:10.559: INFO: PersistentVolumeClaim pvc-4w8c4 found and phase=Bound (103.007916ms)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
May 13 14:22:10.871: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-jzsc7" in namespace "azurefile-7029" to be "Succeeded or Failed"
May 13 14:22:10.974: INFO: Pod "azurefile-volume-tester-jzsc7": Phase="Pending", Reason="", readiness=false. Elapsed: 102.638837ms
May 13 14:22:13.084: INFO: Pod "azurefile-volume-tester-jzsc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.212782353s
STEP: Saw pod success
May 13 14:22:13.084: INFO: Pod "azurefile-volume-tester-jzsc7" satisfied condition "Succeeded or Failed"
May 13 14:22:13.084: INFO: deleting Pod "azurefile-7029"/"azurefile-volume-tester-jzsc7"
May 13 14:22:13.199: INFO: Pod azurefile-volume-tester-jzsc7 has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-jzsc7 in namespace azurefile-7029
May 13 14:22:13.318: INFO: deleting PVC "azurefile-7029"/"pvc-4w8c4"
May 13 14:22:13.318: INFO: Deleting PersistentVolumeClaim "pvc-4w8c4"
... skipping 91 lines ...
Go Version: go1.18.1
Platform: linux/amd64

Streaming logs below:
I0513 14:00:46.339919       1 azurefile.go:272] driver userAgent: file.csi.azure.com/e2e-fa940354ff091c41e803444a996ce8f5dad21646 gc/go1.18.1 (amd64-linux) e2e-test
I0513 14:00:46.340294       1 azure.go:71] reading cloud config from secret kube-system/azure-cloud-provider
W0513 14:00:46.350625       1 azure.go:78] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found
I0513 14:00:46.350649       1 azure.go:83] could not read cloud config from secret kube-system/azure-cloud-provider
I0513 14:00:46.350660       1 azure.go:93] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json
I0513 14:00:46.350694       1 azure.go:101] read cloud config from file: /etc/kubernetes/azure.json successfully
I0513 14:00:46.351217       1 azure_auth.go:245] Using AzurePublicCloud environment
I0513 14:00:46.351269       1 azure_auth.go:130] azure: using client_id+client_secret to retrieve access token
I0513 14:00:46.351332       1 azure_diskclient.go:67] Azure DisksClient using API version: 2021-04-01
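
The startup sequence above (try the kube-system/azure-cloud-provider secret, warn when it is missing, then fall back to the file named by AZURE_CREDENTIAL_FILE, defaulting to /etc/kubernetes/azure.json) is a secret-then-file configuration fallback. A rough client-go sketch of that pattern, not the driver's actual code; the "cloud-config" secret key and the in-cluster wiring are assumptions for illustration:

    package main

    import (
        "context"
        "fmt"
        "os"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    const defaultCredFile = "/etc/kubernetes/azure.json"

    // loadCloudConfig tries the kube-system/azure-cloud-provider secret first
    // and falls back to the credential file, mirroring the startup log lines.
    // The "cloud-config" key name is an assumption made for this sketch.
    func loadCloudConfig(ctx context.Context, cs kubernetes.Interface) ([]byte, error) {
        secret, err := cs.CoreV1().Secrets("kube-system").Get(ctx, "azure-cloud-provider", metav1.GetOptions{})
        if err == nil {
            if data, ok := secret.Data["cloud-config"]; ok {
                return data, nil
            }
        }
        // Secret missing or unusable: fall back to the credential file.
        credFile := os.Getenv("AZURE_CREDENTIAL_FILE")
        if credFile == "" {
            credFile = defaultCredFile
        }
        return os.ReadFile(credFile)
    }

    func main() {
        cfg, err := rest.InClusterConfig() // the driver pods run in-cluster
        if err != nil {
            panic(err)
        }
        data, err := loadCloudConfig(context.Background(), kubernetes.NewForConfigOrDie(cfg))
        if err != nil {
            panic(err)
        }
        fmt.Printf("loaded %d bytes of cloud config\n", len(data))
    }
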
... skipping 72 lines ...
Go Version: go1.18.1
Platform: linux/amd64

Streaming logs below:
I0513 14:00:41.960524       1 azurefile.go:272] driver userAgent: file.csi.azure.com/e2e-fa940354ff091c41e803444a996ce8f5dad21646 gc/go1.18.1 (amd64-linux) e2e-test
I0513 14:00:41.960952       1 azure.go:71] reading cloud config from secret kube-system/azure-cloud-provider
W0513 14:00:41.974736       1 azure.go:78] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found
I0513 14:00:41.974755       1 azure.go:83] could not read cloud config from secret kube-system/azure-cloud-provider
I0513 14:00:41.974766       1 azure.go:93] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json
I0513 14:00:41.974797       1 azure.go:101] read cloud config from file: /etc/kubernetes/azure.json successfully
I0513 14:00:41.975338       1 azure_auth.go:245] Using AzurePublicCloud environment
I0513 14:00:41.975427       1 azure_auth.go:130] azure: using client_id+client_secret to retrieve access token
I0513 14:00:41.975519       1 azure_diskclient.go:67] Azure DisksClient using API version: 2021-04-01
... skipping 508 lines ...
I0513 14:18:28.448222       1 azurefile.go:780] remove tag(skip-matching) on account(f2c9a0c1f651b425eb36d94) resourceGroup(capz-fyjq3n)
I0513 14:18:28.490369       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=0.164845633 request="azurefile_csi_driver_controller_delete_volume" resource_group="capz-fyjq3n" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="file.csi.azure.com" volumeid="capz-fyjq3n#f2c9a0c1f651b425eb36d94#pvcn-9aeab0dc-300e-41a8-9573-bb9d91c8c514###azurefile-4415" result_code="succeeded"
I0513 14:18:28.490404       1 utils.go:83] GRPC response: {}
I0513 14:18:30.763993       1 utils.go:76] GRPC call: /csi.v1.Controller/CreateVolume
I0513 14:18:30.764145       1 utils.go:77] GRPC request: {"capacity_range":{"required_bytes":107374182400},"name":"pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2","parameters":{"csi.storage.k8s.io/pv/name":"pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2","csi.storage.k8s.io/pvc/name":"pvc-fkqr9","csi.storage.k8s.io/pvc/namespace":"azurefile-6720","mountPermissions":"0","networkEndpointType":"privateEndpoint","protocol":"nfs","rootSquashType":"AllSquash","skuName":"Premium_LRS"},"volume_capabilities":[{"AccessType":{"Mount":{"mount_flags":["nconnect=8","rsize=1048576","wsize=1048576"]}},"access_mode":{"mode":7}}]}
I0513 14:18:30.843947       1 azure_storageaccount.go:360] Creating private dns zone(privatelink.file.core.windows.net) in resourceGroup (capz-fyjq3n)
I0513 14:19:02.135123       1 azure_privatednsclient.go:56] Received error while waiting for completion for privatedns.put.request, resourceGroup: capz-fyjq3n, error: Code="PreconditionFailed" Message="The Zone privatelink.file.core.windows.net exists already and hence cannot be created again."
I0513 14:19:02.135166       1 azure_storageaccount.go:365] private dns zone(privatelink.file.core.windows.net) in resourceGroup (capz-fyjq3n) already exists
I0513 14:19:02.135176       1 azure_storageaccount.go:374] Creating virtual link for vnet(fe0bee15b622942e384b5ef-vnetlink) and DNS Zone(privatelink.file.core.windows.net) in resourceGroup(capz-fyjq3n)
I0513 14:19:03.554016       1 azure_storageaccount.go:252] azure - no matching account found, begin to create a new account fe0bee15b622942e384b5ef in resource group capz-fyjq3n, location: uksouth, accountType: Premium_LRS, accountKind: FileStorage, tags: map[k8s-azure-created-by:azure]
I0513 14:19:03.554060       1 azure_storageaccount.go:273] set AllowBlobPublicAccess(false) for storage account(fe0bee15b622942e384b5ef)
I0513 14:19:23.035677       1 azure_storageaccount.go:330] Creating private endpoint(fe0bee15b622942e384b5ef-pvtendpoint) for account (fe0bee15b622942e384b5ef)
I0513 14:20:03.927507       1 azure_storageaccount.go:387] Creating private DNS zone group(fe0bee15b622942e384b5ef-dnszonegroup) with privateEndpoint(fe0bee15b622942e384b5ef-pvtendpoint), vNetName(capz-fyjq3n-vnet), resourceGroup(capz-fyjq3n)
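
Note how the controller tolerates the PreconditionFailed "Zone ... exists already" response and carries on with the vnet link, storage account, private endpoint, and DNS zone group. A minimal sketch of that idempotent-create pattern under stated assumptions: createPrivateDNSZone below is a stand-in that always returns the "exists already" error from the log, not a real SDK call, and the string match is only one way to detect that condition.

    package main

    import (
        "context"
        "errors"
        "fmt"
        "strings"
    )

    // errZoneExists stands in for the service's "PreconditionFailed ... exists already" response.
    var errZoneExists = errors.New(`Code="PreconditionFailed" Message="The Zone privatelink.file.core.windows.net exists already and hence cannot be created again."`)

    // createPrivateDNSZone is a stand-in for the real create call; here it
    // always reports that the zone already exists, as in the log above.
    func createPrivateDNSZone(ctx context.Context, resourceGroup, zone string) error {
        return errZoneExists
    }

    // ensurePrivateDNSZone treats an "exists already" failure as success so the
    // provisioning sequence stays idempotent across retries, the same tolerance
    // the controller log shows at azure_storageaccount.go:365.
    func ensurePrivateDNSZone(ctx context.Context, resourceGroup, zone string) error {
        err := createPrivateDNSZone(ctx, resourceGroup, zone)
        if err == nil || strings.Contains(err.Error(), "exists already") {
            return nil
        }
        return err
    }

    func main() {
        if err := ensurePrivateDNSZone(context.Background(), "capz-fyjq3n", "privatelink.file.core.windows.net"); err != nil {
            panic(err)
        }
        fmt.Println("private DNS zone present")
    }
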
... skipping 153 lines ...
Go Version: go1.18.1
Platform: linux/amd64

Streaming logs below:
I0513 14:00:38.172792       1 azurefile.go:272] driver userAgent: file.csi.azure.com/e2e-fa940354ff091c41e803444a996ce8f5dad21646 gc/go1.18.1 (amd64-linux) e2e-test
I0513 14:00:38.173351       1 azure.go:71] reading cloud config from secret kube-system/azure-cloud-provider
W0513 14:00:38.224791       1 azure.go:78] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found
I0513 14:00:38.224823       1 azure.go:83] could not read cloud config from secret kube-system/azure-cloud-provider
I0513 14:00:38.224838       1 azure.go:93] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json
I0513 14:00:38.224873       1 azure.go:101] read cloud config from file: /etc/kubernetes/azure.json successfully
I0513 14:00:38.225717       1 azure_auth.go:245] Using AzurePublicCloud environment
I0513 14:00:38.225770       1 azure_auth.go:130] azure: using client_id+client_secret to retrieve access token
I0513 14:00:38.225847       1 azure_diskclient.go:67] Azure DisksClient using API version: 2021-04-01
... skipping 40 lines ...
Go Version: go1.18.1
Platform: linux/amd64

Streaming logs below:
I0513 14:00:35.965776       1 azurefile.go:272] driver userAgent: file.csi.azure.com/e2e-fa940354ff091c41e803444a996ce8f5dad21646 gc/go1.18.1 (amd64-linux) e2e-test
I0513 14:00:35.966235       1 azure.go:71] reading cloud config from secret kube-system/azure-cloud-provider
W0513 14:00:35.983118       1 azure.go:78] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found
I0513 14:00:35.983141       1 azure.go:83] could not read cloud config from secret kube-system/azure-cloud-provider
I0513 14:00:35.983152       1 azure.go:93] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json
I0513 14:00:35.983195       1 azure.go:101] read cloud config from file: /etc/kubernetes/azure.json successfully
I0513 14:00:35.983830       1 azure_auth.go:245] Using AzurePublicCloud environment
I0513 14:00:35.983881       1 azure_auth.go:130] azure: using client_id+client_secret to retrieve access token
I0513 14:00:35.983955       1 azure_diskclient.go:67] Azure DisksClient using API version: 2021-04-01
... skipping 56 lines ...
W0513 14:06:52.768065       1 mount_helper_common.go:34] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a1ae3944-e34e-40d4-abfa-5bb86a5c4696/proxy-mount
I0513 14:06:52.768077       1 nodeserver.go:361] NodeUnstageVolume: unmount volume capz-fyjq3n#f15605429459a41f7b2569a#pvc-a1ae3944-e34e-40d4-abfa-5bb86a5c4696###azurefile-5356 on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a1ae3944-e34e-40d4-abfa-5bb86a5c4696/globalmount successfully
I0513 14:06:52.768087       1 utils.go:83] GRPC response: {}
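
The NodeUnstageVolume above logs "Unmount skipped because path does not exist", i.e. unstaging is treated as idempotent when the mount path is already gone. A minimal Linux-only sketch of that check using only the standard library (not the driver's mount-utils helper; the path in main is a placeholder):

    //go:build linux

    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    // unmountIfPresent skips the unmount when the path is already gone,
    // mirroring the warning emitted by mount_helper_common.go above.
    func unmountIfPresent(path string) error {
        if _, err := os.Stat(path); os.IsNotExist(err) {
            fmt.Printf("Warning: Unmount skipped because path does not exist: %s\n", path)
            return nil
        }
        return syscall.Unmount(path, 0)
    }

    func main() {
        // Placeholder path, for illustration only.
        if err := unmountIfPresent("/var/lib/kubelet/plugins/kubernetes.io/csi/pv/example/globalmount"); err != nil {
            panic(err)
        }
    }
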
I0513 14:17:24.266190       1 utils.go:76] GRPC call: /csi.v1.Node/NodePublishVolume
I0513 14:17:24.266221       1 utils.go:77] GRPC request: {"target_path":"/var/lib/kubelet/pods/4ba77eb9-5a20-44dd-a2e3-ebe92f2b3345/volumes/kubernetes.io~csi/test-volume-1/mount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/ephemeral":"true","csi.storage.k8s.io/pod.name":"azurefile-volume-tester-pgwmk","csi.storage.k8s.io/pod.namespace":"azurefile-1166","csi.storage.k8s.io/pod.uid":"4ba77eb9-5a20-44dd-a2e3-ebe92f2b3345","csi.storage.k8s.io/serviceAccount.name":"default","mountOptions":"cache=singleclient","secretName":"smbcreds","server":"smb-server.default.svc.cluster.local","shareName":"share"},"volume_id":"csi-1f8ef2f8a7728989c39b9a83af388e8b73d5491e3d01d6ba9f535b710fd904d9"}
I0513 14:17:24.266337       1 nodeserver.go:68] NodePublishVolume: ephemeral volume(csi-1f8ef2f8a7728989c39b9a83af388e8b73d5491e3d01d6ba9f535b710fd904d9) mount on /var/lib/kubelet/pods/4ba77eb9-5a20-44dd-a2e3-ebe92f2b3345/volumes/kubernetes.io~csi/test-volume-1/mount, VolumeContext: map[csi.storage.k8s.io/ephemeral:true csi.storage.k8s.io/pod.name:azurefile-volume-tester-pgwmk csi.storage.k8s.io/pod.namespace:azurefile-1166 csi.storage.k8s.io/pod.uid:4ba77eb9-5a20-44dd-a2e3-ebe92f2b3345 csi.storage.k8s.io/serviceAccount.name:default getaccountkeyfromsecret:true mountOptions:cache=singleclient secretName:smbcreds secretnamespace:azurefile-1166 server:smb-server.default.svc.cluster.local shareName:share storageaccount:]
W0513 14:17:24.266360       1 azurefile.go:562] parsing volumeID(csi-1f8ef2f8a7728989c39b9a83af388e8b73d5491e3d01d6ba9f535b710fd904d9) return with error: error parsing volume id: "csi-1f8ef2f8a7728989c39b9a83af388e8b73d5491e3d01d6ba9f535b710fd904d9", should at least contain two #
I0513 14:17:24.274994       1 nodeserver.go:275] cifsMountPath(/var/lib/kubelet/pods/4ba77eb9-5a20-44dd-a2e3-ebe92f2b3345/volumes/kubernetes.io~csi/test-volume-1/mount) fstype() volumeID(csi-1f8ef2f8a7728989c39b9a83af388e8b73d5491e3d01d6ba9f535b710fd904d9) context(map[csi.storage.k8s.io/ephemeral:true csi.storage.k8s.io/pod.name:azurefile-volume-tester-pgwmk csi.storage.k8s.io/pod.namespace:azurefile-1166 csi.storage.k8s.io/pod.uid:4ba77eb9-5a20-44dd-a2e3-ebe92f2b3345 csi.storage.k8s.io/serviceAccount.name:default getaccountkeyfromsecret:true mountOptions:cache=singleclient secretName:smbcreds secretnamespace:azurefile-1166 server:smb-server.default.svc.cluster.local shareName:share storageaccount:]) mountflags([]) mountOptions([actimeo=30 cache=singleclient dir_mode=0777 file_mode=0777 mfsymlinks]) volumeMountGroup()
I0513 14:17:24.275345       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t cifs -o actimeo=30,cache=singleclient,dir_mode=0777,file_mode=0777,mfsymlinks,<masked> //smb-server.default.svc.cluster.local/share /var/lib/kubelet/pods/4ba77eb9-5a20-44dd-a2e3-ebe92f2b3345/volumes/kubernetes.io~csi/test-volume-1/mount)
I0513 14:17:24.329667       1 nodeserver.go:305] volume(csi-1f8ef2f8a7728989c39b9a83af388e8b73d5491e3d01d6ba9f535b710fd904d9) mount //smb-server.default.svc.cluster.local/share on /var/lib/kubelet/pods/4ba77eb9-5a20-44dd-a2e3-ebe92f2b3345/volumes/kubernetes.io~csi/test-volume-1/mount succeeded
I0513 14:17:24.329832       1 utils.go:83] GRPC response: {}
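
The NodePublishVolume request above carries csi.storage.k8s.io/ephemeral:true, so the SMB share is mounted as an inline ephemeral CSI volume declared directly in the pod spec rather than through a PV/PVC pair. A sketch of such a pod built with the corev1 API types; the image, command, and object names are illustrative, while the volumeAttributes keys are the ones visible in the request:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
            ObjectMeta: metav1.ObjectMeta{Name: "azurefile-volume-tester", Namespace: "azurefile-1166"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "tester",
                    Image:   "busybox", // illustrative; the real test image differs
                    Command: []string{"sh", "-c", "echo hello world > /mnt/test-1/data && cat /mnt/test-1/data"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "test-volume-1",
                        MountPath: "/mnt/test-1",
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "test-volume-1",
                    VolumeSource: corev1.VolumeSource{
                        // Inline ephemeral CSI volume, handled by NodePublishVolume without a PV.
                        CSI: &corev1.CSIVolumeSource{
                            Driver: "file.csi.azure.com",
                            VolumeAttributes: map[string]string{
                                "server":       "smb-server.default.svc.cluster.local",
                                "shareName":    "share",
                                "secretName":   "smbcreds",
                                "mountOptions": "cache=singleclient",
                            },
                        },
                    },
                }},
            },
        }
        out, err := yaml.Marshal(pod)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }
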
I0513 14:17:26.481889       1 utils.go:76] GRPC call: /csi.v1.Node/NodeUnpublishVolume
I0513 14:17:26.481915       1 utils.go:77] GRPC request: {"target_path":"/var/lib/kubelet/pods/4ba77eb9-5a20-44dd-a2e3-ebe92f2b3345/volumes/kubernetes.io~csi/test-volume-1/mount","volume_id":"csi-1f8ef2f8a7728989c39b9a83af388e8b73d5491e3d01d6ba9f535b710fd904d9"}
... skipping 41 lines ...
Go Version: go1.18.1
Platform: linux/amd64

Streaming logs below:
I0513 14:00:41.239704       1 azurefile.go:272] driver userAgent: file.csi.azure.com/e2e-fa940354ff091c41e803444a996ce8f5dad21646 gc/go1.18.1 (amd64-linux) e2e-test
I0513 14:00:41.240229       1 azure.go:71] reading cloud config from secret kube-system/azure-cloud-provider
W0513 14:00:41.253652       1 azure.go:78] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found
I0513 14:00:41.253703       1 azure.go:83] could not read cloud config from secret kube-system/azure-cloud-provider
I0513 14:00:41.253731       1 azure.go:93] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json
I0513 14:00:41.253799       1 azure.go:101] read cloud config from file: /etc/kubernetes/azure.json successfully
I0513 14:00:41.254555       1 azure_auth.go:245] Using AzurePublicCloud environment
I0513 14:00:41.254605       1 azure_auth.go:130] azure: using client_id+client_secret to retrieve access token
I0513 14:00:41.254668       1 azure_diskclient.go:67] Azure DisksClient using API version: 2021-04-01
... skipping 28 lines ...
I0513 14:00:42.296675       1 utils.go:77] GRPC request: {}
I0513 14:00:42.296778       1 utils.go:83] GRPC response: {"node_id":"capz-fyjq3n-md-0-9trgq"}
I0513 14:03:02.761897       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0513 14:03:02.761939       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["dir_mode=0777","file_mode=0777","uid=0","gid=0","mfsymlinks","cache=strict","nosharesock","vers=3.1.1"]}},"access_mode":{"mode":7}},"volume_context":{"accessTier":"Hot","csi.storage.k8s.io/pv/name":"pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c","csi.storage.k8s.io/pvc/name":"pvc-b84mm","csi.storage.k8s.io/pvc/namespace":"azurefile-2540","enableLargeFileshares":"true","networkEndpointType":"privateEndpoint","secretName":"secret-1652450482","secretNamespace":"kube-system","secretnamespace":"kube-system","server":"f6616e483dfe541a194534a.privatelink.file.core.windows.net","skuName":"Standard_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652450442365-8081-file.csi.azure.com"},"volume_id":"capz-fyjq3n#f6616e483dfe541a194534a#pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c###kube-system"}
I0513 14:03:02.773751       1 nodeserver.go:275] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount) fstype() volumeID(capz-fyjq3n#f6616e483dfe541a194534a#pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c###kube-system) context(map[accessTier:Hot csi.storage.k8s.io/pv/name:pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c csi.storage.k8s.io/pvc/name:pvc-b84mm csi.storage.k8s.io/pvc/namespace:azurefile-2540 enableLargeFileshares:true networkEndpointType:privateEndpoint secretName:secret-1652450482 secretNamespace:kube-system secretnamespace:kube-system server:f6616e483dfe541a194534a.privatelink.file.core.windows.net skuName:Standard_LRS storage.kubernetes.io/csiProvisionerIdentity:1652450442365-8081-file.csi.azure.com]) mountflags([dir_mode=0777 file_mode=0777 uid=0 gid=0 mfsymlinks cache=strict nosharesock vers=3.1.1]) mountOptions([dir_mode=0777 file_mode=0777 uid=0 gid=0 mfsymlinks cache=strict nosharesock vers=3.1.1 actimeo=30]) volumeMountGroup()
I0513 14:03:02.774156       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t cifs -o dir_mode=0777,file_mode=0777,uid=0,gid=0,mfsymlinks,cache=strict,nosharesock,vers=3.1.1,actimeo=30,<masked> //f6616e483dfe541a194534a.privatelink.file.core.windows.net/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount)
E0513 14:03:02.796072       1 mount_linux.go:195] Mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t cifs -o dir_mode=0777,file_mode=0777,uid=0,gid=0,mfsymlinks,cache=strict,nosharesock,vers=3.1.1,actimeo=30,<masked> //f6616e483dfe541a194534a.privatelink.file.core.windows.net/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount
Output: mount error: could not resolve address for f6616e483dfe541a194534a.privatelink.file.core.windows.net: Unknown error

E0513 14:03:02.796131       1 utils.go:81] GRPC error: rpc error: code = Internal desc = volume(capz-fyjq3n#f6616e483dfe541a194534a#pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c###kube-system) mount //f6616e483dfe541a194534a.privatelink.file.core.windows.net/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount failed with mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t cifs -o dir_mode=0777,file_mode=0777,uid=0,gid=0,mfsymlinks,cache=strict,nosharesock,vers=3.1.1,actimeo=30,<masked> //f6616e483dfe541a194534a.privatelink.file.core.windows.net/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount
Output: mount error: could not resolve address for f6616e483dfe541a194534a.privatelink.file.core.windows.net: Unknown error
I0513 14:03:03.344750       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0513 14:03:03.344789       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["dir_mode=0777","file_mode=0777","uid=0","gid=0","mfsymlinks","cache=strict","nosharesock","vers=3.1.1"]}},"access_mode":{"mode":7}},"volume_context":{"accessTier":"Hot","csi.storage.k8s.io/pv/name":"pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c","csi.storage.k8s.io/pvc/name":"pvc-b84mm","csi.storage.k8s.io/pvc/namespace":"azurefile-2540","enableLargeFileshares":"true","networkEndpointType":"privateEndpoint","secretName":"secret-1652450482","secretNamespace":"kube-system","secretnamespace":"kube-system","server":"f6616e483dfe541a194534a.privatelink.file.core.windows.net","skuName":"Standard_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652450442365-8081-file.csi.azure.com"},"volume_id":"capz-fyjq3n#f6616e483dfe541a194534a#pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c###kube-system"}
I0513 14:03:03.345156       1 nodeserver.go:275] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount) fstype() volumeID(capz-fyjq3n#f6616e483dfe541a194534a#pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c###kube-system) context(map[accessTier:Hot csi.storage.k8s.io/pv/name:pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c csi.storage.k8s.io/pvc/name:pvc-b84mm csi.storage.k8s.io/pvc/namespace:azurefile-2540 enableLargeFileshares:true networkEndpointType:privateEndpoint secretName:secret-1652450482 secretNamespace:kube-system secretnamespace:kube-system server:f6616e483dfe541a194534a.privatelink.file.core.windows.net skuName:Standard_LRS storage.kubernetes.io/csiProvisionerIdentity:1652450442365-8081-file.csi.azure.com]) mountflags([dir_mode=0777 file_mode=0777 uid=0 gid=0 mfsymlinks cache=strict nosharesock vers=3.1.1]) mountOptions([dir_mode=0777 file_mode=0777 uid=0 gid=0 mfsymlinks cache=strict nosharesock vers=3.1.1 actimeo=30]) volumeMountGroup()
I0513 14:03:03.345768       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t cifs -o dir_mode=0777,file_mode=0777,uid=0,gid=0,mfsymlinks,cache=strict,nosharesock,vers=3.1.1,actimeo=30,<masked> //f6616e483dfe541a194534a.privatelink.file.core.windows.net/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount)
E0513 14:03:03.359202       1 mount_linux.go:195] Mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t cifs -o dir_mode=0777,file_mode=0777,uid=0,gid=0,mfsymlinks,cache=strict,nosharesock,vers=3.1.1,actimeo=30,<masked> //f6616e483dfe541a194534a.privatelink.file.core.windows.net/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount
Output: mount error: could not resolve address for f6616e483dfe541a194534a.privatelink.file.core.windows.net: Unknown error

E0513 14:03:03.359254       1 utils.go:81] GRPC error: rpc error: code = Internal desc = volume(capz-fyjq3n#f6616e483dfe541a194534a#pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c###kube-system) mount //f6616e483dfe541a194534a.privatelink.file.core.windows.net/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount failed with mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t cifs -o dir_mode=0777,file_mode=0777,uid=0,gid=0,mfsymlinks,cache=strict,nosharesock,vers=3.1.1,actimeo=30,<masked> //f6616e483dfe541a194534a.privatelink.file.core.windows.net/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount
Output: mount error: could not resolve address for f6616e483dfe541a194534a.privatelink.file.core.windows.net: Unknown error
I0513 14:03:04.450663       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0513 14:03:04.450691       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["dir_mode=0777","file_mode=0777","uid=0","gid=0","mfsymlinks","cache=strict","nosharesock","vers=3.1.1"]}},"access_mode":{"mode":7}},"volume_context":{"accessTier":"Hot","csi.storage.k8s.io/pv/name":"pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c","csi.storage.k8s.io/pvc/name":"pvc-b84mm","csi.storage.k8s.io/pvc/namespace":"azurefile-2540","enableLargeFileshares":"true","networkEndpointType":"privateEndpoint","secretName":"secret-1652450482","secretNamespace":"kube-system","secretnamespace":"kube-system","server":"f6616e483dfe541a194534a.privatelink.file.core.windows.net","skuName":"Standard_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652450442365-8081-file.csi.azure.com"},"volume_id":"capz-fyjq3n#f6616e483dfe541a194534a#pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c###kube-system"}
I0513 14:03:04.450861       1 nodeserver.go:275] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount) fstype() volumeID(capz-fyjq3n#f6616e483dfe541a194534a#pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c###kube-system) context(map[accessTier:Hot csi.storage.k8s.io/pv/name:pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c csi.storage.k8s.io/pvc/name:pvc-b84mm csi.storage.k8s.io/pvc/namespace:azurefile-2540 enableLargeFileshares:true networkEndpointType:privateEndpoint secretName:secret-1652450482 secretNamespace:kube-system secretnamespace:kube-system server:f6616e483dfe541a194534a.privatelink.file.core.windows.net skuName:Standard_LRS storage.kubernetes.io/csiProvisionerIdentity:1652450442365-8081-file.csi.azure.com]) mountflags([dir_mode=0777 file_mode=0777 uid=0 gid=0 mfsymlinks cache=strict nosharesock vers=3.1.1]) mountOptions([dir_mode=0777 file_mode=0777 uid=0 gid=0 mfsymlinks cache=strict nosharesock vers=3.1.1 actimeo=30]) volumeMountGroup()
I0513 14:03:04.451203       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t cifs -o dir_mode=0777,file_mode=0777,uid=0,gid=0,mfsymlinks,cache=strict,nosharesock,vers=3.1.1,actimeo=30,<masked> //f6616e483dfe541a194534a.privatelink.file.core.windows.net/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount)
E0513 14:03:04.472328       1 mount_linux.go:195] Mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t cifs -o dir_mode=0777,file_mode=0777,uid=0,gid=0,mfsymlinks,cache=strict,nosharesock,vers=3.1.1,actimeo=30,<masked> //f6616e483dfe541a194534a.privatelink.file.core.windows.net/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount
Output: mount error: could not resolve address for f6616e483dfe541a194534a.privatelink.file.core.windows.net: Unknown error

E0513 14:03:04.472368       1 utils.go:81] GRPC error: rpc error: code = Internal desc = volume(capz-fyjq3n#f6616e483dfe541a194534a#pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c###kube-system) mount //f6616e483dfe541a194534a.privatelink.file.core.windows.net/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount failed with mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t cifs -o dir_mode=0777,file_mode=0777,uid=0,gid=0,mfsymlinks,cache=strict,nosharesock,vers=3.1.1,actimeo=30,<masked> //f6616e483dfe541a194534a.privatelink.file.core.windows.net/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount
Output: mount error: could not resolve address for f6616e483dfe541a194534a.privatelink.file.core.windows.net: Unknown error
I0513 14:03:06.565992       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0513 14:03:06.566032       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["dir_mode=0777","file_mode=0777","uid=0","gid=0","mfsymlinks","cache=strict","nosharesock","vers=3.1.1"]}},"access_mode":{"mode":7}},"volume_context":{"accessTier":"Hot","csi.storage.k8s.io/pv/name":"pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c","csi.storage.k8s.io/pvc/name":"pvc-b84mm","csi.storage.k8s.io/pvc/namespace":"azurefile-2540","enableLargeFileshares":"true","networkEndpointType":"privateEndpoint","secretName":"secret-1652450482","secretNamespace":"kube-system","secretnamespace":"kube-system","server":"f6616e483dfe541a194534a.privatelink.file.core.windows.net","skuName":"Standard_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652450442365-8081-file.csi.azure.com"},"volume_id":"capz-fyjq3n#f6616e483dfe541a194534a#pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c###kube-system"}
I0513 14:03:06.566347       1 nodeserver.go:275] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount) fstype() volumeID(capz-fyjq3n#f6616e483dfe541a194534a#pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c###kube-system) context(map[accessTier:Hot csi.storage.k8s.io/pv/name:pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c csi.storage.k8s.io/pvc/name:pvc-b84mm csi.storage.k8s.io/pvc/namespace:azurefile-2540 enableLargeFileshares:true networkEndpointType:privateEndpoint secretName:secret-1652450482 secretNamespace:kube-system secretnamespace:kube-system server:f6616e483dfe541a194534a.privatelink.file.core.windows.net skuName:Standard_LRS storage.kubernetes.io/csiProvisionerIdentity:1652450442365-8081-file.csi.azure.com]) mountflags([dir_mode=0777 file_mode=0777 uid=0 gid=0 mfsymlinks cache=strict nosharesock vers=3.1.1]) mountOptions([dir_mode=0777 file_mode=0777 uid=0 gid=0 mfsymlinks cache=strict nosharesock vers=3.1.1 actimeo=30]) volumeMountGroup()
I0513 14:03:06.566646       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t cifs -o dir_mode=0777,file_mode=0777,uid=0,gid=0,mfsymlinks,cache=strict,nosharesock,vers=3.1.1,actimeo=30,<masked> //f6616e483dfe541a194534a.privatelink.file.core.windows.net/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount)
E0513 14:03:06.578767       1 mount_linux.go:195] Mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t cifs -o dir_mode=0777,file_mode=0777,uid=0,gid=0,mfsymlinks,cache=strict,nosharesock,vers=3.1.1,actimeo=30,<masked> //f6616e483dfe541a194534a.privatelink.file.core.windows.net/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount
Output: mount error: could not resolve address for f6616e483dfe541a194534a.privatelink.file.core.windows.net: Unknown error

E0513 14:03:06.578807       1 utils.go:81] GRPC error: rpc error: code = Internal desc = volume(capz-fyjq3n#f6616e483dfe541a194534a#pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c###kube-system) mount //f6616e483dfe541a194534a.privatelink.file.core.windows.net/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount failed with mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t cifs -o dir_mode=0777,file_mode=0777,uid=0,gid=0,mfsymlinks,cache=strict,nosharesock,vers=3.1.1,actimeo=30,<masked> //f6616e483dfe541a194534a.privatelink.file.core.windows.net/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount
Output: mount error: could not resolve address for f6616e483dfe541a194534a.privatelink.file.core.windows.net: Unknown error
I0513 14:03:10.597975       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0513 14:03:10.598002       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["dir_mode=0777","file_mode=0777","uid=0","gid=0","mfsymlinks","cache=strict","nosharesock","vers=3.1.1"]}},"access_mode":{"mode":7}},"volume_context":{"accessTier":"Hot","csi.storage.k8s.io/pv/name":"pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c","csi.storage.k8s.io/pvc/name":"pvc-b84mm","csi.storage.k8s.io/pvc/namespace":"azurefile-2540","enableLargeFileshares":"true","networkEndpointType":"privateEndpoint","secretName":"secret-1652450482","secretNamespace":"kube-system","secretnamespace":"kube-system","server":"f6616e483dfe541a194534a.privatelink.file.core.windows.net","skuName":"Standard_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652450442365-8081-file.csi.azure.com"},"volume_id":"capz-fyjq3n#f6616e483dfe541a194534a#pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c###kube-system"}
I0513 14:03:10.598194       1 nodeserver.go:275] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount) fstype() volumeID(capz-fyjq3n#f6616e483dfe541a194534a#pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c###kube-system) context(map[accessTier:Hot csi.storage.k8s.io/pv/name:pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c csi.storage.k8s.io/pvc/name:pvc-b84mm csi.storage.k8s.io/pvc/namespace:azurefile-2540 enableLargeFileshares:true networkEndpointType:privateEndpoint secretName:secret-1652450482 secretNamespace:kube-system secretnamespace:kube-system server:f6616e483dfe541a194534a.privatelink.file.core.windows.net skuName:Standard_LRS storage.kubernetes.io/csiProvisionerIdentity:1652450442365-8081-file.csi.azure.com]) mountflags([dir_mode=0777 file_mode=0777 uid=0 gid=0 mfsymlinks cache=strict nosharesock vers=3.1.1]) mountOptions([dir_mode=0777 file_mode=0777 uid=0 gid=0 mfsymlinks cache=strict nosharesock vers=3.1.1 actimeo=30]) volumeMountGroup()
I0513 14:03:10.598487       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t cifs -o dir_mode=0777,file_mode=0777,uid=0,gid=0,mfsymlinks,cache=strict,nosharesock,vers=3.1.1,actimeo=30,<masked> //f6616e483dfe541a194534a.privatelink.file.core.windows.net/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount)
E0513 14:03:10.610445       1 mount_linux.go:195] Mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t cifs -o dir_mode=0777,file_mode=0777,uid=0,gid=0,mfsymlinks,cache=strict,nosharesock,vers=3.1.1,actimeo=30,<masked> //f6616e483dfe541a194534a.privatelink.file.core.windows.net/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount
Output: mount error: could not resolve address for f6616e483dfe541a194534a.privatelink.file.core.windows.net: Unknown error

E0513 14:03:10.610483       1 utils.go:81] GRPC error: rpc error: code = Internal desc = volume(capz-fyjq3n#f6616e483dfe541a194534a#pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c###kube-system) mount //f6616e483dfe541a194534a.privatelink.file.core.windows.net/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount failed with mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t cifs -o dir_mode=0777,file_mode=0777,uid=0,gid=0,mfsymlinks,cache=strict,nosharesock,vers=3.1.1,actimeo=30,<masked> //f6616e483dfe541a194534a.privatelink.file.core.windows.net/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount
Output: mount error: could not resolve address for f6616e483dfe541a194534a.privatelink.file.core.windows.net: Unknown error
I0513 14:03:18.665553       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0513 14:03:18.665576       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["dir_mode=0777","file_mode=0777","uid=0","gid=0","mfsymlinks","cache=strict","nosharesock","vers=3.1.1"]}},"access_mode":{"mode":7}},"volume_context":{"accessTier":"Hot","csi.storage.k8s.io/pv/name":"pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c","csi.storage.k8s.io/pvc/name":"pvc-b84mm","csi.storage.k8s.io/pvc/namespace":"azurefile-2540","enableLargeFileshares":"true","networkEndpointType":"privateEndpoint","secretName":"secret-1652450482","secretNamespace":"kube-system","secretnamespace":"kube-system","server":"f6616e483dfe541a194534a.privatelink.file.core.windows.net","skuName":"Standard_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652450442365-8081-file.csi.azure.com"},"volume_id":"capz-fyjq3n#f6616e483dfe541a194534a#pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c###kube-system"}
I0513 14:03:18.666023       1 nodeserver.go:275] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount) fstype() volumeID(capz-fyjq3n#f6616e483dfe541a194534a#pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c###kube-system) context(map[accessTier:Hot csi.storage.k8s.io/pv/name:pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c csi.storage.k8s.io/pvc/name:pvc-b84mm csi.storage.k8s.io/pvc/namespace:azurefile-2540 enableLargeFileshares:true networkEndpointType:privateEndpoint secretName:secret-1652450482 secretNamespace:kube-system secretnamespace:kube-system server:f6616e483dfe541a194534a.privatelink.file.core.windows.net skuName:Standard_LRS storage.kubernetes.io/csiProvisionerIdentity:1652450442365-8081-file.csi.azure.com]) mountflags([dir_mode=0777 file_mode=0777 uid=0 gid=0 mfsymlinks cache=strict nosharesock vers=3.1.1]) mountOptions([dir_mode=0777 file_mode=0777 uid=0 gid=0 mfsymlinks cache=strict nosharesock vers=3.1.1 actimeo=30]) volumeMountGroup()
I0513 14:03:18.666365       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t cifs -o dir_mode=0777,file_mode=0777,uid=0,gid=0,mfsymlinks,cache=strict,nosharesock,vers=3.1.1,actimeo=30,<masked> //f6616e483dfe541a194534a.privatelink.file.core.windows.net/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount)
I0513 14:03:18.892407       1 nodeserver.go:305] volume(capz-fyjq3n#f6616e483dfe541a194534a#pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c###kube-system) mount //f6616e483dfe541a194534a.privatelink.file.core.windows.net/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4b6bdcf-08c5-4491-a017-adb8824cc17c/globalmount succeeded
I0513 14:03:18.892441       1 utils.go:83] GRPC response: {}
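
The NodeStageVolume attempts above fail with "could not resolve address for ...privatelink.file.core.windows.net" and are retried at roughly doubling intervals (about 0.5s, 1s, 2s, 4s, 8s, per the timestamps) until the private-endpoint DNS record becomes resolvable at 14:03:18, after which the CIFS mount succeeds. A hedged sketch of that wait-for-DNS-with-backoff pattern, independent of how kubelet and the driver actually retry; the FQDN below is a placeholder:

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    // waitForDNS retries a lookup with doubling intervals until the privatelink
    // FQDN resolves or the context expires, roughly matching the retry spacing
    // visible in the log above.
    func waitForDNS(ctx context.Context, host string) error {
        delay := 500 * time.Millisecond
        for {
            if _, err := net.DefaultResolver.LookupHost(ctx, host); err == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("DNS for %s never resolved: %w", host, ctx.Err())
            case <-time.After(delay):
            }
            if delay < 30*time.Second {
                delay *= 2
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
        defer cancel()
        // Placeholder FQDN; the real name comes from the storage account's private endpoint.
        if err := waitForDNS(ctx, "example.privatelink.file.core.windows.net"); err != nil {
            panic(err)
        }
        fmt.Println("private endpoint DNS is resolvable; safe to attempt the CIFS mount")
    }

In the log, kubelet's own retry backoff achieves the same effect: the mount eventually succeeds once the private DNS zone record has propagated, without any explicit driver-side wait.
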
... skipping 242 lines ...
I0513 14:08:31.561491       1 mount_linux.go:487] Attempting to determine if disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd])
I0513 14:08:31.650242       1 mount_linux.go:490] Output: ""
I0513 14:08:31.650271       1 mount_linux.go:449] Disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd" appears to be unformatted, attempting to format as type: "ext4" with options: [-F -m0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd]
I0513 14:08:32.342421       1 mount_linux.go:459] Disk successfully formatted (mkfs): ext4 - /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount
I0513 14:08:32.342456       1 mount_linux.go:477] Attempting to mount disk /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount
I0513 14:08:32.342479       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount)
E0513 14:08:32.387845       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount: wrong fs type, bad option, bad superblock on /dev/loop2, missing codepage or helper program, or other error.

E0513 14:08:32.387888       1 utils.go:81] GRPC error: rpc error: code = Internal desc = could not format /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd
I0513 14:08:32.973850       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0513 14:08:32.973876       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d","csi.storage.k8s.io/pvc/name":"pvc-57kfg","csi.storage.k8s.io/pvc/namespace":"azurefile-4376","diskname":"pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd","fsType":"ext4","secretnamespace":"azurefile-4376","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652450442365-8081-file.csi.azure.com"},"volume_id":"capz-fyjq3n#fd234bc7a096348b089e640#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd##azurefile-4376"}
I0513 14:08:32.974080       1 nodeserver.go:275] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount) fstype(ext4) volumeID(capz-fyjq3n#fd234bc7a096348b089e640#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd##azurefile-4376) context(map[csi.storage.k8s.io/pv/name:pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d csi.storage.k8s.io/pvc/name:pvc-57kfg csi.storage.k8s.io/pvc/namespace:azurefile-4376 diskname:pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd fsType:ext4 secretnamespace:azurefile-4376 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652450442365-8081-file.csi.azure.com]) mountflags([invalid mount options]) mountOptions([dir_mode=0777,file_mode=0777,cache=strict,actimeo=30 nostrictsync file_mode=0777 actimeo=30 mfsymlinks]) volumeMountGroup()
I0513 14:08:32.983121       1 nodeserver.go:489] already mounted to target /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount
I0513 14:08:32.983163       1 nodeserver.go:282] NodeStageVolume: volume capz-fyjq3n#fd234bc7a096348b089e640#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd##azurefile-4376 is already mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount
I0513 14:08:32.983506       1 nodeserver.go:325] NodeStageVolume: volume capz-fyjq3n#fd234bc7a096348b089e640#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd##azurefile-4376 formatting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd with mount options([barrier=1 errors=remount-ro invalid loop mount noatime options])
I0513 14:08:32.983529       1 mount_linux.go:487] Attempting to determine if disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd])
I0513 14:08:33.061894       1 mount_linux.go:490] Output: "DEVNAME=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd\nTYPE=ext4\n"
I0513 14:08:33.061922       1 mount_linux.go:376] Checking for issues with fsck on disk: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd
I0513 14:08:33.181655       1 mount_linux.go:477] Attempting to mount disk /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount
I0513 14:08:33.181691       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount)
E0513 14:08:33.221107       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount: wrong fs type, bad option, bad superblock on /dev/loop2, missing codepage or helper program, or other error.

E0513 14:08:33.221185       1 utils.go:81] GRPC error: rpc error: code = Internal desc = could not format /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd
I0513 14:08:34.293990       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0513 14:08:34.294027       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d","csi.storage.k8s.io/pvc/name":"pvc-57kfg","csi.storage.k8s.io/pvc/namespace":"azurefile-4376","diskname":"pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd","fsType":"ext4","secretnamespace":"azurefile-4376","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652450442365-8081-file.csi.azure.com"},"volume_id":"capz-fyjq3n#fd234bc7a096348b089e640#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd##azurefile-4376"}
I0513 14:08:34.294207       1 nodeserver.go:275] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount) fstype(ext4) volumeID(capz-fyjq3n#fd234bc7a096348b089e640#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd##azurefile-4376) context(map[csi.storage.k8s.io/pv/name:pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d csi.storage.k8s.io/pvc/name:pvc-57kfg csi.storage.k8s.io/pvc/namespace:azurefile-4376 diskname:pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd fsType:ext4 secretnamespace:azurefile-4376 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652450442365-8081-file.csi.azure.com]) mountflags([invalid mount options]) mountOptions([dir_mode=0777,file_mode=0777,cache=strict,actimeo=30 nostrictsync file_mode=0777 actimeo=30 mfsymlinks]) volumeMountGroup()
I0513 14:08:34.303260       1 nodeserver.go:489] already mounted to target /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount
I0513 14:08:34.303290       1 nodeserver.go:282] NodeStageVolume: volume capz-fyjq3n#fd234bc7a096348b089e640#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd##azurefile-4376 is already mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount
I0513 14:08:34.303608       1 nodeserver.go:325] NodeStageVolume: volume capz-fyjq3n#fd234bc7a096348b089e640#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd##azurefile-4376 formatting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd with mount options([barrier=1 errors=remount-ro invalid loop mount noatime options])
I0513 14:08:34.303625       1 mount_linux.go:487] Attempting to determine if disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd])
I0513 14:08:34.387744       1 mount_linux.go:490] Output: "DEVNAME=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd\nTYPE=ext4\n"
I0513 14:08:34.387775       1 mount_linux.go:376] Checking for issues with fsck on disk: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd
I0513 14:08:34.505065       1 mount_linux.go:477] Attempting to mount disk /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount
I0513 14:08:34.505111       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount)
E0513 14:08:34.539224       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount: wrong fs type, bad option, bad superblock on /dev/loop2, missing codepage or helper program, or other error.

E0513 14:08:34.539262       1 utils.go:81] GRPC error: rpc error: code = Internal desc = could not format /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd
I0513 14:08:36.609419       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0513 14:08:36.609443       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d","csi.storage.k8s.io/pvc/name":"pvc-57kfg","csi.storage.k8s.io/pvc/namespace":"azurefile-4376","diskname":"pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd","fsType":"ext4","secretnamespace":"azurefile-4376","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652450442365-8081-file.csi.azure.com"},"volume_id":"capz-fyjq3n#fd234bc7a096348b089e640#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd##azurefile-4376"}
I0513 14:08:36.609630       1 nodeserver.go:275] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount) fstype(ext4) volumeID(capz-fyjq3n#fd234bc7a096348b089e640#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd##azurefile-4376) context(map[csi.storage.k8s.io/pv/name:pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d csi.storage.k8s.io/pvc/name:pvc-57kfg csi.storage.k8s.io/pvc/namespace:azurefile-4376 diskname:pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd fsType:ext4 secretnamespace:azurefile-4376 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652450442365-8081-file.csi.azure.com]) mountflags([invalid mount options]) mountOptions([dir_mode=0777,file_mode=0777,cache=strict,actimeo=30 nostrictsync file_mode=0777 actimeo=30 mfsymlinks]) volumeMountGroup()
I0513 14:08:36.618711       1 nodeserver.go:489] already mounted to target /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount
I0513 14:08:36.618752       1 nodeserver.go:282] NodeStageVolume: volume capz-fyjq3n#fd234bc7a096348b089e640#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd##azurefile-4376 is already mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount
I0513 14:08:36.619106       1 nodeserver.go:325] NodeStageVolume: volume capz-fyjq3n#fd234bc7a096348b089e640#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd##azurefile-4376 formatting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd with mount options([barrier=1 errors=remount-ro invalid loop mount noatime options])
I0513 14:08:36.619127       1 mount_linux.go:487] Attempting to determine if disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd])
I0513 14:08:36.704386       1 mount_linux.go:490] Output: "DEVNAME=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd\nTYPE=ext4\n"
I0513 14:08:36.704425       1 mount_linux.go:376] Checking for issues with fsck on disk: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd
I0513 14:08:36.825795       1 mount_linux.go:477] Attempting to mount disk /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount
I0513 14:08:36.825837       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount)
E0513 14:08:36.855163       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount: wrong fs type, bad option, bad superblock on /dev/loop2, missing codepage or helper program, or other error.

E0513 14:08:36.855399       1 utils.go:81] GRPC error: rpc error: code = Internal desc = could not format /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd
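Every retry above follows the same pattern: the NodeStageVolume request carries mount_flags ["invalid", "mount", "options"], blkid identifies the VHD on the CIFS proxy-mount as ext4, and the loopback mount then fails with exit status 32 because those three literal words are appended to the driver's default ext4 options and handed straight to mount -o. The flags spell out "invalid mount options", which looks like a deliberately injected negative case rather than a misconfiguration. The sketch below shows how such a flag list propagates to the mount command line via k8s.io/mount-utils, the library the mount_linux.go frames above appear to come from; the paths and option list are copied from the log, while the standalone program itself is only an illustration, not the driver's actual code.

package main

import (
	"fmt"

	mount "k8s.io/mount-utils"
	utilexec "k8s.io/utils/exec"
)

func main() {
	// Source VHD and staging target copied from the log lines above.
	source := "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd"
	target := "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount"

	// The CSI mount_flags ("invalid", "mount", "options") land in this list next
	// to the legitimate ext4 options, exactly as shown in the mount arguments above.
	options := []string{"barrier=1", "errors=remount-ro", "invalid", "loop", "mount", "noatime", "options"}

	m := mount.SafeFormatAndMount{Interface: mount.New(""), Exec: utilexec.New()}
	// FormatAndMount checks the source with blkid, runs fsck if it is already
	// formatted, then execs: mount -t ext4 -o <options joined by commas> source target.
	// With the bogus words in the option string the kernel refuses the mount and the
	// command exits with status 32, which surfaces as the GRPC Internal error above.
	if err := m.FormatAndMount(source, target, "ext4", options); err != nil {
		fmt.Println("expected failure:", err)
	}
}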
I0513 14:08:40.947542       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0513 14:08:40.947565       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d","csi.storage.k8s.io/pvc/name":"pvc-57kfg","csi.storage.k8s.io/pvc/namespace":"azurefile-4376","diskname":"pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd","fsType":"ext4","secretnamespace":"azurefile-4376","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652450442365-8081-file.csi.azure.com"},"volume_id":"capz-fyjq3n#fd234bc7a096348b089e640#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd##azurefile-4376"}
I0513 14:08:40.947735       1 nodeserver.go:275] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount) fstype(ext4) volumeID(capz-fyjq3n#fd234bc7a096348b089e640#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd##azurefile-4376) context(map[csi.storage.k8s.io/pv/name:pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d csi.storage.k8s.io/pvc/name:pvc-57kfg csi.storage.k8s.io/pvc/namespace:azurefile-4376 diskname:pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd fsType:ext4 secretnamespace:azurefile-4376 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652450442365-8081-file.csi.azure.com]) mountflags([invalid mount options]) mountOptions([dir_mode=0777,file_mode=0777,cache=strict,actimeo=30 nostrictsync file_mode=0777 actimeo=30 mfsymlinks]) volumeMountGroup()
I0513 14:08:40.956481       1 nodeserver.go:489] already mounted to target /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount
I0513 14:08:40.956524       1 nodeserver.go:282] NodeStageVolume: volume capz-fyjq3n#fd234bc7a096348b089e640#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd##azurefile-4376 is already mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount
I0513 14:08:40.956886       1 nodeserver.go:325] NodeStageVolume: volume capz-fyjq3n#fd234bc7a096348b089e640#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd##azurefile-4376 formatting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd with mount options([barrier=1 errors=remount-ro invalid loop mount noatime options])
I0513 14:08:40.956908       1 mount_linux.go:487] Attempting to determine if disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd])
I0513 14:08:41.038401       1 mount_linux.go:490] Output: "DEVNAME=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd\nTYPE=ext4\n"
I0513 14:08:41.038429       1 mount_linux.go:376] Checking for issues with fsck on disk: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd
I0513 14:08:41.152013       1 mount_linux.go:477] Attempting to mount disk /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount
I0513 14:08:41.152057       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount)
E0513 14:08:41.184507       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount: wrong fs type, bad option, bad superblock on /dev/loop2, missing codepage or helper program, or other error.

E0513 14:08:41.184545       1 utils.go:81] GRPC error: rpc error: code = Internal desc = could not format /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd
I0513 14:08:49.213156       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0513 14:08:49.213195       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d","csi.storage.k8s.io/pvc/name":"pvc-57kfg","csi.storage.k8s.io/pvc/namespace":"azurefile-4376","diskname":"pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd","fsType":"ext4","secretnamespace":"azurefile-4376","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652450442365-8081-file.csi.azure.com"},"volume_id":"capz-fyjq3n#fd234bc7a096348b089e640#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd##azurefile-4376"}
I0513 14:08:49.213403       1 nodeserver.go:275] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount) fstype(ext4) volumeID(capz-fyjq3n#fd234bc7a096348b089e640#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd##azurefile-4376) context(map[csi.storage.k8s.io/pv/name:pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d csi.storage.k8s.io/pvc/name:pvc-57kfg csi.storage.k8s.io/pvc/namespace:azurefile-4376 diskname:pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd fsType:ext4 secretnamespace:azurefile-4376 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652450442365-8081-file.csi.azure.com]) mountflags([invalid mount options]) mountOptions([dir_mode=0777,file_mode=0777,cache=strict,actimeo=30 nostrictsync file_mode=0777 actimeo=30 mfsymlinks]) volumeMountGroup()
I0513 14:08:49.223048       1 nodeserver.go:489] already mounted to target /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount
I0513 14:08:49.223128       1 nodeserver.go:282] NodeStageVolume: volume capz-fyjq3n#fd234bc7a096348b089e640#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd##azurefile-4376 is already mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount
I0513 14:08:49.223816       1 nodeserver.go:325] NodeStageVolume: volume capz-fyjq3n#fd234bc7a096348b089e640#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd##azurefile-4376 formatting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd with mount options([barrier=1 errors=remount-ro invalid loop mount noatime options])
I0513 14:08:49.223861       1 mount_linux.go:487] Attempting to determine if disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd])
I0513 14:08:49.307602       1 mount_linux.go:490] Output: "DEVNAME=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd\nTYPE=ext4\n"
I0513 14:08:49.307627       1 mount_linux.go:376] Checking for issues with fsck on disk: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd
I0513 14:08:49.422562       1 mount_linux.go:477] Attempting to mount disk /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount
I0513 14:08:49.422623       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount)
E0513 14:08:49.455748       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount: wrong fs type, bad option, bad superblock on /dev/loop2, missing codepage or helper program, or other error.

E0513 14:08:49.455794       1 utils.go:81] GRPC error: rpc error: code = Internal desc = could not format /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd
I0513 14:09:05.549607       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0513 14:09:05.549634       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d","csi.storage.k8s.io/pvc/name":"pvc-57kfg","csi.storage.k8s.io/pvc/namespace":"azurefile-4376","diskname":"pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd","fsType":"ext4","secretnamespace":"azurefile-4376","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652450442365-8081-file.csi.azure.com"},"volume_id":"capz-fyjq3n#fd234bc7a096348b089e640#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd##azurefile-4376"}
I0513 14:09:05.549963       1 nodeserver.go:275] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount) fstype(ext4) volumeID(capz-fyjq3n#fd234bc7a096348b089e640#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd##azurefile-4376) context(map[csi.storage.k8s.io/pv/name:pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d csi.storage.k8s.io/pvc/name:pvc-57kfg csi.storage.k8s.io/pvc/namespace:azurefile-4376 diskname:pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd fsType:ext4 secretnamespace:azurefile-4376 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652450442365-8081-file.csi.azure.com]) mountflags([invalid mount options]) mountOptions([dir_mode=0777,file_mode=0777,cache=strict,actimeo=30 nostrictsync file_mode=0777 actimeo=30 mfsymlinks]) volumeMountGroup()
I0513 14:09:05.567042       1 nodeserver.go:489] already mounted to target /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount
I0513 14:09:05.567085       1 nodeserver.go:282] NodeStageVolume: volume capz-fyjq3n#fd234bc7a096348b089e640#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd##azurefile-4376 is already mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount
I0513 14:09:05.567419       1 nodeserver.go:325] NodeStageVolume: volume capz-fyjq3n#fd234bc7a096348b089e640#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd##azurefile-4376 formatting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd with mount options([barrier=1 errors=remount-ro invalid loop mount noatime options])
I0513 14:09:05.567441       1 mount_linux.go:487] Attempting to determine if disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd])
I0513 14:09:05.653397       1 mount_linux.go:490] Output: "DEVNAME=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd\nTYPE=ext4\n"
I0513 14:09:05.653437       1 mount_linux.go:376] Checking for issues with fsck on disk: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd
I0513 14:09:05.771719       1 mount_linux.go:477] Attempting to mount disk /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount
I0513 14:09:05.771813       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount)
E0513 14:09:05.814858       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount: wrong fs type, bad option, bad superblock on /dev/loop2, missing codepage or helper program, or other error.

E0513 14:09:05.814900       1 utils.go:81] GRPC error: rpc error: code = Internal desc = could not format /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd
I0513 14:09:37.906216       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0513 14:09:37.906243       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d","csi.storage.k8s.io/pvc/name":"pvc-57kfg","csi.storage.k8s.io/pvc/namespace":"azurefile-4376","diskname":"pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd","fsType":"ext4","secretnamespace":"azurefile-4376","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652450442365-8081-file.csi.azure.com"},"volume_id":"capz-fyjq3n#fd234bc7a096348b089e640#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd##azurefile-4376"}
I0513 14:09:37.906452       1 nodeserver.go:275] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount) fstype(ext4) volumeID(capz-fyjq3n#fd234bc7a096348b089e640#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd##azurefile-4376) context(map[csi.storage.k8s.io/pv/name:pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d csi.storage.k8s.io/pvc/name:pvc-57kfg csi.storage.k8s.io/pvc/namespace:azurefile-4376 diskname:pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd fsType:ext4 secretnamespace:azurefile-4376 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652450442365-8081-file.csi.azure.com]) mountflags([invalid mount options]) mountOptions([dir_mode=0777,file_mode=0777,cache=strict,actimeo=30 nostrictsync file_mode=0777 actimeo=30 mfsymlinks]) volumeMountGroup()
I0513 14:09:37.924221       1 nodeserver.go:489] already mounted to target /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount
I0513 14:09:37.924267       1 nodeserver.go:282] NodeStageVolume: volume capz-fyjq3n#fd234bc7a096348b089e640#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd##azurefile-4376 is already mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount
I0513 14:09:37.924656       1 nodeserver.go:325] NodeStageVolume: volume capz-fyjq3n#fd234bc7a096348b089e640#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d#pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd##azurefile-4376 formatting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd with mount options([barrier=1 errors=remount-ro invalid loop mount noatime options])
I0513 14:09:37.924680       1 mount_linux.go:487] Attempting to determine if disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd])
I0513 14:09:38.013692       1 mount_linux.go:490] Output: "DEVNAME=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd\nTYPE=ext4\n"
I0513 14:09:38.013723       1 mount_linux.go:376] Checking for issues with fsck on disk: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd
I0513 14:09:38.132712       1 mount_linux.go:477] Attempting to mount disk /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount
I0513 14:09:38.132754       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount)
E0513 14:09:38.159501       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount: wrong fs type, bad option, bad superblock on /dev/loop2, missing codepage or helper program, or other error.

E0513 14:09:38.159541       1 utils.go:81] GRPC error: rpc error: code = Internal desc = could not format /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/globalmount and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46d0560f-d056-43d8-9bda-80c7c5fa679d/proxy-mount/pvcd-46d0560f-d056-43d8-9bda-80c7c5fa679d.vhd
I0513 14:10:41.935932       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0513 14:10:41.935959       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-c725d72e-105b-4258-bd6c-bd7bda73a905","csi.storage.k8s.io/pvc/name":"pvc-lh457","csi.storage.k8s.io/pvc/namespace":"azurefile-7996","diskname":"pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd","fsType":"ext4","secretnamespace":"azurefile-7996","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652450442365-8081-file.csi.azure.com"},"volume_id":"capz-fyjq3n#fd234bc7a096348b089e640#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd##azurefile-7996"}
I0513 14:10:41.936149       1 nodeserver.go:275] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount) fstype(ext4) volumeID(capz-fyjq3n#fd234bc7a096348b089e640#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd##azurefile-7996) context(map[csi.storage.k8s.io/pv/name:pvc-c725d72e-105b-4258-bd6c-bd7bda73a905 csi.storage.k8s.io/pvc/name:pvc-lh457 csi.storage.k8s.io/pvc/namespace:azurefile-7996 diskname:pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd fsType:ext4 secretnamespace:azurefile-7996 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652450442365-8081-file.csi.azure.com]) mountflags([invalid mount options]) mountOptions([dir_mode=0777,file_mode=0777,cache=strict,actimeo=30 nostrictsync actimeo=30 mfsymlinks file_mode=0777]) volumeMountGroup()
I0513 14:10:41.936685       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t cifs -o dir_mode=0777,file_mode=0777,cache=strict,actimeo=30,nostrictsync,actimeo=30,mfsymlinks,file_mode=0777,<masked> //fd234bc7a096348b089e640.file.core.windows.net/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount)
I0513 14:10:42.028229       1 nodeserver.go:305] volume(capz-fyjq3n#fd234bc7a096348b089e640#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd##azurefile-7996) mount //fd234bc7a096348b089e640.file.core.windows.net/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905 on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount succeeded
I0513 14:10:42.028679       1 nodeserver.go:325] NodeStageVolume: volume capz-fyjq3n#fd234bc7a096348b089e640#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd##azurefile-7996 formatting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd with mount options([barrier=1 errors=remount-ro invalid loop mount noatime options])
I0513 14:10:42.028703       1 mount_linux.go:487] Attempting to determine if disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd])
I0513 14:10:42.114803       1 mount_linux.go:490] Output: ""
I0513 14:10:42.114833       1 mount_linux.go:449] Disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd" appears to be unformatted, attempting to format as type: "ext4" with options: [-F -m0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd]
I0513 14:10:42.810117       1 mount_linux.go:459] Disk successfully formatted (mkfs): ext4 - /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount
I0513 14:10:42.810148       1 mount_linux.go:477] Attempting to mount disk /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount
I0513 14:10:42.810171       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount)
E0513 14:10:42.855825       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount: wrong fs type, bad option, bad superblock on /dev/loop2, missing codepage or helper program, or other error.

E0513 14:10:42.855878       1 utils.go:81] GRPC error: rpc error: code = Internal desc = could not format /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd
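The cycle at 14:10:42 is the one variation worth noting: blkid returned an empty string for the new VHD, so mount-utils formatted it with mkfs (-F -m0) before mounting, and the mount still failed with the same exit status 32. That rules out a stale or corrupt superblock and leaves the injected option list as the culprit. As a hypothetical counter-check (not something this job runs), mounting the same freshly formatted image with only the recognised options would be expected to succeed:

package main

import (
	"log"

	mount "k8s.io/mount-utils"
)

func main() {
	// Same freshly formatted VHD and staging target as in the log above.
	source := "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd"
	target := "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount"

	// Only options mount(8) and ext4 actually understand; the three words coming
	// from the request's mount_flags are dropped.
	opts := []string{"barrier=1", "errors=remount-ro", "loop", "noatime"}
	if err := mount.New("").Mount(source, target, "ext4", opts); err != nil {
		log.Fatalf("mount failed even without the bogus flags: %v", err)
	}
	log.Println("mount succeeded once the invalid flags were removed")
}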
I0513 14:10:43.447760       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0513 14:10:43.447784       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-c725d72e-105b-4258-bd6c-bd7bda73a905","csi.storage.k8s.io/pvc/name":"pvc-lh457","csi.storage.k8s.io/pvc/namespace":"azurefile-7996","diskname":"pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd","fsType":"ext4","secretnamespace":"azurefile-7996","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652450442365-8081-file.csi.azure.com"},"volume_id":"capz-fyjq3n#fd234bc7a096348b089e640#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd##azurefile-7996"}
I0513 14:10:43.447963       1 nodeserver.go:275] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount) fstype(ext4) volumeID(capz-fyjq3n#fd234bc7a096348b089e640#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd##azurefile-7996) context(map[csi.storage.k8s.io/pv/name:pvc-c725d72e-105b-4258-bd6c-bd7bda73a905 csi.storage.k8s.io/pvc/name:pvc-lh457 csi.storage.k8s.io/pvc/namespace:azurefile-7996 diskname:pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd fsType:ext4 secretnamespace:azurefile-7996 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652450442365-8081-file.csi.azure.com]) mountflags([invalid mount options]) mountOptions([dir_mode=0777,file_mode=0777,cache=strict,actimeo=30 nostrictsync file_mode=0777 actimeo=30 mfsymlinks]) volumeMountGroup()
I0513 14:10:43.457070       1 nodeserver.go:489] already mounted to target /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount
I0513 14:10:43.457112       1 nodeserver.go:282] NodeStageVolume: volume capz-fyjq3n#fd234bc7a096348b089e640#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd##azurefile-7996 is already mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount
I0513 14:10:43.457477       1 nodeserver.go:325] NodeStageVolume: volume capz-fyjq3n#fd234bc7a096348b089e640#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd##azurefile-7996 formatting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd with mount options([barrier=1 errors=remount-ro invalid loop mount noatime options])
I0513 14:10:43.457498       1 mount_linux.go:487] Attempting to determine if disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd])
I0513 14:10:43.541446       1 mount_linux.go:490] Output: "DEVNAME=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd\nTYPE=ext4\n"
I0513 14:10:43.541476       1 mount_linux.go:376] Checking for issues with fsck on disk: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd
I0513 14:10:43.682701       1 mount_linux.go:477] Attempting to mount disk /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount
I0513 14:10:43.682740       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount)
E0513 14:10:43.714473       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount: wrong fs type, bad option, bad superblock on /dev/loop2, missing codepage or helper program, or other error.

E0513 14:10:43.714515       1 utils.go:81] GRPC error: rpc error: code = Internal desc = could not format /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd
I0513 14:10:44.763419       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0513 14:10:44.763456       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-c725d72e-105b-4258-bd6c-bd7bda73a905","csi.storage.k8s.io/pvc/name":"pvc-lh457","csi.storage.k8s.io/pvc/namespace":"azurefile-7996","diskname":"pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd","fsType":"ext4","secretnamespace":"azurefile-7996","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652450442365-8081-file.csi.azure.com"},"volume_id":"capz-fyjq3n#fd234bc7a096348b089e640#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd##azurefile-7996"}
I0513 14:10:44.763719       1 nodeserver.go:275] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount) fstype(ext4) volumeID(capz-fyjq3n#fd234bc7a096348b089e640#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd##azurefile-7996) context(map[csi.storage.k8s.io/pv/name:pvc-c725d72e-105b-4258-bd6c-bd7bda73a905 csi.storage.k8s.io/pvc/name:pvc-lh457 csi.storage.k8s.io/pvc/namespace:azurefile-7996 diskname:pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd fsType:ext4 secretnamespace:azurefile-7996 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652450442365-8081-file.csi.azure.com]) mountflags([invalid mount options]) mountOptions([dir_mode=0777,file_mode=0777,cache=strict,actimeo=30 nostrictsync file_mode=0777 actimeo=30 mfsymlinks]) volumeMountGroup()
I0513 14:10:44.774737       1 nodeserver.go:489] already mounted to target /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount
I0513 14:10:44.774779       1 nodeserver.go:282] NodeStageVolume: volume capz-fyjq3n#fd234bc7a096348b089e640#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd##azurefile-7996 is already mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount
I0513 14:10:44.775155       1 nodeserver.go:325] NodeStageVolume: volume capz-fyjq3n#fd234bc7a096348b089e640#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd##azurefile-7996 formatting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd with mount options([barrier=1 errors=remount-ro invalid loop mount noatime options])
I0513 14:10:44.775180       1 mount_linux.go:487] Attempting to determine if disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd])
I0513 14:10:44.858727       1 mount_linux.go:490] Output: "DEVNAME=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd\nTYPE=ext4\n"
I0513 14:10:44.858753       1 mount_linux.go:376] Checking for issues with fsck on disk: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd
I0513 14:10:44.999027       1 mount_linux.go:477] Attempting to mount disk /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount
I0513 14:10:44.999076       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount)
E0513 14:10:45.032194       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount: wrong fs type, bad option, bad superblock on /dev/loop2, missing codepage or helper program, or other error.

E0513 14:10:45.032239       1 utils.go:81] GRPC error: rpc error: code = Internal desc = could not format /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd
I0513 14:10:47.083521       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0513 14:10:47.083544       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-c725d72e-105b-4258-bd6c-bd7bda73a905","csi.storage.k8s.io/pvc/name":"pvc-lh457","csi.storage.k8s.io/pvc/namespace":"azurefile-7996","diskname":"pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd","fsType":"ext4","secretnamespace":"azurefile-7996","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652450442365-8081-file.csi.azure.com"},"volume_id":"capz-fyjq3n#fd234bc7a096348b089e640#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd##azurefile-7996"}
I0513 14:10:47.083761       1 nodeserver.go:275] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount) fstype(ext4) volumeID(capz-fyjq3n#fd234bc7a096348b089e640#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd##azurefile-7996) context(map[csi.storage.k8s.io/pv/name:pvc-c725d72e-105b-4258-bd6c-bd7bda73a905 csi.storage.k8s.io/pvc/name:pvc-lh457 csi.storage.k8s.io/pvc/namespace:azurefile-7996 diskname:pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd fsType:ext4 secretnamespace:azurefile-7996 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652450442365-8081-file.csi.azure.com]) mountflags([invalid mount options]) mountOptions([dir_mode=0777,file_mode=0777,cache=strict,actimeo=30 nostrictsync mfsymlinks file_mode=0777 actimeo=30]) volumeMountGroup()
I0513 14:10:47.093593       1 nodeserver.go:489] already mounted to target /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount
I0513 14:10:47.093634       1 nodeserver.go:282] NodeStageVolume: volume capz-fyjq3n#fd234bc7a096348b089e640#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd##azurefile-7996 is already mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount
I0513 14:10:47.093999       1 nodeserver.go:325] NodeStageVolume: volume capz-fyjq3n#fd234bc7a096348b089e640#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd##azurefile-7996 formatting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd with mount options([barrier=1 errors=remount-ro invalid loop mount noatime options])
I0513 14:10:47.094020       1 mount_linux.go:487] Attempting to determine if disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd])
I0513 14:10:47.174502       1 mount_linux.go:490] Output: "DEVNAME=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd\nTYPE=ext4\n"
I0513 14:10:47.174534       1 mount_linux.go:376] Checking for issues with fsck on disk: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd
I0513 14:10:47.329394       1 mount_linux.go:477] Attempting to mount disk /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount
I0513 14:10:47.329453       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount)
E0513 14:10:47.368301       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount: wrong fs type, bad option, bad superblock on /dev/loop2, missing codepage or helper program, or other error.

E0513 14:10:47.368357       1 utils.go:81] GRPC error: rpc error: code = Internal desc = could not format /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd
I0513 14:10:51.420885       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0513 14:10:51.420911       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-c725d72e-105b-4258-bd6c-bd7bda73a905","csi.storage.k8s.io/pvc/name":"pvc-lh457","csi.storage.k8s.io/pvc/namespace":"azurefile-7996","diskname":"pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd","fsType":"ext4","secretnamespace":"azurefile-7996","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652450442365-8081-file.csi.azure.com"},"volume_id":"capz-fyjq3n#fd234bc7a096348b089e640#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd##azurefile-7996"}
I0513 14:10:51.421185       1 nodeserver.go:275] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount) fstype(ext4) volumeID(capz-fyjq3n#fd234bc7a096348b089e640#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd##azurefile-7996) context(map[csi.storage.k8s.io/pv/name:pvc-c725d72e-105b-4258-bd6c-bd7bda73a905 csi.storage.k8s.io/pvc/name:pvc-lh457 csi.storage.k8s.io/pvc/namespace:azurefile-7996 diskname:pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd fsType:ext4 secretnamespace:azurefile-7996 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652450442365-8081-file.csi.azure.com]) mountflags([invalid mount options]) mountOptions([dir_mode=0777,file_mode=0777,cache=strict,actimeo=30 nostrictsync file_mode=0777 actimeo=30 mfsymlinks]) volumeMountGroup()
I0513 14:10:51.430828       1 nodeserver.go:489] already mounted to target /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount
I0513 14:10:51.430901       1 nodeserver.go:282] NodeStageVolume: volume capz-fyjq3n#fd234bc7a096348b089e640#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd##azurefile-7996 is already mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount
I0513 14:10:51.431338       1 nodeserver.go:325] NodeStageVolume: volume capz-fyjq3n#fd234bc7a096348b089e640#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd##azurefile-7996 formatting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd with mount options([barrier=1 errors=remount-ro invalid loop mount noatime options])
I0513 14:10:51.431361       1 mount_linux.go:487] Attempting to determine if disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd])
I0513 14:10:51.514555       1 mount_linux.go:490] Output: "DEVNAME=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd\nTYPE=ext4\n"
I0513 14:10:51.514580       1 mount_linux.go:376] Checking for issues with fsck on disk: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd
I0513 14:10:51.660232       1 mount_linux.go:477] Attempting to mount disk /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount
I0513 14:10:51.660270       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount)
E0513 14:10:51.698326       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount: wrong fs type, bad option, bad superblock on /dev/loop2, missing codepage or helper program, or other error.

E0513 14:10:51.698360       1 utils.go:81] GRPC error: rpc error: code = Internal desc = could not format /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd
I0513 14:10:59.791009       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0513 14:10:59.791037       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-c725d72e-105b-4258-bd6c-bd7bda73a905","csi.storage.k8s.io/pvc/name":"pvc-lh457","csi.storage.k8s.io/pvc/namespace":"azurefile-7996","diskname":"pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd","fsType":"ext4","secretnamespace":"azurefile-7996","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652450442365-8081-file.csi.azure.com"},"volume_id":"capz-fyjq3n#fd234bc7a096348b089e640#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd##azurefile-7996"}
I0513 14:10:59.791428       1 nodeserver.go:275] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount) fstype(ext4) volumeID(capz-fyjq3n#fd234bc7a096348b089e640#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd##azurefile-7996) context(map[csi.storage.k8s.io/pv/name:pvc-c725d72e-105b-4258-bd6c-bd7bda73a905 csi.storage.k8s.io/pvc/name:pvc-lh457 csi.storage.k8s.io/pvc/namespace:azurefile-7996 diskname:pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd fsType:ext4 secretnamespace:azurefile-7996 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652450442365-8081-file.csi.azure.com]) mountflags([invalid mount options]) mountOptions([dir_mode=0777,file_mode=0777,cache=strict,actimeo=30 nostrictsync actimeo=30 mfsymlinks file_mode=0777]) volumeMountGroup()
I0513 14:10:59.801521       1 nodeserver.go:489] already mounted to target /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount
I0513 14:10:59.801625       1 nodeserver.go:282] NodeStageVolume: volume capz-fyjq3n#fd234bc7a096348b089e640#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd##azurefile-7996 is already mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount
I0513 14:10:59.802121       1 nodeserver.go:325] NodeStageVolume: volume capz-fyjq3n#fd234bc7a096348b089e640#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd##azurefile-7996 formatting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd with mount options([barrier=1 errors=remount-ro invalid loop mount noatime options])
I0513 14:10:59.802143       1 mount_linux.go:487] Attempting to determine if disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd])
I0513 14:10:59.896454       1 mount_linux.go:490] Output: "DEVNAME=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd\nTYPE=ext4\n"
I0513 14:10:59.896479       1 mount_linux.go:376] Checking for issues with fsck on disk: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd
I0513 14:11:00.041770       1 mount_linux.go:477] Attempting to mount disk /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount
I0513 14:11:00.041814       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount)
E0513 14:11:00.076147       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount: wrong fs type, bad option, bad superblock on /dev/loop2, missing codepage or helper program, or other error.

E0513 14:11:00.076189       1 utils.go:81] GRPC error: rpc error: code = Internal desc = could not format /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd
I0513 14:11:16.126031       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0513 14:11:16.126058       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-c725d72e-105b-4258-bd6c-bd7bda73a905","csi.storage.k8s.io/pvc/name":"pvc-lh457","csi.storage.k8s.io/pvc/namespace":"azurefile-7996","diskname":"pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd","fsType":"ext4","secretnamespace":"azurefile-7996","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652450442365-8081-file.csi.azure.com"},"volume_id":"capz-fyjq3n#fd234bc7a096348b089e640#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd##azurefile-7996"}
I0513 14:11:16.126315       1 nodeserver.go:275] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount) fstype(ext4) volumeID(capz-fyjq3n#fd234bc7a096348b089e640#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd##azurefile-7996) context(map[csi.storage.k8s.io/pv/name:pvc-c725d72e-105b-4258-bd6c-bd7bda73a905 csi.storage.k8s.io/pvc/name:pvc-lh457 csi.storage.k8s.io/pvc/namespace:azurefile-7996 diskname:pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd fsType:ext4 secretnamespace:azurefile-7996 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652450442365-8081-file.csi.azure.com]) mountflags([invalid mount options]) mountOptions([dir_mode=0777,file_mode=0777,cache=strict,actimeo=30 nostrictsync file_mode=0777 actimeo=30 mfsymlinks]) volumeMountGroup()
I0513 14:11:16.147092       1 nodeserver.go:489] already mounted to target /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount
I0513 14:11:16.147201       1 nodeserver.go:282] NodeStageVolume: volume capz-fyjq3n#fd234bc7a096348b089e640#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd##azurefile-7996 is already mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount
I0513 14:11:16.147625       1 nodeserver.go:325] NodeStageVolume: volume capz-fyjq3n#fd234bc7a096348b089e640#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd##azurefile-7996 formatting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd with mount options([barrier=1 errors=remount-ro invalid loop mount noatime options])
I0513 14:11:16.147647       1 mount_linux.go:487] Attempting to determine if disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd])
I0513 14:11:16.230840       1 mount_linux.go:490] Output: "DEVNAME=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd\nTYPE=ext4\n"
I0513 14:11:16.230869       1 mount_linux.go:376] Checking for issues with fsck on disk: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd
I0513 14:11:16.381730       1 mount_linux.go:477] Attempting to mount disk /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount
I0513 14:11:16.381826       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount)
E0513 14:11:16.417021       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount: wrong fs type, bad option, bad superblock on /dev/loop2, missing codepage or helper program, or other error.

E0513 14:11:16.417068       1 utils.go:81] GRPC error: rpc error: code = Internal desc = could not format /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd
I0513 14:11:48.498337       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0513 14:11:48.498364       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-c725d72e-105b-4258-bd6c-bd7bda73a905","csi.storage.k8s.io/pvc/name":"pvc-lh457","csi.storage.k8s.io/pvc/namespace":"azurefile-7996","diskname":"pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd","fsType":"ext4","secretnamespace":"azurefile-7996","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652450442365-8081-file.csi.azure.com"},"volume_id":"capz-fyjq3n#fd234bc7a096348b089e640#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd##azurefile-7996"}
I0513 14:11:48.498572       1 nodeserver.go:275] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount) fstype(ext4) volumeID(capz-fyjq3n#fd234bc7a096348b089e640#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd##azurefile-7996) context(map[csi.storage.k8s.io/pv/name:pvc-c725d72e-105b-4258-bd6c-bd7bda73a905 csi.storage.k8s.io/pvc/name:pvc-lh457 csi.storage.k8s.io/pvc/namespace:azurefile-7996 diskname:pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd fsType:ext4 secretnamespace:azurefile-7996 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652450442365-8081-file.csi.azure.com]) mountflags([invalid mount options]) mountOptions([dir_mode=0777,file_mode=0777,cache=strict,actimeo=30 nostrictsync file_mode=0777 actimeo=30 mfsymlinks]) volumeMountGroup()
I0513 14:11:48.515712       1 nodeserver.go:489] already mounted to target /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount
I0513 14:11:48.515754       1 nodeserver.go:282] NodeStageVolume: volume capz-fyjq3n#fd234bc7a096348b089e640#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd##azurefile-7996 is already mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount
I0513 14:11:48.516166       1 nodeserver.go:325] NodeStageVolume: volume capz-fyjq3n#fd234bc7a096348b089e640#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd##azurefile-7996 formatting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd with mount options([barrier=1 errors=remount-ro invalid loop mount noatime options])
I0513 14:11:48.516190       1 mount_linux.go:487] Attempting to determine if disk "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd])
I0513 14:11:48.607483       1 mount_linux.go:490] Output: "DEVNAME=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd\nTYPE=ext4\n"
I0513 14:11:48.607514       1 mount_linux.go:376] Checking for issues with fsck on disk: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd
I0513 14:11:48.750838       1 mount_linux.go:477] Attempting to mount disk /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount
I0513 14:11:48.750883       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount)
E0513 14:11:48.785242       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount: wrong fs type, bad option, bad superblock on /dev/loop2, missing codepage or helper program, or other error.

E0513 14:11:48.785288       1 utils.go:81] GRPC error: rpc error: code = Internal desc = could not format /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd
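
The three NodeStageVolume attempts above (14:10:59, 14:11:16, 14:11:48) all fail the same way: the volume capability carries mount_flags [invalid mount options], apparently an intentionally bogus set exercised by the e2e suite, and the driver passes them through to mount(8) together with its own ext4 flags (barrier=1, errors=remount-ro, loop, noatime, defaults). mount rejects the unknown options on the loop device and exits with status 32, and the driver surfaces that as an Internal gRPC error, which is why kubelet keeps retrying with growing backoff. Below is a minimal sketch of that failure translation in Go using the grpc-go status package; the helper name stageMount and the message format are illustrative, not the driver's actual code.

  package main

  import (
      "fmt"
      "os/exec"

      "google.golang.org/grpc/codes"
      "google.golang.org/grpc/status"
  )

  // stageMount runs mount(8) with caller-supplied options and converts a failure
  // into an Internal gRPC error, mirroring the shape of the errors in this log.
  // With the bogus "invalid,mount,options" flags the command exits with status 32.
  func stageMount(source, target, fsType, options string) error {
      out, err := exec.Command("mount", "-t", fsType, "-o", options, source, target).CombinedOutput()
      if err == nil {
          return nil
      }
      if ee, ok := err.(*exec.ExitError); ok {
          return status.Errorf(codes.Internal, "mount of %s at %s failed (exit %d): %s",
              source, target, ee.ExitCode(), out)
      }
      return status.Errorf(codes.Internal, "mount of %s at %s failed: %v: %s", source, target, err, out)
  }

  func main() {
      // Paths and options copied from the mount command logged at mount_linux.go:183 above.
      err := stageMount(
          "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/proxy-mount/pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd",
          "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c725d72e-105b-4258-bd6c-bd7bda73a905/globalmount",
          "ext4",
          "barrier=1,errors=remount-ro,invalid,loop,mount,noatime,options,defaults")
      fmt.Println(err)
  }
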
I0513 14:12:52.142416       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0513 14:12:52.142459       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-90470616-7848-4dfc-aca3-4e6e4bdf48b5/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-90470616-7848-4dfc-aca3-4e6e4bdf48b5","csi.storage.k8s.io/pvc/name":"pvc-r4xnp","csi.storage.k8s.io/pvc/namespace":"azurefile-59","diskname":"pvcd-90470616-7848-4dfc-aca3-4e6e4bdf48b5.vhd","fsType":"xfs","secretnamespace":"azurefile-59","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652450442365-8081-file.csi.azure.com"},"volume_id":"capz-fyjq3n#fd234bc7a096348b089e640#pvcd-90470616-7848-4dfc-aca3-4e6e4bdf48b5#pvcd-90470616-7848-4dfc-aca3-4e6e4bdf48b5.vhd##azurefile-59"}
I0513 14:12:52.142740       1 nodeserver.go:275] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-90470616-7848-4dfc-aca3-4e6e4bdf48b5/proxy-mount) fstype(xfs) volumeID(capz-fyjq3n#fd234bc7a096348b089e640#pvcd-90470616-7848-4dfc-aca3-4e6e4bdf48b5#pvcd-90470616-7848-4dfc-aca3-4e6e4bdf48b5.vhd##azurefile-59) context(map[csi.storage.k8s.io/pv/name:pvc-90470616-7848-4dfc-aca3-4e6e4bdf48b5 csi.storage.k8s.io/pvc/name:pvc-r4xnp csi.storage.k8s.io/pvc/namespace:azurefile-59 diskname:pvcd-90470616-7848-4dfc-aca3-4e6e4bdf48b5.vhd fsType:xfs secretnamespace:azurefile-59 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652450442365-8081-file.csi.azure.com]) mountflags([]) mountOptions([dir_mode=0777,file_mode=0777,cache=strict,actimeo=30 nostrictsync mfsymlinks file_mode=0777 actimeo=30]) volumeMountGroup()
I0513 14:12:52.143263       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t cifs -o dir_mode=0777,file_mode=0777,cache=strict,actimeo=30,nostrictsync,mfsymlinks,file_mode=0777,actimeo=30,<masked> //fd234bc7a096348b089e640.file.core.windows.net/pvcd-90470616-7848-4dfc-aca3-4e6e4bdf48b5 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-90470616-7848-4dfc-aca3-4e6e4bdf48b5/proxy-mount)
I0513 14:12:52.232412       1 nodeserver.go:305] volume(capz-fyjq3n#fd234bc7a096348b089e640#pvcd-90470616-7848-4dfc-aca3-4e6e4bdf48b5#pvcd-90470616-7848-4dfc-aca3-4e6e4bdf48b5.vhd##azurefile-59) mount //fd234bc7a096348b089e640.file.core.windows.net/pvcd-90470616-7848-4dfc-aca3-4e6e4bdf48b5 on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-90470616-7848-4dfc-aca3-4e6e4bdf48b5/proxy-mount succeeded
I0513 14:12:52.233086       1 nodeserver.go:325] NodeStageVolume: volume capz-fyjq3n#fd234bc7a096348b089e640#pvcd-90470616-7848-4dfc-aca3-4e6e4bdf48b5#pvcd-90470616-7848-4dfc-aca3-4e6e4bdf48b5.vhd##azurefile-59 formatting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-90470616-7848-4dfc-aca3-4e6e4bdf48b5/globalmount and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-90470616-7848-4dfc-aca3-4e6e4bdf48b5/proxy-mount/pvcd-90470616-7848-4dfc-aca3-4e6e4bdf48b5.vhd with mount options([loop])
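
The xfs volume staged at 14:12:52 shows the driver's VHD-backed ("disk") flow, the same one the failed ext4 attempts above were exercising: the Azure Files share is first mounted over SMB at the proxy-mount path, and the .vhd file inside the share is then formatted if necessary and loop-mounted at globalmount, which is why the earlier ext4 errors mention /dev/loop2. Below is a rough two-step sketch of that flow, assuming a Linux host with cifs-utils and root; the account credentials (masked in the log) and the mkfs step the driver performs when blkid finds no filesystem are omitted, and this is an illustration rather than the driver's code.

  package main

  import (
      "fmt"
      "os/exec"
      "path/filepath"
  )

  // run executes a command and folds its combined output into the error for context.
  func run(name string, args ...string) error {
      out, err := exec.Command(name, args...).CombinedOutput()
      if err != nil {
          return fmt.Errorf("%s %v: %w: %s", name, args, err, out)
      }
      return nil
  }

  func main() {
      base := "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-90470616-7848-4dfc-aca3-4e6e4bdf48b5"
      share := "//fd234bc7a096348b089e640.file.core.windows.net/pvcd-90470616-7848-4dfc-aca3-4e6e4bdf48b5"
      vhd := filepath.Join(base, "proxy-mount", "pvcd-90470616-7848-4dfc-aca3-4e6e4bdf48b5.vhd")

      // Step 1: SMB-mount the file share at proxy-mount (credentials intentionally left out here).
      if err := run("mount", "-t", "cifs", "-o",
          "dir_mode=0777,file_mode=0777,cache=strict,actimeo=30,mfsymlinks",
          share, filepath.Join(base, "proxy-mount")); err != nil {
          fmt.Println(err)
          return
      }
      // Step 2: loop-mount the VHD file that lives inside the share at globalmount.
      if err := run("mount", "-t", "xfs", "-o", "loop", vhd, filepath.Join(base, "globalmount")); err != nil {
          fmt.Println(err)
      }
  }
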
... skipping 412 lines ...
I0513 14:16:40.666331       1 mount_linux.go:183] Mounting cmd (mount) with arguments ( -o bind,remount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-99f19853-0efa-470c-8a99-7a1bbb955458/globalmount /var/lib/kubelet/pods/56291181-bf81-4196-8699-35519d43f7d5/volumes/kubernetes.io~csi/pvc-99f19853-0efa-470c-8a99-7a1bbb955458/mount)
I0513 14:16:40.669487       1 nodeserver.go:116] NodePublishVolume: mount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-99f19853-0efa-470c-8a99-7a1bbb955458/globalmount at /var/lib/kubelet/pods/56291181-bf81-4196-8699-35519d43f7d5/volumes/kubernetes.io~csi/pvc-99f19853-0efa-470c-8a99-7a1bbb955458/mount successfully
I0513 14:16:40.669505       1 utils.go:83] GRPC response: {}
I0513 14:17:18.269033       1 utils.go:76] GRPC call: /csi.v1.Node/NodePublishVolume
I0513 14:17:18.269059       1 utils.go:77] GRPC request: {"target_path":"/var/lib/kubelet/pods/84ade85a-3d58-4a2e-af9c-2a5526341c6f/volumes/kubernetes.io~csi/test-volume-1/mount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/ephemeral":"true","csi.storage.k8s.io/pod.name":"azurefile-volume-tester-2pvsb","csi.storage.k8s.io/pod.namespace":"azurefile-4801","csi.storage.k8s.io/pod.uid":"84ade85a-3d58-4a2e-af9c-2a5526341c6f","csi.storage.k8s.io/serviceAccount.name":"default","mountOptions":"cache=singleclient","secretName":"azure-storage-account-f15605429459a41f7b2569a-secret","server":"","shareName":"csi-inline-smb-volume"},"volume_id":"csi-b1d45b9d804ac301bcfb922c7d76013d426451191d8050a5dde24cb271c0efdf"}
I0513 14:17:18.269190       1 nodeserver.go:68] NodePublishVolume: ephemeral volume(csi-b1d45b9d804ac301bcfb922c7d76013d426451191d8050a5dde24cb271c0efdf) mount on /var/lib/kubelet/pods/84ade85a-3d58-4a2e-af9c-2a5526341c6f/volumes/kubernetes.io~csi/test-volume-1/mount, VolumeContext: map[csi.storage.k8s.io/ephemeral:true csi.storage.k8s.io/pod.name:azurefile-volume-tester-2pvsb csi.storage.k8s.io/pod.namespace:azurefile-4801 csi.storage.k8s.io/pod.uid:84ade85a-3d58-4a2e-af9c-2a5526341c6f csi.storage.k8s.io/serviceAccount.name:default getaccountkeyfromsecret:true mountOptions:cache=singleclient secretName:azure-storage-account-f15605429459a41f7b2569a-secret secretnamespace:azurefile-4801 server: shareName:csi-inline-smb-volume storageaccount:]
W0513 14:17:18.269213       1 azurefile.go:562] parsing volumeID(csi-b1d45b9d804ac301bcfb922c7d76013d426451191d8050a5dde24cb271c0efdf) return with error: error parsing volume id: "csi-b1d45b9d804ac301bcfb922c7d76013d426451191d8050a5dde24cb271c0efdf", should at least contain two #
I0513 14:17:18.273078       1 nodeserver.go:275] cifsMountPath(/var/lib/kubelet/pods/84ade85a-3d58-4a2e-af9c-2a5526341c6f/volumes/kubernetes.io~csi/test-volume-1/mount) fstype() volumeID(csi-b1d45b9d804ac301bcfb922c7d76013d426451191d8050a5dde24cb271c0efdf) context(map[csi.storage.k8s.io/ephemeral:true csi.storage.k8s.io/pod.name:azurefile-volume-tester-2pvsb csi.storage.k8s.io/pod.namespace:azurefile-4801 csi.storage.k8s.io/pod.uid:84ade85a-3d58-4a2e-af9c-2a5526341c6f csi.storage.k8s.io/serviceAccount.name:default getaccountkeyfromsecret:true mountOptions:cache=singleclient secretName:azure-storage-account-f15605429459a41f7b2569a-secret secretnamespace:azurefile-4801 server: shareName:csi-inline-smb-volume storageaccount:]) mountflags([]) mountOptions([actimeo=30 cache=singleclient dir_mode=0777 file_mode=0777 mfsymlinks]) volumeMountGroup()
I0513 14:17:18.273446       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t cifs -o actimeo=30,cache=singleclient,dir_mode=0777,file_mode=0777,mfsymlinks,<masked> //f15605429459a41f7b2569a.file.core.windows.net/csi-inline-smb-volume /var/lib/kubelet/pods/84ade85a-3d58-4a2e-af9c-2a5526341c6f/volumes/kubernetes.io~csi/test-volume-1/mount)
I0513 14:17:18.397648       1 nodeserver.go:305] volume(csi-b1d45b9d804ac301bcfb922c7d76013d426451191d8050a5dde24cb271c0efdf) mount //f15605429459a41f7b2569a.file.core.windows.net/csi-inline-smb-volume on /var/lib/kubelet/pods/84ade85a-3d58-4a2e-af9c-2a5526341c6f/volumes/kubernetes.io~csi/test-volume-1/mount succeeded
I0513 14:17:18.397680       1 utils.go:83] GRPC response: {}
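
The volume mounted at 14:17:18 is an inline (ephemeral) SMB volume: its volume_id is an opaque "csi-..." string rather than the driver's "#"-delimited form, which is exactly what the warning from azurefile.go:562 points out, so the share name, server and the secret holding the account key come from volume_context instead, and the share is SMB-mounted directly under the pod's volumes directory. Below is a small sketch of the ID-format check that warning describes, assuming the "#"-separated layout visible in the other requests in this log; field meanings beyond what the log shows are an assumption, and parseVolumeID is a made-up name.

  package main

  import (
      "fmt"
      "strings"
  )

  // parseVolumeID splits an azurefile-csi style volume ID on "#". Per the warning in
  // this log, a parseable ID "should at least contain two #", i.e. at least three
  // fields; anything shorter (like the inline volume's random "csi-..." ID) is not
  // parseable and has to be handled from volume_context instead.
  func parseVolumeID(id string) ([]string, error) {
      fields := strings.Split(id, "#")
      if len(fields) < 3 {
          return nil, fmt.Errorf("error parsing volume id: %q, should at least contain two #", id)
      }
      return fields, nil
  }

  func main() {
      for _, id := range []string{
          "capz-fyjq3n#fd234bc7a096348b089e640#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905#pvcd-c725d72e-105b-4258-bd6c-bd7bda73a905.vhd##azurefile-7996",
          "csi-b1d45b9d804ac301bcfb922c7d76013d426451191d8050a5dde24cb271c0efdf",
      } {
          fields, err := parseVolumeID(id)
          fmt.Println(len(fields), err)
      }
  }
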
I0513 14:17:21.295381       1 utils.go:76] GRPC call: /csi.v1.Node/NodeUnpublishVolume
I0513 14:17:21.295408       1 utils.go:77] GRPC request: {"target_path":"/var/lib/kubelet/pods/84ade85a-3d58-4a2e-af9c-2a5526341c6f/volumes/kubernetes.io~csi/test-volume-1/mount","volume_id":"csi-b1d45b9d804ac301bcfb922c7d76013d426451191d8050a5dde24cb271c0efdf"}
... skipping 20 lines ...
I0513 14:17:45.215639       1 utils.go:83] GRPC response: {}
I0513 14:18:23.787537       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0513 14:18:23.787560       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9aeab0dc-300e-41a8-9573-bb9d91c8c514/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["nconnect=8","rsize=1048576","wsize=1048576"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-9aeab0dc-300e-41a8-9573-bb9d91c8c514","csi.storage.k8s.io/pvc/name":"pvc-cttsh","csi.storage.k8s.io/pvc/namespace":"azurefile-4415","mountPermissions":"0755","protocol":"nfs","rootSquashType":"RootSquash","secretnamespace":"azurefile-4415","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652450442365-8081-file.csi.azure.com"},"volume_id":"capz-fyjq3n#f2c9a0c1f651b425eb36d94#pvcn-9aeab0dc-300e-41a8-9573-bb9d91c8c514###azurefile-4415"}
I0513 14:18:23.787721       1 nodeserver.go:275] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9aeab0dc-300e-41a8-9573-bb9d91c8c514/globalmount) fstype() volumeID(capz-fyjq3n#f2c9a0c1f651b425eb36d94#pvcn-9aeab0dc-300e-41a8-9573-bb9d91c8c514###azurefile-4415) context(map[csi.storage.k8s.io/pv/name:pvc-9aeab0dc-300e-41a8-9573-bb9d91c8c514 csi.storage.k8s.io/pvc/name:pvc-cttsh csi.storage.k8s.io/pvc/namespace:azurefile-4415 mountPermissions:0755 protocol:nfs rootSquashType:RootSquash secretnamespace:azurefile-4415 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652450442365-8081-file.csi.azure.com]) mountflags([nconnect=8 rsize=1048576 wsize=1048576]) mountOptions([nconnect=8 rsize=1048576 vers=4,minorversion=1,sec=sys wsize=1048576]) volumeMountGroup()
I0513 14:18:23.788179       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t nfs -o nconnect=8,rsize=1048576,vers=4,minorversion=1,sec=sys,wsize=1048576 f2c9a0c1f651b425eb36d94.file.core.windows.net:/f2c9a0c1f651b425eb36d94/pvcn-9aeab0dc-300e-41a8-9573-bb9d91c8c514 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9aeab0dc-300e-41a8-9573-bb9d91c8c514/globalmount)
I0513 14:18:24.393547       1 utils.go:218] chmod targetPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9aeab0dc-300e-41a8-9573-bb9d91c8c514/globalmount, mode:020000000777) with permissions(0755)
E0513 14:18:24.396832       1 utils.go:81] GRPC error: rpc error: code = Internal desc = chmod /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9aeab0dc-300e-41a8-9573-bb9d91c8c514/globalmount: operation not permitted
I0513 14:18:24.996434       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0513 14:18:24.996457       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9aeab0dc-300e-41a8-9573-bb9d91c8c514/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["nconnect=8","rsize=1048576","wsize=1048576"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-9aeab0dc-300e-41a8-9573-bb9d91c8c514","csi.storage.k8s.io/pvc/name":"pvc-cttsh","csi.storage.k8s.io/pvc/namespace":"azurefile-4415","mountPermissions":"0755","protocol":"nfs","rootSquashType":"RootSquash","secretnamespace":"azurefile-4415","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652450442365-8081-file.csi.azure.com"},"volume_id":"capz-fyjq3n#f2c9a0c1f651b425eb36d94#pvcn-9aeab0dc-300e-41a8-9573-bb9d91c8c514###azurefile-4415"}
I0513 14:18:24.996646       1 nodeserver.go:275] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9aeab0dc-300e-41a8-9573-bb9d91c8c514/globalmount) fstype() volumeID(capz-fyjq3n#f2c9a0c1f651b425eb36d94#pvcn-9aeab0dc-300e-41a8-9573-bb9d91c8c514###azurefile-4415) context(map[csi.storage.k8s.io/pv/name:pvc-9aeab0dc-300e-41a8-9573-bb9d91c8c514 csi.storage.k8s.io/pvc/name:pvc-cttsh csi.storage.k8s.io/pvc/namespace:azurefile-4415 mountPermissions:0755 protocol:nfs rootSquashType:RootSquash secretnamespace:azurefile-4415 skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652450442365-8081-file.csi.azure.com]) mountflags([nconnect=8 rsize=1048576 wsize=1048576]) mountOptions([nconnect=8 rsize=1048576 vers=4,minorversion=1,sec=sys wsize=1048576]) volumeMountGroup()
I0513 14:18:25.005083       1 nodeserver.go:489] already mounted to target /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9aeab0dc-300e-41a8-9573-bb9d91c8c514/globalmount
I0513 14:18:25.005138       1 nodeserver.go:282] NodeStageVolume: volume capz-fyjq3n#f2c9a0c1f651b425eb36d94#pvcn-9aeab0dc-300e-41a8-9573-bb9d91c8c514###azurefile-4415 is already mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9aeab0dc-300e-41a8-9573-bb9d91c8c514/globalmount
I0513 14:18:25.005182       1 utils.go:83] GRPC response: {}
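
The NFS volume staged at 14:18:23 requests mountPermissions 0755 on an export created with rootSquashType RootSquash. The mount itself succeeds, but the follow-up chmod of the staging directory is refused with "operation not permitted", presumably because root squashing maps the plugin's root user to an unprivileged identity on the server, so the whole NodeStageVolume call fails; the retry at 14:18:24.996 only returns success because the path is already mounted and the chmod is not reattempted. One way to avoid failing here is to stat the mount point first and only chmod when the permission bits actually differ from the requested ones; the sketch below illustrates that idea and is not the driver's own code.

  package main

  import (
      "fmt"
      "os"
  )

  // chmodIfNeeded sets perm on path only when the current permission bits differ,
  // so a mount point that already carries the requested mode (or one where chmod
  // would be refused but is unnecessary) does not fail the whole staging call.
  func chmodIfNeeded(path string, perm os.FileMode) error {
      info, err := os.Stat(path)
      if err != nil {
          return err
      }
      if info.Mode().Perm() == perm {
          return nil // already correct; skip chmod entirely
      }
      return os.Chmod(path, perm)
  }

  func main() {
      // Staging path and mode taken from the log lines above.
      err := chmodIfNeeded(
          "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9aeab0dc-300e-41a8-9573-bb9d91c8c514/globalmount",
          0o755)
      fmt.Println(err)
  }
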
... skipping 21 lines ...
I0513 14:18:27.750379       1 nodeserver.go:361] NodeUnstageVolume: unmount volume capz-fyjq3n#f2c9a0c1f651b425eb36d94#pvcn-9aeab0dc-300e-41a8-9573-bb9d91c8c514###azurefile-4415 on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9aeab0dc-300e-41a8-9573-bb9d91c8c514/globalmount successfully
I0513 14:18:27.750394       1 utils.go:83] GRPC response: {}
I0513 14:20:06.204975       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0513 14:20:06.205013       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["nconnect=8","rsize=1048576","wsize=1048576"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2","csi.storage.k8s.io/pvc/name":"pvc-fkqr9","csi.storage.k8s.io/pvc/namespace":"azurefile-6720","mountPermissions":"0","networkEndpointType":"privateEndpoint","protocol":"nfs","rootSquashType":"AllSquash","secretnamespace":"azurefile-6720","server":"fe0bee15b622942e384b5ef.privatelink.file.core.windows.net","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652450442365-8081-file.csi.azure.com"},"volume_id":"capz-fyjq3n#fe0bee15b622942e384b5ef#pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2###azurefile-6720"}
I0513 14:20:06.205285       1 nodeserver.go:275] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount) fstype() volumeID(capz-fyjq3n#fe0bee15b622942e384b5ef#pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2###azurefile-6720) context(map[csi.storage.k8s.io/pv/name:pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2 csi.storage.k8s.io/pvc/name:pvc-fkqr9 csi.storage.k8s.io/pvc/namespace:azurefile-6720 mountPermissions:0 networkEndpointType:privateEndpoint protocol:nfs rootSquashType:AllSquash secretnamespace:azurefile-6720 server:fe0bee15b622942e384b5ef.privatelink.file.core.windows.net skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652450442365-8081-file.csi.azure.com]) mountflags([nconnect=8 rsize=1048576 wsize=1048576]) mountOptions([nconnect=8 rsize=1048576 vers=4,minorversion=1,sec=sys wsize=1048576]) volumeMountGroup()
I0513 14:20:06.205957       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t nfs -o nconnect=8,rsize=1048576,vers=4,minorversion=1,sec=sys,wsize=1048576 fe0bee15b622942e384b5ef.privatelink.file.core.windows.net:/fe0bee15b622942e384b5ef/pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount)
E0513 14:20:06.239924       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs -o nconnect=8,rsize=1048576,vers=4,minorversion=1,sec=sys,wsize=1048576 fe0bee15b622942e384b5ef.privatelink.file.core.windows.net:/fe0bee15b622942e384b5ef/pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount
Output: mount.nfs: Failed to resolve server fe0bee15b622942e384b5ef.privatelink.file.core.windows.net: Name or service not known

E0513 14:20:06.239961       1 utils.go:81] GRPC error: rpc error: code = Internal desc = volume(capz-fyjq3n#fe0bee15b622942e384b5ef#pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2###azurefile-6720) mount fe0bee15b622942e384b5ef.privatelink.file.core.windows.net:/fe0bee15b622942e384b5ef/pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2 on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount failed with mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs -o nconnect=8,rsize=1048576,vers=4,minorversion=1,sec=sys,wsize=1048576 fe0bee15b622942e384b5ef.privatelink.file.core.windows.net:/fe0bee15b622942e384b5ef/pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount
Output: mount.nfs: Failed to resolve server fe0bee15b622942e384b5ef.privatelink.file.core.windows.net: Name or service not known
I0513 14:20:06.808217       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0513 14:20:06.808243       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["nconnect=8","rsize=1048576","wsize=1048576"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2","csi.storage.k8s.io/pvc/name":"pvc-fkqr9","csi.storage.k8s.io/pvc/namespace":"azurefile-6720","mountPermissions":"0","networkEndpointType":"privateEndpoint","protocol":"nfs","rootSquashType":"AllSquash","secretnamespace":"azurefile-6720","server":"fe0bee15b622942e384b5ef.privatelink.file.core.windows.net","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652450442365-8081-file.csi.azure.com"},"volume_id":"capz-fyjq3n#fe0bee15b622942e384b5ef#pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2###azurefile-6720"}
I0513 14:20:06.808411       1 nodeserver.go:275] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount) fstype() volumeID(capz-fyjq3n#fe0bee15b622942e384b5ef#pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2###azurefile-6720) context(map[csi.storage.k8s.io/pv/name:pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2 csi.storage.k8s.io/pvc/name:pvc-fkqr9 csi.storage.k8s.io/pvc/namespace:azurefile-6720 mountPermissions:0 networkEndpointType:privateEndpoint protocol:nfs rootSquashType:AllSquash secretnamespace:azurefile-6720 server:fe0bee15b622942e384b5ef.privatelink.file.core.windows.net skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652450442365-8081-file.csi.azure.com]) mountflags([nconnect=8 rsize=1048576 wsize=1048576]) mountOptions([nconnect=8 rsize=1048576 vers=4,minorversion=1,sec=sys wsize=1048576]) volumeMountGroup()
I0513 14:20:06.808852       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t nfs -o nconnect=8,rsize=1048576,vers=4,minorversion=1,sec=sys,wsize=1048576 fe0bee15b622942e384b5ef.privatelink.file.core.windows.net:/fe0bee15b622942e384b5ef/pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount)
E0513 14:20:06.844300       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs -o nconnect=8,rsize=1048576,vers=4,minorversion=1,sec=sys,wsize=1048576 fe0bee15b622942e384b5ef.privatelink.file.core.windows.net:/fe0bee15b622942e384b5ef/pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount
Output: mount.nfs: Failed to resolve server fe0bee15b622942e384b5ef.privatelink.file.core.windows.net: Name or service not known

E0513 14:20:06.844353       1 utils.go:81] GRPC error: rpc error: code = Internal desc = volume(capz-fyjq3n#fe0bee15b622942e384b5ef#pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2###azurefile-6720) mount fe0bee15b622942e384b5ef.privatelink.file.core.windows.net:/fe0bee15b622942e384b5ef/pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2 on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount failed with mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs -o nconnect=8,rsize=1048576,vers=4,minorversion=1,sec=sys,wsize=1048576 fe0bee15b622942e384b5ef.privatelink.file.core.windows.net:/fe0bee15b622942e384b5ef/pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount
Output: mount.nfs: Failed to resolve server fe0bee15b622942e384b5ef.privatelink.file.core.windows.net: Name or service not known
I0513 14:20:07.922506       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0513 14:20:07.922533       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["nconnect=8","rsize=1048576","wsize=1048576"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2","csi.storage.k8s.io/pvc/name":"pvc-fkqr9","csi.storage.k8s.io/pvc/namespace":"azurefile-6720","mountPermissions":"0","networkEndpointType":"privateEndpoint","protocol":"nfs","rootSquashType":"AllSquash","secretnamespace":"azurefile-6720","server":"fe0bee15b622942e384b5ef.privatelink.file.core.windows.net","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652450442365-8081-file.csi.azure.com"},"volume_id":"capz-fyjq3n#fe0bee15b622942e384b5ef#pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2###azurefile-6720"}
I0513 14:20:07.923244       1 nodeserver.go:275] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount) fstype() volumeID(capz-fyjq3n#fe0bee15b622942e384b5ef#pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2###azurefile-6720) context(map[csi.storage.k8s.io/pv/name:pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2 csi.storage.k8s.io/pvc/name:pvc-fkqr9 csi.storage.k8s.io/pvc/namespace:azurefile-6720 mountPermissions:0 networkEndpointType:privateEndpoint protocol:nfs rootSquashType:AllSquash secretnamespace:azurefile-6720 server:fe0bee15b622942e384b5ef.privatelink.file.core.windows.net skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652450442365-8081-file.csi.azure.com]) mountflags([nconnect=8 rsize=1048576 wsize=1048576]) mountOptions([nconnect=8 rsize=1048576 vers=4,minorversion=1,sec=sys wsize=1048576]) volumeMountGroup()
I0513 14:20:07.923990       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t nfs -o nconnect=8,rsize=1048576,vers=4,minorversion=1,sec=sys,wsize=1048576 fe0bee15b622942e384b5ef.privatelink.file.core.windows.net:/fe0bee15b622942e384b5ef/pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount)
E0513 14:20:07.937758       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs -o nconnect=8,rsize=1048576,vers=4,minorversion=1,sec=sys,wsize=1048576 fe0bee15b622942e384b5ef.privatelink.file.core.windows.net:/fe0bee15b622942e384b5ef/pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount
Output: mount.nfs: Failed to resolve server fe0bee15b622942e384b5ef.privatelink.file.core.windows.net: Name or service not known

E0513 14:20:07.937796       1 utils.go:81] GRPC error: rpc error: code = Internal desc = volume(capz-fyjq3n#fe0bee15b622942e384b5ef#pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2###azurefile-6720) mount fe0bee15b622942e384b5ef.privatelink.file.core.windows.net:/fe0bee15b622942e384b5ef/pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2 on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount failed with mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs -o nconnect=8,rsize=1048576,vers=4,minorversion=1,sec=sys,wsize=1048576 fe0bee15b622942e384b5ef.privatelink.file.core.windows.net:/fe0bee15b622942e384b5ef/pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount
Output: mount.nfs: Failed to resolve server fe0bee15b622942e384b5ef.privatelink.file.core.windows.net: Name or service not known
I0513 14:20:09.970231       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0513 14:20:09.970261       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["nconnect=8","rsize=1048576","wsize=1048576"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2","csi.storage.k8s.io/pvc/name":"pvc-fkqr9","csi.storage.k8s.io/pvc/namespace":"azurefile-6720","mountPermissions":"0","networkEndpointType":"privateEndpoint","protocol":"nfs","rootSquashType":"AllSquash","secretnamespace":"azurefile-6720","server":"fe0bee15b622942e384b5ef.privatelink.file.core.windows.net","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652450442365-8081-file.csi.azure.com"},"volume_id":"capz-fyjq3n#fe0bee15b622942e384b5ef#pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2###azurefile-6720"}
I0513 14:20:09.970440       1 nodeserver.go:275] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount) fstype() volumeID(capz-fyjq3n#fe0bee15b622942e384b5ef#pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2###azurefile-6720) context(map[csi.storage.k8s.io/pv/name:pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2 csi.storage.k8s.io/pvc/name:pvc-fkqr9 csi.storage.k8s.io/pvc/namespace:azurefile-6720 mountPermissions:0 networkEndpointType:privateEndpoint protocol:nfs rootSquashType:AllSquash secretnamespace:azurefile-6720 server:fe0bee15b622942e384b5ef.privatelink.file.core.windows.net skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652450442365-8081-file.csi.azure.com]) mountflags([nconnect=8 rsize=1048576 wsize=1048576]) mountOptions([nconnect=8 rsize=1048576 vers=4,minorversion=1,sec=sys wsize=1048576]) volumeMountGroup()
I0513 14:20:09.970847       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t nfs -o nconnect=8,rsize=1048576,vers=4,minorversion=1,sec=sys,wsize=1048576 fe0bee15b622942e384b5ef.privatelink.file.core.windows.net:/fe0bee15b622942e384b5ef/pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount)
E0513 14:20:09.991542       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs -o nconnect=8,rsize=1048576,vers=4,minorversion=1,sec=sys,wsize=1048576 fe0bee15b622942e384b5ef.privatelink.file.core.windows.net:/fe0bee15b622942e384b5ef/pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount
Output: mount.nfs: Failed to resolve server fe0bee15b622942e384b5ef.privatelink.file.core.windows.net: Name or service not known

E0513 14:20:09.991581       1 utils.go:81] GRPC error: rpc error: code = Internal desc = volume(capz-fyjq3n#fe0bee15b622942e384b5ef#pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2###azurefile-6720) mount fe0bee15b622942e384b5ef.privatelink.file.core.windows.net:/fe0bee15b622942e384b5ef/pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2 on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount failed with mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs -o nconnect=8,rsize=1048576,vers=4,minorversion=1,sec=sys,wsize=1048576 fe0bee15b622942e384b5ef.privatelink.file.core.windows.net:/fe0bee15b622942e384b5ef/pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount
Output: mount.nfs: Failed to resolve server fe0bee15b622942e384b5ef.privatelink.file.core.windows.net: Name or service not known
I0513 14:20:14.025822       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0513 14:20:14.026056       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["nconnect=8","rsize=1048576","wsize=1048576"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2","csi.storage.k8s.io/pvc/name":"pvc-fkqr9","csi.storage.k8s.io/pvc/namespace":"azurefile-6720","mountPermissions":"0","networkEndpointType":"privateEndpoint","protocol":"nfs","rootSquashType":"AllSquash","secretnamespace":"azurefile-6720","server":"fe0bee15b622942e384b5ef.privatelink.file.core.windows.net","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652450442365-8081-file.csi.azure.com"},"volume_id":"capz-fyjq3n#fe0bee15b622942e384b5ef#pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2###azurefile-6720"}
I0513 14:20:14.026214       1 nodeserver.go:275] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount) fstype() volumeID(capz-fyjq3n#fe0bee15b622942e384b5ef#pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2###azurefile-6720) context(map[csi.storage.k8s.io/pv/name:pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2 csi.storage.k8s.io/pvc/name:pvc-fkqr9 csi.storage.k8s.io/pvc/namespace:azurefile-6720 mountPermissions:0 networkEndpointType:privateEndpoint protocol:nfs rootSquashType:AllSquash secretnamespace:azurefile-6720 server:fe0bee15b622942e384b5ef.privatelink.file.core.windows.net skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652450442365-8081-file.csi.azure.com]) mountflags([nconnect=8 rsize=1048576 wsize=1048576]) mountOptions([nconnect=8 rsize=1048576 vers=4,minorversion=1,sec=sys wsize=1048576]) volumeMountGroup()
I0513 14:20:14.028052       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t nfs -o nconnect=8,rsize=1048576,vers=4,minorversion=1,sec=sys,wsize=1048576 fe0bee15b622942e384b5ef.privatelink.file.core.windows.net:/fe0bee15b622942e384b5ef/pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount)
E0513 14:20:14.047196       1 mount_linux.go:195] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs -o nconnect=8,rsize=1048576,vers=4,minorversion=1,sec=sys,wsize=1048576 fe0bee15b622942e384b5ef.privatelink.file.core.windows.net:/fe0bee15b622942e384b5ef/pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount
Output: mount.nfs: Failed to resolve server fe0bee15b622942e384b5ef.privatelink.file.core.windows.net: Name or service not known

E0513 14:20:14.047236       1 utils.go:81] GRPC error: rpc error: code = Internal desc = volume(capz-fyjq3n#fe0bee15b622942e384b5ef#pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2###azurefile-6720) mount fe0bee15b622942e384b5ef.privatelink.file.core.windows.net:/fe0bee15b622942e384b5ef/pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2 on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount failed with mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs -o nconnect=8,rsize=1048576,vers=4,minorversion=1,sec=sys,wsize=1048576 fe0bee15b622942e384b5ef.privatelink.file.core.windows.net:/fe0bee15b622942e384b5ef/pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount
Output: mount.nfs: Failed to resolve server fe0bee15b622942e384b5ef.privatelink.file.core.windows.net: Name or service not known
I0513 14:20:22.098157       1 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0513 14:20:22.098211       1 utils.go:77] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["nconnect=8","rsize=1048576","wsize=1048576"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2","csi.storage.k8s.io/pvc/name":"pvc-fkqr9","csi.storage.k8s.io/pvc/namespace":"azurefile-6720","mountPermissions":"0","networkEndpointType":"privateEndpoint","protocol":"nfs","rootSquashType":"AllSquash","secretnamespace":"azurefile-6720","server":"fe0bee15b622942e384b5ef.privatelink.file.core.windows.net","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652450442365-8081-file.csi.azure.com"},"volume_id":"capz-fyjq3n#fe0bee15b622942e384b5ef#pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2###azurefile-6720"}
I0513 14:20:22.098365       1 nodeserver.go:275] cifsMountPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount) fstype() volumeID(capz-fyjq3n#fe0bee15b622942e384b5ef#pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2###azurefile-6720) context(map[csi.storage.k8s.io/pv/name:pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2 csi.storage.k8s.io/pvc/name:pvc-fkqr9 csi.storage.k8s.io/pvc/namespace:azurefile-6720 mountPermissions:0 networkEndpointType:privateEndpoint protocol:nfs rootSquashType:AllSquash secretnamespace:azurefile-6720 server:fe0bee15b622942e384b5ef.privatelink.file.core.windows.net skuName:Premium_LRS storage.kubernetes.io/csiProvisionerIdentity:1652450442365-8081-file.csi.azure.com]) mountflags([nconnect=8 rsize=1048576 wsize=1048576]) mountOptions([nconnect=8 rsize=1048576 vers=4,minorversion=1,sec=sys wsize=1048576]) volumeMountGroup()
I0513 14:20:22.098767       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t nfs -o nconnect=8,rsize=1048576,vers=4,minorversion=1,sec=sys,wsize=1048576 fe0bee15b622942e384b5ef.privatelink.file.core.windows.net:/fe0bee15b622942e384b5ef/pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount)
I0513 14:20:22.253675       1 nodeserver.go:302] skip chmod on targetPath(/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount) since mountPermissions is set as 0
I0513 14:20:22.253719       1 nodeserver.go:305] volume(capz-fyjq3n#fe0bee15b622942e384b5ef#pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2###azurefile-6720) mount fe0bee15b622942e384b5ef.privatelink.file.core.windows.net:/fe0bee15b622942e384b5ef/pvcn-926f907c-1d7d-4e79-b6e4-edd3450c2aa2 on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-926f907c-1d7d-4e79-b6e4-edd3450c2aa2/globalmount succeeded
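
The five failed attempts between 14:20:06 and 14:20:14 for this private-endpoint volume are DNS failures, not mount-option problems: mount.nfs cannot yet resolve fe0bee15b622942e384b5ef.privatelink.file.core.windows.net from the node, most likely because the private DNS record for the newly created private endpoint has not propagated, so kubelet keeps retrying until the attempt at 14:20:22 succeeds; chmod is then skipped because mountPermissions is set to 0. Below is a small sketch of waiting for the name to become resolvable before attempting the mount, using the hostname from the log; it is purely illustrative and not something the driver does.

  package main

  import (
      "fmt"
      "net"
      "time"
  )

  // waitForDNS polls until host resolves or the deadline passes, roughly matching the
  // behaviour in this log: early attempts fail with "Name or service not known" and a
  // later one succeeds once the privatelink record becomes visible to the node.
  func waitForDNS(host string, timeout time.Duration) error {
      deadline := time.Now().Add(timeout)
      for {
          if _, err := net.LookupHost(host); err == nil {
              return nil
          } else if time.Now().After(deadline) {
              return fmt.Errorf("DNS for %s did not resolve within %s: %v", host, timeout, err)
          }
          time.Sleep(2 * time.Second)
      }
  }

  func main() {
      err := waitForDNS("fe0bee15b622942e384b5ef.privatelink.file.core.windows.net", 30*time.Second)
      fmt.Println(err)
  }
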
... skipping 969 lines ...
2022/05/13 14:23:41 ===================================================
STEP: GetAccountNumByResourceGroup(capz-fyjq3n) returns 9 accounts

JUnit report was created: /logs/artifacts/junit_01.xml

Ran 31 of 34 Specs in 1996.886 seconds
SUCCESS! -- 31 Passed | 0 Failed | 0 Pending | 3 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 35 lines ...
No journal files were found.
No journal files were found.
No journal files were found.
No journal files were found.
./scripts/../hack/log/log-dump.sh: line 93: TEST_WINDOWS: unbound variable
daemonset.apps "log-dump-node" deleted
Error from server (NotFound): error when deleting "./scripts/../hack/log/../../hack/log/log-dump-daemonset-windows.yaml": daemonsets.apps "log-dump-node-windows" not found
================ REDACTING LOGS ================
All sensitive variables are redacted
cluster.cluster.x-k8s.io "capz-fyjq3n" deleted
kind delete cluster --name=capz || true
Deleting cluster "capz" ...
kind delete cluster --name=capz-e2e || true
... skipping 12 lines ...