PR | andyzhangx: fix: switch base image to fix CVEs
Result | FAILURE
Tests | 1 failed / 13 succeeded
Started |
Elapsed | 1h40m
Revision | a093a52191d3e3d9e4045b29bef24ef84b1ddc4f
Refs | 1704
job-version | v1.27.0-alpha.1.69+d7cb1c54a540c9 |
kubetest-version | v20230117-50d6df3625 |
revision | v1.27.0-alpha.1.69+d7cb1c54a540c9 |
error during make e2e-test: exit status 2
from junit_runner.xml
kubetest Check APIReachability
kubetest Deferred TearDown
kubetest DumpClusterLogs
kubetest GetDeployer
kubetest IsUp
kubetest Prepare
kubetest TearDown
kubetest TearDown Previous
kubetest Timeout
kubetest Up
kubetest kubectl version
kubetest list nodes
kubetest test setup
... skipping 107 lines ...
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 11345  100 11345    0     0   133k      0 --:--:-- --:--:-- --:--:--  133k
Downloading https://get.helm.sh/helm-v3.11.0-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
docker pull k8sprow.azurecr.io/azuredisk-csi:v1.27.0-8635ef7cb96ec669bd2a099af3b1437a19530391 || make container-all push-manifest
Error response from daemon: manifest for k8sprow.azurecr.io/azuredisk-csi:v1.27.0-8635ef7cb96ec669bd2a099af3b1437a19530391 not found: manifest unknown: manifest tagged by "v1.27.0-8635ef7cb96ec669bd2a099af3b1437a19530391" is not found
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver'
CGO_ENABLED=0 GOOS=windows go build -a -ldflags "-X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.driverVersion=v1.27.0-8635ef7cb96ec669bd2a099af3b1437a19530391 -X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.gitCommit=8635ef7cb96ec669bd2a099af3b1437a19530391 -X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.buildDate=2023-01-28T16:06:36Z -extldflags "-static"" -mod vendor -o _output/amd64/azurediskplugin.exe ./pkg/azurediskplugin
docker buildx rm container-builder || true
ERROR: no builder "container-builder" found
docker buildx create --use --name=container-builder
container-builder
# enable qemu for arm64 build
# https://github.com/docker/buildx/issues/464#issuecomment-741507760
docker run --privileged --rm tonistiigi/binfmt --uninstall qemu-aarch64
Unable to find image 'tonistiigi/binfmt:latest' locally
... skipping 1079 lines ...
                type: string
            type: object
            oneOf:
            - required: ["persistentVolumeClaimName"]
            - required: ["volumeSnapshotContentName"]
          volumeSnapshotClassName:
            description: 'VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default Volume SnapshotClasses: one default per CSI Driver. If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exist for a given CSI Driver and more than one have been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field.'
            type: string
        required:
        - source
        type: object
      status:
        description: status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.
... skipping 2 lines ...
            description: 'boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.'
            type: string
          creationTime:
            description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown.
            format: date-time
            type: string
          error:
            description: error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers(i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurrs during the snapshot creation. Upon success, this error field will be cleared.
            properties:
              message:
                description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.'
                type: string
              time:
                description: time is the timestamp when the error was encountered.
                format: date-time
                type: string
            type: object
          readyToUse:
            description: readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown.
            type: boolean
          restoreSize:
            type: string
            description: restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown.
            pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
            x-kubernetes-int-or-string: true
        type: object
      required:
      - spec
      type: object
... skipping 60 lines ...
                type: string
              volumeSnapshotContentName:
                description: volumeSnapshotContentName specifies the name of a pre-existing VolumeSnapshotContent object representing an existing volume snapshot. This field should be set if the snapshot already exists and only needs a representation in Kubernetes. This field is immutable.
                type: string
            type: object
          volumeSnapshotClassName:
            description: 'VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default Volume SnapshotClasses: one default per CSI Driver. If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exist for a given CSI Driver and more than one have been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field.'
            type: string
        required:
        - source
        type: object
      status:
        description: status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.
... skipping 2 lines ...
          description: 'boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.'
          type: string
        creationTime:
          description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown.
          format: date-time
          type: string
        error:
          description: error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers(i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurrs during the snapshot creation. Upon success, this error field will be cleared.
          properties:
            message:
              description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.'
              type: string
            time:
              description: time is the timestamp when the error was encountered.
              format: date-time
              type: string
          type: object
        readyToUse:
          description: readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown.
          type: boolean
        restoreSize:
          type: string
          description: restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown.
          pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
          x-kubernetes-int-or-string: true
      type: object
    required:
    - spec
    type: object
... skipping 254 lines ...
      description: status represents the current information of a snapshot.
      properties:
        creationTime:
          description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. On Unix, the command `date +%s%N` returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC.
          format: int64
          type: integer
        error:
          description: error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared.
          properties:
            message:
              description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.'
              type: string
            time:
              description: time is the timestamp when the error was encountered.
              format: date-time
              type: string
          type: object
        readyToUse:
          description: readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown.
          type: boolean
        restoreSize:
          description: restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown.
          format: int64
          minimum: 0
          type: integer
        snapshotHandle:
          description: snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress.
          type: string
      type: object
    required:
    - spec
    type: object
    served: true
... skipping 108 lines ...
      description: status represents the current information of a snapshot.
      properties:
        creationTime:
          description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. On Unix, the command `date +%s%N` returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC.
          format: int64
          type: integer
        error:
          description: error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared.
          properties:
            message:
              description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.'
              type: string
            time:
              description: time is the timestamp when the error was encountered.
              format: date-time
              type: string
          type: object
        readyToUse:
          description: readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown.
          type: boolean
        restoreSize:
          description: restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown.
          format: int64
          minimum: 0
          type: integer
        snapshotHandle:
          description: snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress.
          type: string
      type: object
    required:
    - spec
    type: object
    served: true
... skipping 865 lines ...
        image: "mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.6.0"
        args:
          - "-csi-address=$(ADDRESS)"
          - "-v=2"
          - "-leader-election"
          - "--leader-election-namespace=kube-system"
          - '-handle-volume-inuse-error=false'
          - '-feature-gates=RecoverVolumeExpansionFailure=true'
          - "-timeout=240s"
        env:
          - name: ADDRESS
            value: /csi/csi.sock
        volumeMounts:
... skipping 216 lines ...
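The CRD schema dumped above is the VolumeSnapshot/VolumeSnapshotContent API that the test deployment installs: spec.source must set exactly one of persistentVolumeClaimName or volumeSnapshotContentName, and volumeSnapshotClassName may be left nil so the default class is picked. As a rough illustration only (not code from this repo, and the external-snapshotter client module path/version below is an assumption), creating such an object from Go with the typed snapshot client looks like this:

package main

import (
	"context"
	"fmt"

	snapshotv1 "github.com/kubernetes-csi/external-snapshotter/client/v6/apis/volumesnapshot/v1"
	snapshotclient "github.com/kubernetes-csi/external-snapshotter/client/v6/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the local kubeconfig; error handling is kept minimal for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := snapshotclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pvcName := "pvc-example" // hypothetical source PVC name
	snap := &snapshotv1.VolumeSnapshot{
		ObjectMeta: metav1.ObjectMeta{Name: "snapshot-demo", Namespace: "default"},
		Spec: snapshotv1.VolumeSnapshotSpec{
			// Exactly one source field is set, matching the oneOf constraint in the schema above.
			Source: snapshotv1.VolumeSnapshotSource{PersistentVolumeClaimName: &pvcName},
			// VolumeSnapshotClassName left nil => the default VolumeSnapshotClass is used.
		},
	}
	created, err := cs.SnapshotV1().VolumeSnapshots("default").Create(context.TODO(), snap, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created VolumeSnapshot:", created.Name)
}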
STEP: setting up the StorageClass 01/28/23 16:14:18.654
STEP: creating a StorageClass 01/28/23 16:14:18.654
STEP: setting up the PVC and PV 01/28/23 16:14:18.725
STEP: creating a PVC 01/28/23 16:14:18.725
STEP: setting up the pod 01/28/23 16:14:18.796
STEP: deploying the pod 01/28/23 16:14:18.797
STEP: checking that the pod's command exits with no error 01/28/23 16:14:18.867
Jan 28 16:14:18.867: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-9qwdb" in namespace "azuredisk-8081" to be "Succeeded or Failed"
Jan 28 16:14:18.935: INFO: Pod "azuredisk-volume-tester-9qwdb": Phase="Pending", Reason="", readiness=false. Elapsed: 67.657369ms
Jan 28 16:14:21.003: INFO: Pod "azuredisk-volume-tester-9qwdb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135985003s
Jan 28 16:14:23.004: INFO: Pod "azuredisk-volume-tester-9qwdb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137228395s
Jan 28 16:14:25.003: INFO: Pod "azuredisk-volume-tester-9qwdb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136369621s
Jan 28 16:14:27.004: INFO: Pod "azuredisk-volume-tester-9qwdb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.137453836s
Jan 28 16:14:29.004: INFO: Pod "azuredisk-volume-tester-9qwdb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.137436961s
... skipping 14 lines ...
Jan 28 16:14:59.003: INFO: Pod "azuredisk-volume-tester-9qwdb": Phase="Pending", Reason="", readiness=false. Elapsed: 40.13618275s
Jan 28 16:15:01.002: INFO: Pod "azuredisk-volume-tester-9qwdb": Phase="Pending", Reason="", readiness=false. Elapsed: 42.135596571s
Jan 28 16:15:03.002: INFO: Pod "azuredisk-volume-tester-9qwdb": Phase="Pending", Reason="", readiness=false. Elapsed: 44.13556013s
Jan 28 16:15:05.004: INFO: Pod "azuredisk-volume-tester-9qwdb": Phase="Pending", Reason="", readiness=false. Elapsed: 46.137232003s
Jan 28 16:15:07.003: INFO: Pod "azuredisk-volume-tester-9qwdb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 48.135906176s
STEP: Saw pod success 01/28/23 16:15:07.003
Jan 28 16:15:07.003: INFO: Pod "azuredisk-volume-tester-9qwdb" satisfied condition "Succeeded or Failed"
Jan 28 16:15:07.003: INFO: deleting Pod "azuredisk-8081"/"azuredisk-volume-tester-9qwdb"
Jan 28 16:15:07.111: INFO: Pod azuredisk-volume-tester-9qwdb has the following logs: hello world
STEP: Deleting pod azuredisk-volume-tester-9qwdb in namespace azuredisk-8081 01/28/23 16:15:07.111
STEP: validating provisioned PV 01/28/23 16:15:07.259
STEP: checking the PV 01/28/23 16:15:07.327
... skipping 39 lines ...
Jan 28 16:15:51.489: INFO: PersistentVolumeClaim pvc-pkqmp found but phase is Pending instead of Bound.
Jan 28 16:15:53.557: INFO: PersistentVolumeClaim pvc-pkqmp found and phase=Bound (4.203273776s)
STEP: checking the PVC 01/28/23 16:15:53.557
STEP: validating provisioned PV 01/28/23 16:15:53.625
STEP: checking the PV 01/28/23 16:15:53.693
STEP: deploying the pod 01/28/23 16:15:53.693
STEP: checking that the pods command exits with no error 01/28/23 16:15:53.764
Jan 28 16:15:53.764: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-pcj6p" in namespace "azuredisk-2540" to be "Succeeded or Failed"
Jan 28 16:15:53.831: INFO: Pod "azuredisk-volume-tester-pcj6p": Phase="Pending", Reason="", readiness=false. Elapsed: 67.461773ms
Jan 28 16:15:55.900: INFO: Pod "azuredisk-volume-tester-pcj6p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135534303s
Jan 28 16:15:57.900: INFO: Pod "azuredisk-volume-tester-pcj6p": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135723804s
Jan 28 16:15:59.900: INFO: Pod "azuredisk-volume-tester-pcj6p": Phase="Pending", Reason="", readiness=false. Elapsed: 6.135494029s
Jan 28 16:16:01.902: INFO: Pod "azuredisk-volume-tester-pcj6p": Phase="Pending", Reason="", readiness=false. Elapsed: 8.137677446s
Jan 28 16:16:03.900: INFO: Pod "azuredisk-volume-tester-pcj6p": Phase="Pending", Reason="", readiness=false. Elapsed: 10.13625585s
Jan 28 16:16:05.900: INFO: Pod "azuredisk-volume-tester-pcj6p": Phase="Pending", Reason="", readiness=false. Elapsed: 12.135868248s
Jan 28 16:16:07.904: INFO: Pod "azuredisk-volume-tester-pcj6p": Phase="Pending", Reason="", readiness=false. Elapsed: 14.139820982s
Jan 28 16:16:09.901: INFO: Pod "azuredisk-volume-tester-pcj6p": Phase="Pending", Reason="", readiness=false. Elapsed: 16.137240922s
Jan 28 16:16:11.902: INFO: Pod "azuredisk-volume-tester-pcj6p": Phase="Pending", Reason="", readiness=false. Elapsed: 18.1376452s
Jan 28 16:16:13.901: INFO: Pod "azuredisk-volume-tester-pcj6p": Phase="Pending", Reason="", readiness=false. Elapsed: 20.137365232s
Jan 28 16:16:15.900: INFO: Pod "azuredisk-volume-tester-pcj6p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.136381072s
STEP: Saw pod success 01/28/23 16:16:15.9
Jan 28 16:16:15.901: INFO: Pod "azuredisk-volume-tester-pcj6p" satisfied condition "Succeeded or Failed"
Jan 28 16:16:15.901: INFO: deleting Pod "azuredisk-2540"/"azuredisk-volume-tester-pcj6p"
Jan 28 16:16:16.003: INFO: Pod azuredisk-volume-tester-pcj6p has the following logs: hello world
STEP: Deleting pod azuredisk-volume-tester-pcj6p in namespace azuredisk-2540 01/28/23 16:16:16.003
Jan 28 16:16:16.082: INFO: deleting PVC "azuredisk-2540"/"pvc-pkqmp"
Jan 28 16:16:16.082: INFO: Deleting PersistentVolumeClaim "pvc-pkqmp"
... skipping 30 lines ...
STEP: setting up the StorageClass 01/28/23 16:16:57.96
STEP: creating a StorageClass 01/28/23 16:16:57.96
STEP: setting up the PVC and PV 01/28/23 16:16:58.029
STEP: creating a PVC 01/28/23 16:16:58.029
STEP: setting up the pod 01/28/23 16:16:58.106
STEP: deploying the pod 01/28/23 16:16:58.107
STEP: checking that the pod's command exits with no error 01/28/23 16:16:58.176
Jan 28 16:16:58.176: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-bd8fw" in namespace "azuredisk-4728" to be "Succeeded or Failed"
Jan 28 16:16:58.243: INFO: Pod "azuredisk-volume-tester-bd8fw": Phase="Pending", Reason="", readiness=false. Elapsed: 67.358625ms
Jan 28 16:17:00.309: INFO: Pod "azuredisk-volume-tester-bd8fw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133409544s
Jan 28 16:17:02.309: INFO: Pod "azuredisk-volume-tester-bd8fw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13306053s
Jan 28 16:17:04.309: INFO: Pod "azuredisk-volume-tester-bd8fw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.133498554s
Jan 28 16:17:06.310: INFO: Pod "azuredisk-volume-tester-bd8fw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.134290231s
Jan 28 16:17:08.310: INFO: Pod "azuredisk-volume-tester-bd8fw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.134264912s
... skipping 14 lines ...
Jan 28 16:17:38.310: INFO: Pod "azuredisk-volume-tester-bd8fw": Phase="Pending", Reason="", readiness=false. Elapsed: 40.134067882s
Jan 28 16:17:40.309: INFO: Pod "azuredisk-volume-tester-bd8fw": Phase="Pending", Reason="", readiness=false. Elapsed: 42.133028985s
Jan 28 16:17:42.310: INFO: Pod "azuredisk-volume-tester-bd8fw": Phase="Pending", Reason="", readiness=false. Elapsed: 44.134594149s
Jan 28 16:17:44.312: INFO: Pod "azuredisk-volume-tester-bd8fw": Phase="Pending", Reason="", readiness=false. Elapsed: 46.135963094s
Jan 28 16:17:46.311: INFO: Pod "azuredisk-volume-tester-bd8fw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 48.134846985s
STEP: Saw pod success 01/28/23 16:17:46.311
Jan 28 16:17:46.311: INFO: Pod "azuredisk-volume-tester-bd8fw" satisfied condition "Succeeded or Failed"
Jan 28 16:17:46.311: INFO: deleting Pod "azuredisk-4728"/"azuredisk-volume-tester-bd8fw"
Jan 28 16:17:46.421: INFO: Pod azuredisk-volume-tester-bd8fw has the following logs: hello world
STEP: Deleting pod azuredisk-volume-tester-bd8fw in namespace azuredisk-4728 01/28/23 16:17:46.421
STEP: validating provisioned PV 01/28/23 16:17:46.563
STEP: checking the PV 01/28/23 16:17:46.628
... skipping 34 lines ...
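The passing cases above all follow the same dynamic provisioning flow: create a StorageClass and PVC, deploy a short-lived pod that writes "hello world" to the mounted disk, and wait for it to reach "Succeeded or Failed". The real pod spec is built by the repo's test/e2e/testsuites package (see the stack trace later in this log); the sketch below is only a hedged guess at what such a volume-tester pod looks like, with an assumed image, command, and names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// testerPod sketches the kind of short-lived pod these e2e cases deploy:
// it mounts a dynamically provisioned PVC and exits 0 after writing to it,
// so the pod phase can become Succeeded.
func testerPod(pvcName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "azuredisk-volume-tester-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // pod must terminate, not restart
			Containers: []corev1.Container{{
				Name:    "volume-tester",
				Image:   "mcr.microsoft.com/oss/busybox:latest", // assumed image, not taken from this log
				Command: []string{"sh", "-c", "echo 'hello world' > /mnt/test-1/data && cat /mnt/test-1/data"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume-1",
					MountPath: "/mnt/test-1",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume-1",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ClaimName: pvcName},
				},
			}},
		},
	}
}

func main() {
	p := testerPod("pvc-example")
	fmt.Println("tester command:", p.Spec.Containers[0].Command)
}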
STEP: setting up the PVC and PV 01/28/23 16:18:28.515
STEP: creating a PVC 01/28/23 16:18:28.515
STEP: setting up the pod 01/28/23 16:18:28.584
STEP: deploying the pod 01/28/23 16:18:28.585
STEP: checking that the pod has 'FailedMount' event 01/28/23 16:18:28.654
Jan 28 16:18:50.779: INFO: deleting Pod "azuredisk-5466"/"azuredisk-volume-tester-6fvc9"
Jan 28 16:18:50.842: INFO: Error getting logs for pod azuredisk-volume-tester-6fvc9: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-6fvc9)
STEP: Deleting pod azuredisk-volume-tester-6fvc9 in namespace azuredisk-5466 01/28/23 16:18:50.842
STEP: validating provisioned PV 01/28/23 16:18:50.96
STEP: checking the PV 01/28/23 16:18:51.018
Jan 28 16:18:51.018: INFO: deleting PVC "azuredisk-5466"/"pvc-snzzx"
Jan 28 16:18:51.018: INFO: Deleting PersistentVolumeClaim "pvc-snzzx"
STEP: waiting for claim's PV "pvc-798ba9b7-0290-4714-99fa-51a1ed445c25" to be deleted 01/28/23 16:18:51.079
... skipping 31 lines ...
STEP: setting up the StorageClass 01/28/23 16:19:42.819
STEP: creating a StorageClass 01/28/23 16:19:42.819
STEP: setting up the PVC and PV 01/28/23 16:19:42.879
STEP: creating a PVC 01/28/23 16:19:42.88
STEP: setting up the pod 01/28/23 16:19:42.946
STEP: deploying the pod 01/28/23 16:19:42.946
STEP: checking that the pod's command exits with no error 01/28/23 16:19:43.006
Jan 28 16:19:43.006: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-whjhf" in namespace "azuredisk-2790" to be "Succeeded or Failed"
Jan 28 16:19:43.064: INFO: Pod "azuredisk-volume-tester-whjhf": Phase="Pending", Reason="", readiness=false. Elapsed: 58.069952ms
Jan 28 16:19:45.123: INFO: Pod "azuredisk-volume-tester-whjhf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116656831s
Jan 28 16:19:47.125: INFO: Pod "azuredisk-volume-tester-whjhf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118563022s
Jan 28 16:19:49.122: INFO: Pod "azuredisk-volume-tester-whjhf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116158265s
Jan 28 16:19:51.123: INFO: Pod "azuredisk-volume-tester-whjhf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116779584s
Jan 28 16:19:53.122: INFO: Pod "azuredisk-volume-tester-whjhf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.116442026s
... skipping 3 lines ...
Jan 28 16:20:01.122: INFO: Pod "azuredisk-volume-tester-whjhf": Phase="Pending", Reason="", readiness=false. Elapsed: 18.116507853s
Jan 28 16:20:03.123: INFO: Pod "azuredisk-volume-tester-whjhf": Phase="Pending", Reason="", readiness=false. Elapsed: 20.116691616s
Jan 28 16:20:05.124: INFO: Pod "azuredisk-volume-tester-whjhf": Phase="Pending", Reason="", readiness=false. Elapsed: 22.118158632s
Jan 28 16:20:07.123: INFO: Pod "azuredisk-volume-tester-whjhf": Phase="Pending", Reason="", readiness=false. Elapsed: 24.11734261s
Jan 28 16:20:09.123: INFO: Pod "azuredisk-volume-tester-whjhf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.116884817s
STEP: Saw pod success 01/28/23 16:20:09.123
Jan 28 16:20:09.123: INFO: Pod "azuredisk-volume-tester-whjhf" satisfied condition "Succeeded or Failed"
Jan 28 16:20:09.123: INFO: deleting Pod "azuredisk-2790"/"azuredisk-volume-tester-whjhf"
Jan 28 16:20:09.190: INFO: Pod azuredisk-volume-tester-whjhf has the following logs: e2e-test
STEP: Deleting pod azuredisk-volume-tester-whjhf in namespace azuredisk-2790 01/28/23 16:20:09.19
STEP: validating provisioned PV 01/28/23 16:20:09.314
STEP: checking the PV 01/28/23 16:20:09.372
... skipping 37 lines ...
STEP: creating volume in external rg azuredisk-csi-driver-test-bb324370-9f27-11ed-9172-ae7499b6df38 01/28/23 16:20:52.81
STEP: setting up the StorageClass 01/28/23 16:20:52.811
STEP: creating a StorageClass 01/28/23 16:20:52.811
STEP: setting up the PVC and PV 01/28/23 16:20:52.872
STEP: creating a PVC 01/28/23 16:20:52.872
STEP: deploying the pod 01/28/23 16:20:52.942
STEP: checking that the pod's command exits with no error 01/28/23 16:20:53.002
Jan 28 16:20:53.002: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-gq7fx" in namespace "azuredisk-5356" to be "Succeeded or Failed"
Jan 28 16:20:53.061: INFO: Pod "azuredisk-volume-tester-gq7fx": Phase="Pending", Reason="", readiness=false. Elapsed: 58.435364ms
Jan 28 16:20:55.117: INFO: Pod "azuredisk-volume-tester-gq7fx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114321791s
Jan 28 16:20:57.117: INFO: Pod "azuredisk-volume-tester-gq7fx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114491093s
Jan 28 16:20:59.118: INFO: Pod "azuredisk-volume-tester-gq7fx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115296333s
Jan 28 16:21:01.116: INFO: Pod "azuredisk-volume-tester-gq7fx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.113993499s
Jan 28 16:21:03.115: INFO: Pod "azuredisk-volume-tester-gq7fx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.112981752s
... skipping 2 lines ...
Jan 28 16:21:09.117: INFO: Pod "azuredisk-volume-tester-gq7fx": Phase="Pending", Reason="", readiness=false. Elapsed: 16.114474549s
Jan 28 16:21:11.117: INFO: Pod "azuredisk-volume-tester-gq7fx": Phase="Pending", Reason="", readiness=false. Elapsed: 18.114903621s
Jan 28 16:21:13.115: INFO: Pod "azuredisk-volume-tester-gq7fx": Phase="Pending", Reason="", readiness=false. Elapsed: 20.113038887s
Jan 28 16:21:15.117: INFO: Pod "azuredisk-volume-tester-gq7fx": Phase="Pending", Reason="", readiness=false. Elapsed: 22.11429898s
Jan 28 16:21:17.115: INFO: Pod "azuredisk-volume-tester-gq7fx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.113110152s
STEP: Saw pod success 01/28/23 16:21:17.115
Jan 28 16:21:17.116: INFO: Pod "azuredisk-volume-tester-gq7fx" satisfied condition "Succeeded or Failed"
Jan 28 16:21:17.116: INFO: deleting Pod "azuredisk-5356"/"azuredisk-volume-tester-gq7fx"
Jan 28 16:21:17.172: INFO: Pod azuredisk-volume-tester-gq7fx has the following logs: hello world
STEP: Deleting pod azuredisk-volume-tester-gq7fx in namespace azuredisk-5356 01/28/23 16:21:17.172
STEP: validating provisioned PV 01/28/23 16:21:17.289
STEP: checking the PV 01/28/23 16:21:17.344
... skipping 44 lines ...
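The "creating volume in external rg ..." cases above exercise provisioning the managed disk into a resource group other than the cluster's node resource group. For disk.csi.azure.com this is driven by a StorageClass parameter; the sketch below shows such a class built with client-go types. The resourceGroup and skuName parameter names follow the driver's documented StorageClass parameters, but the class name, SKU, and resource-group value are placeholders, not values from this run:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	reclaim := corev1.PersistentVolumeReclaimDelete
	binding := storagev1.VolumeBindingImmediate

	// StorageClass that asks disk.csi.azure.com to create the managed disk in an
	// explicitly named ("external") resource group instead of the default one.
	sc := &storagev1.StorageClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "managed-csi-external-rg"},
		Provisioner: "disk.csi.azure.com",
		Parameters: map[string]string{
			"skuName":       "StandardSSD_LRS",
			"resourceGroup": "azuredisk-csi-driver-test-external-rg", // placeholder external resource group
		},
		ReclaimPolicy:     &reclaim,
		VolumeBindingMode: &binding,
	}
	fmt.Printf("%s provisioner=%s parameters=%v\n", sc.Name, sc.Provisioner, sc.Parameters)
}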
[1mSTEP:[0m creating volume in external rg azuredisk-csi-driver-test-eda63463-9f27-11ed-9172-ae7499b6df38 [38;5;243m01/28/23 16:22:16.211[0m [1mSTEP:[0m setting up the StorageClass [38;5;243m01/28/23 16:22:16.211[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/28/23 16:22:16.211[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/28/23 16:22:16.267[0m [1mSTEP:[0m creating a PVC [38;5;243m01/28/23 16:22:16.268[0m [1mSTEP:[0m deploying the pod [38;5;243m01/28/23 16:22:16.322[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/28/23 16:22:16.38[0m Jan 28 16:22:16.380: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-xqhbb" in namespace "azuredisk-5194" to be "Succeeded or Failed" Jan 28 16:22:16.439: INFO: Pod "azuredisk-volume-tester-xqhbb": Phase="Pending", Reason="", readiness=false. Elapsed: 58.82336ms Jan 28 16:22:18.494: INFO: Pod "azuredisk-volume-tester-xqhbb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113621792s Jan 28 16:22:20.495: INFO: Pod "azuredisk-volume-tester-xqhbb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115171974s Jan 28 16:22:22.495: INFO: Pod "azuredisk-volume-tester-xqhbb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114533219s Jan 28 16:22:24.498: INFO: Pod "azuredisk-volume-tester-xqhbb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.118297772s Jan 28 16:22:26.496: INFO: Pod "azuredisk-volume-tester-xqhbb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.11567454s ... skipping 9 lines ... Jan 28 16:22:46.497: INFO: Pod "azuredisk-volume-tester-xqhbb": Phase="Pending", Reason="", readiness=false. Elapsed: 30.116736329s Jan 28 16:22:48.495: INFO: Pod "azuredisk-volume-tester-xqhbb": Phase="Pending", Reason="", readiness=false. Elapsed: 32.114915235s Jan 28 16:22:50.495: INFO: Pod "azuredisk-volume-tester-xqhbb": Phase="Pending", Reason="", readiness=false. Elapsed: 34.115414799s Jan 28 16:22:52.494: INFO: Pod "azuredisk-volume-tester-xqhbb": Phase="Pending", Reason="", readiness=false. Elapsed: 36.113585437s Jan 28 16:22:54.494: INFO: Pod "azuredisk-volume-tester-xqhbb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.113756486s [1mSTEP:[0m Saw pod success [38;5;243m01/28/23 16:22:54.494[0m Jan 28 16:22:54.494: INFO: Pod "azuredisk-volume-tester-xqhbb" satisfied condition "Succeeded or Failed" Jan 28 16:22:54.494: INFO: deleting Pod "azuredisk-5194"/"azuredisk-volume-tester-xqhbb" Jan 28 16:22:54.580: INFO: Pod azuredisk-volume-tester-xqhbb has the following logs: hello world hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-xqhbb in namespace azuredisk-5194 [38;5;243m01/28/23 16:22:54.58[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/28/23 16:22:54.697[0m ... skipping 63 lines ... 
[1mSTEP:[0m creating volume in external rg azuredisk-csi-driver-test-eda63463-9f27-11ed-9172-ae7499b6df38 [38;5;243m01/28/23 16:22:16.211[0m [1mSTEP:[0m setting up the StorageClass [38;5;243m01/28/23 16:22:16.211[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/28/23 16:22:16.211[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/28/23 16:22:16.267[0m [1mSTEP:[0m creating a PVC [38;5;243m01/28/23 16:22:16.268[0m [1mSTEP:[0m deploying the pod [38;5;243m01/28/23 16:22:16.322[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/28/23 16:22:16.38[0m Jan 28 16:22:16.380: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-xqhbb" in namespace "azuredisk-5194" to be "Succeeded or Failed" Jan 28 16:22:16.439: INFO: Pod "azuredisk-volume-tester-xqhbb": Phase="Pending", Reason="", readiness=false. Elapsed: 58.82336ms Jan 28 16:22:18.494: INFO: Pod "azuredisk-volume-tester-xqhbb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113621792s Jan 28 16:22:20.495: INFO: Pod "azuredisk-volume-tester-xqhbb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115171974s Jan 28 16:22:22.495: INFO: Pod "azuredisk-volume-tester-xqhbb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114533219s Jan 28 16:22:24.498: INFO: Pod "azuredisk-volume-tester-xqhbb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.118297772s Jan 28 16:22:26.496: INFO: Pod "azuredisk-volume-tester-xqhbb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.11567454s ... skipping 9 lines ... Jan 28 16:22:46.497: INFO: Pod "azuredisk-volume-tester-xqhbb": Phase="Pending", Reason="", readiness=false. Elapsed: 30.116736329s Jan 28 16:22:48.495: INFO: Pod "azuredisk-volume-tester-xqhbb": Phase="Pending", Reason="", readiness=false. Elapsed: 32.114915235s Jan 28 16:22:50.495: INFO: Pod "azuredisk-volume-tester-xqhbb": Phase="Pending", Reason="", readiness=false. Elapsed: 34.115414799s Jan 28 16:22:52.494: INFO: Pod "azuredisk-volume-tester-xqhbb": Phase="Pending", Reason="", readiness=false. Elapsed: 36.113585437s Jan 28 16:22:54.494: INFO: Pod "azuredisk-volume-tester-xqhbb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.113756486s [1mSTEP:[0m Saw pod success [38;5;243m01/28/23 16:22:54.494[0m Jan 28 16:22:54.494: INFO: Pod "azuredisk-volume-tester-xqhbb" satisfied condition "Succeeded or Failed" Jan 28 16:22:54.494: INFO: deleting Pod "azuredisk-5194"/"azuredisk-volume-tester-xqhbb" Jan 28 16:22:54.580: INFO: Pod azuredisk-volume-tester-xqhbb has the following logs: hello world hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-xqhbb in namespace azuredisk-5194 [38;5;243m01/28/23 16:22:54.58[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/28/23 16:22:54.697[0m ... skipping 53 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/28/23 16:25:50.289[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/28/23 16:25:50.289[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/28/23 16:25:50.345[0m [1mSTEP:[0m creating a PVC [38;5;243m01/28/23 16:25:50.345[0m [1mSTEP:[0m setting up the pod [38;5;243m01/28/23 16:25:50.406[0m [1mSTEP:[0m deploying the pod [38;5;243m01/28/23 16:25:50.406[0m [1mSTEP:[0m checking that the pod's command exits with an error [38;5;243m01/28/23 16:25:50.464[0m Jan 28 16:25:50.464: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-klrd5" in namespace "azuredisk-1353" to be "Error status code" Jan 28 16:25:50.520: INFO: Pod "azuredisk-volume-tester-klrd5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 55.396111ms Jan 28 16:25:52.576: INFO: Pod "azuredisk-volume-tester-klrd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112208596s Jan 28 16:25:54.576: INFO: Pod "azuredisk-volume-tester-klrd5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111498986s Jan 28 16:25:56.576: INFO: Pod "azuredisk-volume-tester-klrd5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.111621732s Jan 28 16:25:58.575: INFO: Pod "azuredisk-volume-tester-klrd5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.110894475s Jan 28 16:26:00.577: INFO: Pod "azuredisk-volume-tester-klrd5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.112754435s ... skipping 24 lines ... Jan 28 16:26:50.577: INFO: Pod "azuredisk-volume-tester-klrd5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.112621543s Jan 28 16:26:52.576: INFO: Pod "azuredisk-volume-tester-klrd5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.111569436s Jan 28 16:26:54.577: INFO: Pod "azuredisk-volume-tester-klrd5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.112401314s Jan 28 16:26:56.577: INFO: Pod "azuredisk-volume-tester-klrd5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.112677266s Jan 28 16:26:58.576: INFO: Pod "azuredisk-volume-tester-klrd5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.111946158s Jan 28 16:27:00.578: INFO: Pod "azuredisk-volume-tester-klrd5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.11344733s Jan 28 16:27:02.575: INFO: Pod "azuredisk-volume-tester-klrd5": Phase="Failed", Reason="", readiness=false. Elapsed: 1m12.111074527s [1mSTEP:[0m Saw pod failure [38;5;243m01/28/23 16:27:02.575[0m Jan 28 16:27:02.576: INFO: Pod "azuredisk-volume-tester-klrd5" satisfied condition "Error status code" [1mSTEP:[0m checking that pod logs contain expected message [38;5;243m01/28/23 16:27:02.576[0m Jan 28 16:27:02.666: INFO: deleting Pod "azuredisk-1353"/"azuredisk-volume-tester-klrd5" Jan 28 16:27:02.724: INFO: Pod azuredisk-volume-tester-klrd5 has the following logs: touch: /mnt/test-1/data: Read-only file system [1mSTEP:[0m Deleting pod azuredisk-volume-tester-klrd5 in namespace azuredisk-1353 [38;5;243m01/28/23 16:27:02.724[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/28/23 16:27:02.84[0m ... skipping 34 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/28/23 16:25:50.289[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/28/23 16:25:50.289[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/28/23 16:25:50.345[0m [1mSTEP:[0m creating a PVC [38;5;243m01/28/23 16:25:50.345[0m [1mSTEP:[0m setting up the pod [38;5;243m01/28/23 16:25:50.406[0m [1mSTEP:[0m deploying the pod [38;5;243m01/28/23 16:25:50.406[0m [1mSTEP:[0m checking that the pod's command exits with an error [38;5;243m01/28/23 16:25:50.464[0m Jan 28 16:25:50.464: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-klrd5" in namespace "azuredisk-1353" to be "Error status code" Jan 28 16:25:50.520: INFO: Pod "azuredisk-volume-tester-klrd5": Phase="Pending", Reason="", readiness=false. Elapsed: 55.396111ms Jan 28 16:25:52.576: INFO: Pod "azuredisk-volume-tester-klrd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112208596s Jan 28 16:25:54.576: INFO: Pod "azuredisk-volume-tester-klrd5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111498986s Jan 28 16:25:56.576: INFO: Pod "azuredisk-volume-tester-klrd5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.111621732s Jan 28 16:25:58.575: INFO: Pod "azuredisk-volume-tester-klrd5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.110894475s Jan 28 16:26:00.577: INFO: Pod "azuredisk-volume-tester-klrd5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.112754435s ... skipping 24 lines ... Jan 28 16:26:50.577: INFO: Pod "azuredisk-volume-tester-klrd5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.112621543s Jan 28 16:26:52.576: INFO: Pod "azuredisk-volume-tester-klrd5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.111569436s Jan 28 16:26:54.577: INFO: Pod "azuredisk-volume-tester-klrd5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.112401314s Jan 28 16:26:56.577: INFO: Pod "azuredisk-volume-tester-klrd5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.112677266s Jan 28 16:26:58.576: INFO: Pod "azuredisk-volume-tester-klrd5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.111946158s Jan 28 16:27:00.578: INFO: Pod "azuredisk-volume-tester-klrd5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.11344733s Jan 28 16:27:02.575: INFO: Pod "azuredisk-volume-tester-klrd5": Phase="Failed", Reason="", readiness=false. Elapsed: 1m12.111074527s [1mSTEP:[0m Saw pod failure [38;5;243m01/28/23 16:27:02.575[0m Jan 28 16:27:02.576: INFO: Pod "azuredisk-volume-tester-klrd5" satisfied condition "Error status code" [1mSTEP:[0m checking that pod logs contain expected message [38;5;243m01/28/23 16:27:02.576[0m Jan 28 16:27:02.666: INFO: deleting Pod "azuredisk-1353"/"azuredisk-volume-tester-klrd5" Jan 28 16:27:02.724: INFO: Pod azuredisk-volume-tester-klrd5 has the following logs: touch: /mnt/test-1/data: Read-only file system [1mSTEP:[0m Deleting pod azuredisk-volume-tester-klrd5 in namespace azuredisk-1353 [38;5;243m01/28/23 16:27:02.724[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/28/23 16:27:02.84[0m ... skipping 669 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/28/23 16:35:09.059[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/28/23 16:35:09.06[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/28/23 16:35:09.118[0m [1mSTEP:[0m creating a PVC [38;5;243m01/28/23 16:35:09.118[0m [1mSTEP:[0m setting up the pod [38;5;243m01/28/23 16:35:09.176[0m [1mSTEP:[0m deploying the pod [38;5;243m01/28/23 16:35:09.176[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/28/23 16:35:09.233[0m Jan 28 16:35:09.233: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-pds8p" in namespace "azuredisk-59" to be "Succeeded or Failed" Jan 28 16:35:09.288: INFO: Pod "azuredisk-volume-tester-pds8p": Phase="Pending", Reason="", readiness=false. Elapsed: 55.592231ms Jan 28 16:35:11.343: INFO: Pod "azuredisk-volume-tester-pds8p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109768472s Jan 28 16:35:13.372: INFO: Pod "azuredisk-volume-tester-pds8p": Phase="Pending", Reason="", readiness=false. Elapsed: 4.138740983s Jan 28 16:35:15.345: INFO: Pod "azuredisk-volume-tester-pds8p": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112408906s Jan 28 16:35:17.344: INFO: Pod "azuredisk-volume-tester-pds8p": Phase="Pending", Reason="", readiness=false. Elapsed: 8.111434606s Jan 28 16:35:19.344: INFO: Pod "azuredisk-volume-tester-pds8p": Phase="Pending", Reason="", readiness=false. Elapsed: 10.111378686s ... skipping 440 lines ... Jan 28 16:50:01.345: INFO: Pod "azuredisk-volume-tester-pds8p": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14m52.112533889s Jan 28 16:50:03.346: INFO: Pod "azuredisk-volume-tester-pds8p": Phase="Pending", Reason="", readiness=false. Elapsed: 14m54.113279931s Jan 28 16:50:05.345: INFO: Pod "azuredisk-volume-tester-pds8p": Phase="Pending", Reason="", readiness=false. Elapsed: 14m56.111630218s Jan 28 16:50:07.348: INFO: Pod "azuredisk-volume-tester-pds8p": Phase="Pending", Reason="", readiness=false. Elapsed: 14m58.115406642s Jan 28 16:50:09.346: INFO: Pod "azuredisk-volume-tester-pds8p": Phase="Pending", Reason="", readiness=false. Elapsed: 15m0.11296674s Jan 28 16:50:09.401: INFO: Pod "azuredisk-volume-tester-pds8p": Phase="Pending", Reason="", readiness=false. Elapsed: 15m0.16760061s Jan 28 16:50:09.402: INFO: Unexpected error: <*pod.timeoutError | 0xc000daed80>: { msg: "timed out while waiting for pod azuredisk-59/azuredisk-volume-tester-pds8p to be Succeeded or Failed", observedObjects: [ <*v1.Pod | 0xc001005680>{ TypeMeta: {Kind: "", APIVersion: ""}, ObjectMeta: { Name: "azuredisk-volume-tester-pds8p", GenerateName: "azuredisk-volume-tester-", ... skipping 138 lines ... Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package. Learn more here: https://onsi.github.io/gomega/#adjusting-output Jan 28 16:50:09.402: FAIL: timed out while waiting for pod azuredisk-59/azuredisk-volume-tester-pds8p to be Succeeded or Failed Full Stack Trace sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*TestPod).WaitForSuccess(0x2253857?) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:823 +0x5d sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*DynamicallyProvisionedVolumeCloningTest).Run(0xc000a19c28, {0x270dda0, 0xc000209d40}, 0x6?) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/dynamically_provisioned_volume_cloning_tester.go:55 +0x237 sigs.k8s.io/azuredisk-csi-driver/test/e2e.(*dynamicProvisioningTestSuite).defineTests.func14() /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:569 +0x55a Jan 28 16:50:09.402: INFO: deleting Pod "azuredisk-59"/"azuredisk-volume-tester-pds8p" Jan 28 16:50:09.496: INFO: Error getting logs for pod azuredisk-volume-tester-pds8p: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-pds8p) [1mSTEP:[0m Deleting pod azuredisk-volume-tester-pds8p in namespace azuredisk-59 [38;5;243m01/28/23 16:50:09.496[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/28/23 16:50:09.607[0m [1mSTEP:[0m checking the PV [38;5;243m01/28/23 16:50:09.661[0m Jan 28 16:50:09.661: INFO: deleting PVC "azuredisk-59"/"pvc-b6lbd" Jan 28 16:50:09.661: INFO: Deleting PersistentVolumeClaim "pvc-b6lbd" [1mSTEP:[0m waiting for claim's PV "pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7" to be deleted [38;5;243m01/28/23 16:50:09.718[0m ... skipping 10 lines ... Jan 28 16:50:50.224: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-59 to be removed Jan 28 16:50:50.278: INFO: Claim "azuredisk-59" in namespace "pvc-b6lbd" doesn't exist in the system Jan 28 16:50:50.278: INFO: deleting StorageClass azuredisk-59-disk.csi.azure.com-dynamic-sc-22g7q [1mSTEP:[0m dump namespace information after failure [38;5;243m01/28/23 16:50:50.334[0m [1mSTEP:[0m Destroying namespace "azuredisk-59" for this suite. 
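Editor's note on the repeating "Waiting up to 15m0s ... Phase="Pending" ... Elapsed: ..." records above: the suite's WaitForSuccess helper (testsuites.go:823 in the stack trace) polls the pod phase until it reaches Succeeded or Failed or the 15-minute budget runs out, which is exactly how azuredisk-volume-tester-pds8p ended up as a pod.timeoutError. A minimal client-go sketch of that polling pattern, assuming a generic waitForPodSuccess helper rather than the repo's actual implementation:

package e2eutil

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodSuccess polls the pod phase every 2s for up to 15m, mirroring the
// "Phase=... Elapsed: ..." records in the log; exhausting the timeout is what
// surfaces as the timeout error seen above.
func waitForPodSuccess(c kubernetes.Interface, ns, name string) error {
	start := time.Now()
	return wait.PollImmediate(2*time.Second, 15*time.Minute, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %v\n", name, pod.Status.Phase, time.Since(start))
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil // done
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %s/%s failed", ns, name)
		default:
			return false, nil // Pending/Running: keep polling
		}
	})
}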
[38;5;243m01/28/23 16:50:50.334[0m [38;5;243m------------------------------[0m [38;5;9m• [FAILED] [942.161 seconds][0m Dynamic Provisioning [38;5;243m/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:41[0m [multi-az] [38;5;243m/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:48[0m [38;5;9m[1m[It] should clone a volume from an existing volume and read from it [disk.csi.azure.com][0m [38;5;243m/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:539[0m ... skipping 8 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/28/23 16:35:09.059[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/28/23 16:35:09.06[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/28/23 16:35:09.118[0m [1mSTEP:[0m creating a PVC [38;5;243m01/28/23 16:35:09.118[0m [1mSTEP:[0m setting up the pod [38;5;243m01/28/23 16:35:09.176[0m [1mSTEP:[0m deploying the pod [38;5;243m01/28/23 16:35:09.176[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/28/23 16:35:09.233[0m Jan 28 16:35:09.233: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-pds8p" in namespace "azuredisk-59" to be "Succeeded or Failed" Jan 28 16:35:09.288: INFO: Pod "azuredisk-volume-tester-pds8p": Phase="Pending", Reason="", readiness=false. Elapsed: 55.592231ms Jan 28 16:35:11.343: INFO: Pod "azuredisk-volume-tester-pds8p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109768472s Jan 28 16:35:13.372: INFO: Pod "azuredisk-volume-tester-pds8p": Phase="Pending", Reason="", readiness=false. Elapsed: 4.138740983s Jan 28 16:35:15.345: INFO: Pod "azuredisk-volume-tester-pds8p": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112408906s Jan 28 16:35:17.344: INFO: Pod "azuredisk-volume-tester-pds8p": Phase="Pending", Reason="", readiness=false. Elapsed: 8.111434606s Jan 28 16:35:19.344: INFO: Pod "azuredisk-volume-tester-pds8p": Phase="Pending", Reason="", readiness=false. Elapsed: 10.111378686s ... skipping 440 lines ... Jan 28 16:50:01.345: INFO: Pod "azuredisk-volume-tester-pds8p": Phase="Pending", Reason="", readiness=false. Elapsed: 14m52.112533889s Jan 28 16:50:03.346: INFO: Pod "azuredisk-volume-tester-pds8p": Phase="Pending", Reason="", readiness=false. Elapsed: 14m54.113279931s Jan 28 16:50:05.345: INFO: Pod "azuredisk-volume-tester-pds8p": Phase="Pending", Reason="", readiness=false. Elapsed: 14m56.111630218s Jan 28 16:50:07.348: INFO: Pod "azuredisk-volume-tester-pds8p": Phase="Pending", Reason="", readiness=false. Elapsed: 14m58.115406642s Jan 28 16:50:09.346: INFO: Pod "azuredisk-volume-tester-pds8p": Phase="Pending", Reason="", readiness=false. Elapsed: 15m0.11296674s Jan 28 16:50:09.401: INFO: Pod "azuredisk-volume-tester-pds8p": Phase="Pending", Reason="", readiness=false. Elapsed: 15m0.16760061s Jan 28 16:50:09.402: INFO: Unexpected error: <*pod.timeoutError | 0xc000daed80>: { msg: "timed out while waiting for pod azuredisk-59/azuredisk-volume-tester-pds8p to be Succeeded or Failed", observedObjects: [ <*v1.Pod | 0xc001005680>{ TypeMeta: {Kind: "", APIVersion: ""}, ObjectMeta: { Name: "azuredisk-volume-tester-pds8p", GenerateName: "azuredisk-volume-tester-", ... skipping 138 lines ... Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package. 
Learn more here: https://onsi.github.io/gomega/#adjusting-output Jan 28 16:50:09.402: FAIL: timed out while waiting for pod azuredisk-59/azuredisk-volume-tester-pds8p to be Succeeded or Failed Full Stack Trace sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*TestPod).WaitForSuccess(0x2253857?) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:823 +0x5d sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*DynamicallyProvisionedVolumeCloningTest).Run(0xc000a19c28, {0x270dda0, 0xc000209d40}, 0x6?) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/dynamically_provisioned_volume_cloning_tester.go:55 +0x237 sigs.k8s.io/azuredisk-csi-driver/test/e2e.(*dynamicProvisioningTestSuite).defineTests.func14() /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:569 +0x55a Jan 28 16:50:09.402: INFO: deleting Pod "azuredisk-59"/"azuredisk-volume-tester-pds8p" Jan 28 16:50:09.496: INFO: Error getting logs for pod azuredisk-volume-tester-pds8p: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-pds8p) [1mSTEP:[0m Deleting pod azuredisk-volume-tester-pds8p in namespace azuredisk-59 [38;5;243m01/28/23 16:50:09.496[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/28/23 16:50:09.607[0m [1mSTEP:[0m checking the PV [38;5;243m01/28/23 16:50:09.661[0m Jan 28 16:50:09.661: INFO: deleting PVC "azuredisk-59"/"pvc-b6lbd" Jan 28 16:50:09.661: INFO: Deleting PersistentVolumeClaim "pvc-b6lbd" [1mSTEP:[0m waiting for claim's PV "pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7" to be deleted [38;5;243m01/28/23 16:50:09.718[0m ... skipping 11 lines ... Jan 28 16:50:50.278: INFO: Claim "azuredisk-59" in namespace "pvc-b6lbd" doesn't exist in the system Jan 28 16:50:50.278: INFO: deleting StorageClass azuredisk-59-disk.csi.azure.com-dynamic-sc-22g7q [1mSTEP:[0m dump namespace information after failure [38;5;243m01/28/23 16:50:50.334[0m [1mSTEP:[0m Destroying namespace "azuredisk-59" for this suite. [38;5;243m01/28/23 16:50:50.334[0m [38;5;243m<< End Captured GinkgoWriter Output[0m [38;5;9mJan 28 16:50:09.402: timed out while waiting for pod azuredisk-59/azuredisk-volume-tester-pds8p to be Succeeded or Failed[0m [38;5;9mIn [1m[It][0m[38;5;9m at: [1m/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:823[0m [1mThere were additional failures detected after the initial failure:[0m [38;5;13m[PANICKED][0m [38;5;13mTest Panicked[0m [38;5;13mIn [1m[DeferCleanup (Each)][0m[38;5;13m at: [1m/usr/local/go/src/runtime/panic.go:260[0m [38;5;13mruntime error: invalid memory address or nil pointer dereference[0m [38;5;13mFull Stack Trace[0m k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:274 +0x5c k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0002123c0) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:271 +0x179 ... skipping 18 lines ... 
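Both cloning cases in this run stall in Pending for the full 15 minutes. For context, a CSI volume clone is expressed as a new PVC whose dataSource names the source claim; provisioning cannot finish until the driver has copied the source disk, so a slow copy shows up exactly as the Pending loop above. A hedged sketch of that object (helper name and size are placeholders, not taken from the suite; the source claim name comes from the log):

package e2eutil

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newClonePVC returns a claim that clones an existing PVC via dataSource.
// scName would be a disk.csi.azure.com StorageClass; sourcePVC is the claim
// being cloned (e.g. "pvc-b6lbd" in the failed test above).
func newClonePVC(scName, sourcePVC string) *corev1.PersistentVolumeClaim {
	return &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pvc-clone-"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &scName,
			DataSource: &corev1.TypedLocalObjectReference{
				Kind: "PersistentVolumeClaim",
				Name: sourcePVC,
			},
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					// Equal to the source size for a plain clone; larger when the
					// test also expects the filesystem to be expanded.
					corev1.ResourceStorage: resource.MustParse("10Gi"),
				},
			},
		},
	}
}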
[1mSTEP:[0m setting up the StorageClass [38;5;243m01/28/23 16:50:51.258[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/28/23 16:50:51.258[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/28/23 16:50:51.314[0m [1mSTEP:[0m creating a PVC [38;5;243m01/28/23 16:50:51.314[0m [1mSTEP:[0m setting up the pod [38;5;243m01/28/23 16:50:51.372[0m [1mSTEP:[0m deploying the pod [38;5;243m01/28/23 16:50:51.372[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/28/23 16:50:51.428[0m Jan 28 16:50:51.428: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-9zrv6" in namespace "azuredisk-2546" to be "Succeeded or Failed" Jan 28 16:50:51.482: INFO: Pod "azuredisk-volume-tester-9zrv6": Phase="Pending", Reason="", readiness=false. Elapsed: 53.98818ms Jan 28 16:50:53.540: INFO: Pod "azuredisk-volume-tester-9zrv6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111314854s Jan 28 16:50:55.538: INFO: Pod "azuredisk-volume-tester-9zrv6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109168303s Jan 28 16:50:57.539: INFO: Pod "azuredisk-volume-tester-9zrv6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110177391s Jan 28 16:50:59.539: INFO: Pod "azuredisk-volume-tester-9zrv6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.110108826s Jan 28 16:51:01.538: INFO: Pod "azuredisk-volume-tester-9zrv6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.109969875s ... skipping 440 lines ... Jan 28 17:05:43.538: INFO: Pod "azuredisk-volume-tester-9zrv6": Phase="Pending", Reason="", readiness=false. Elapsed: 14m52.109963629s Jan 28 17:05:45.538: INFO: Pod "azuredisk-volume-tester-9zrv6": Phase="Pending", Reason="", readiness=false. Elapsed: 14m54.109340475s Jan 28 17:05:47.537: INFO: Pod "azuredisk-volume-tester-9zrv6": Phase="Pending", Reason="", readiness=false. Elapsed: 14m56.108797605s Jan 28 17:05:49.538: INFO: Pod "azuredisk-volume-tester-9zrv6": Phase="Pending", Reason="", readiness=false. Elapsed: 14m58.109141808s Jan 28 17:05:51.537: INFO: Pod "azuredisk-volume-tester-9zrv6": Phase="Pending", Reason="", readiness=false. Elapsed: 15m0.108373372s Jan 28 17:05:51.591: INFO: Pod "azuredisk-volume-tester-9zrv6": Phase="Pending", Reason="", readiness=false. Elapsed: 15m0.162877597s Jan 28 17:05:51.592: INFO: Unexpected error: <*pod.timeoutError | 0xc000b0d6b0>: { msg: "timed out while waiting for pod azuredisk-2546/azuredisk-volume-tester-9zrv6 to be Succeeded or Failed", observedObjects: [ <*v1.Pod | 0xc0008a7b00>{ TypeMeta: {Kind: "", APIVersion: ""}, ObjectMeta: { Name: "azuredisk-volume-tester-9zrv6", GenerateName: "azuredisk-volume-tester-", ... skipping 139 lines ... Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package. Learn more here: https://onsi.github.io/gomega/#adjusting-output Jan 28 17:05:51.592: FAIL: timed out while waiting for pod azuredisk-2546/azuredisk-volume-tester-9zrv6 to be Succeeded or Failed Full Stack Trace sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*TestPod).WaitForSuccess(0x2253857?) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:823 +0x5d sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*DynamicallyProvisionedVolumeCloningTest).Run(0xc000a19c28, {0x270dda0, 0xc0001896c0}, 0x13?) 
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/dynamically_provisioned_volume_cloning_tester.go:55 +0x237 sigs.k8s.io/azuredisk-csi-driver/test/e2e.(*dynamicProvisioningTestSuite).defineTests.func15() /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:612 +0x696 Jan 28 17:05:51.593: INFO: deleting Pod "azuredisk-2546"/"azuredisk-volume-tester-9zrv6" Jan 28 17:05:51.694: INFO: Error getting logs for pod azuredisk-volume-tester-9zrv6: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-9zrv6) [1mSTEP:[0m Deleting pod azuredisk-volume-tester-9zrv6 in namespace azuredisk-2546 [38;5;243m01/28/23 17:05:51.694[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/28/23 17:05:51.806[0m [1mSTEP:[0m checking the PV [38;5;243m01/28/23 17:05:51.86[0m Jan 28 17:05:51.860: INFO: deleting PVC "azuredisk-2546"/"pvc-qw4d2" Jan 28 17:05:51.860: INFO: Deleting PersistentVolumeClaim "pvc-qw4d2" [1mSTEP:[0m waiting for claim's PV "pvc-01ba0080-221a-4049-871f-6c10509a024d" to be deleted [38;5;243m01/28/23 17:05:51.917[0m ... skipping 12 lines ... Jan 28 17:06:42.564: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-2546 to be removed Jan 28 17:06:42.619: INFO: Claim "azuredisk-2546" in namespace "pvc-qw4d2" doesn't exist in the system Jan 28 17:06:42.619: INFO: deleting StorageClass azuredisk-2546-disk.csi.azure.com-dynamic-sc-knl4k [1mSTEP:[0m dump namespace information after failure [38;5;243m01/28/23 17:06:42.677[0m [1mSTEP:[0m Destroying namespace "azuredisk-2546" for this suite. [38;5;243m01/28/23 17:06:42.677[0m [38;5;243m------------------------------[0m [38;5;9m• [FAILED] [952.343 seconds][0m Dynamic Provisioning [38;5;243m/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:41[0m [multi-az] [38;5;243m/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:48[0m [38;5;9m[1m[It] should clone a volume of larger size than the source volume and make sure the filesystem is appropriately adjusted [disk.csi.azure.com][0m [38;5;243m/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:572[0m ... skipping 8 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/28/23 16:50:51.258[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/28/23 16:50:51.258[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/28/23 16:50:51.314[0m [1mSTEP:[0m creating a PVC [38;5;243m01/28/23 16:50:51.314[0m [1mSTEP:[0m setting up the pod [38;5;243m01/28/23 16:50:51.372[0m [1mSTEP:[0m deploying the pod [38;5;243m01/28/23 16:50:51.372[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/28/23 16:50:51.428[0m Jan 28 16:50:51.428: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-9zrv6" in namespace "azuredisk-2546" to be "Succeeded or Failed" Jan 28 16:50:51.482: INFO: Pod "azuredisk-volume-tester-9zrv6": Phase="Pending", Reason="", readiness=false. Elapsed: 53.98818ms Jan 28 16:50:53.540: INFO: Pod "azuredisk-volume-tester-9zrv6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111314854s Jan 28 16:50:55.538: INFO: Pod "azuredisk-volume-tester-9zrv6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109168303s Jan 28 16:50:57.539: INFO: Pod "azuredisk-volume-tester-9zrv6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110177391s Jan 28 16:50:59.539: INFO: Pod "azuredisk-volume-tester-9zrv6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.110108826s Jan 28 16:51:01.538: INFO: Pod "azuredisk-volume-tester-9zrv6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.109969875s ... skipping 440 lines ... Jan 28 17:05:43.538: INFO: Pod "azuredisk-volume-tester-9zrv6": Phase="Pending", Reason="", readiness=false. Elapsed: 14m52.109963629s Jan 28 17:05:45.538: INFO: Pod "azuredisk-volume-tester-9zrv6": Phase="Pending", Reason="", readiness=false. Elapsed: 14m54.109340475s Jan 28 17:05:47.537: INFO: Pod "azuredisk-volume-tester-9zrv6": Phase="Pending", Reason="", readiness=false. Elapsed: 14m56.108797605s Jan 28 17:05:49.538: INFO: Pod "azuredisk-volume-tester-9zrv6": Phase="Pending", Reason="", readiness=false. Elapsed: 14m58.109141808s Jan 28 17:05:51.537: INFO: Pod "azuredisk-volume-tester-9zrv6": Phase="Pending", Reason="", readiness=false. Elapsed: 15m0.108373372s Jan 28 17:05:51.591: INFO: Pod "azuredisk-volume-tester-9zrv6": Phase="Pending", Reason="", readiness=false. Elapsed: 15m0.162877597s Jan 28 17:05:51.592: INFO: Unexpected error: <*pod.timeoutError | 0xc000b0d6b0>: { msg: "timed out while waiting for pod azuredisk-2546/azuredisk-volume-tester-9zrv6 to be Succeeded or Failed", observedObjects: [ <*v1.Pod | 0xc0008a7b00>{ TypeMeta: {Kind: "", APIVersion: ""}, ObjectMeta: { Name: "azuredisk-volume-tester-9zrv6", GenerateName: "azuredisk-volume-tester-", ... skipping 139 lines ... Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package. Learn more here: https://onsi.github.io/gomega/#adjusting-output Jan 28 17:05:51.592: FAIL: timed out while waiting for pod azuredisk-2546/azuredisk-volume-tester-9zrv6 to be Succeeded or Failed Full Stack Trace sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*TestPod).WaitForSuccess(0x2253857?) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:823 +0x5d sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*DynamicallyProvisionedVolumeCloningTest).Run(0xc000a19c28, {0x270dda0, 0xc0001896c0}, 0x13?) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/dynamically_provisioned_volume_cloning_tester.go:55 +0x237 sigs.k8s.io/azuredisk-csi-driver/test/e2e.(*dynamicProvisioningTestSuite).defineTests.func15() /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:612 +0x696 Jan 28 17:05:51.593: INFO: deleting Pod "azuredisk-2546"/"azuredisk-volume-tester-9zrv6" Jan 28 17:05:51.694: INFO: Error getting logs for pod azuredisk-volume-tester-9zrv6: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-9zrv6) [1mSTEP:[0m Deleting pod azuredisk-volume-tester-9zrv6 in namespace azuredisk-2546 [38;5;243m01/28/23 17:05:51.694[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/28/23 17:05:51.806[0m [1mSTEP:[0m checking the PV [38;5;243m01/28/23 17:05:51.86[0m Jan 28 17:05:51.860: INFO: deleting PVC "azuredisk-2546"/"pvc-qw4d2" Jan 28 17:05:51.860: INFO: Deleting PersistentVolumeClaim "pvc-qw4d2" [1mSTEP:[0m waiting for claim's PV "pvc-01ba0080-221a-4049-871f-6c10509a024d" to be deleted [38;5;243m01/28/23 17:05:51.917[0m ... skipping 13 lines ... 
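The teardown records around here ("deleting PVC ...", "waiting for claim's PV ... to be deleted", "deleting StorageClass ...") follow a common pattern: delete the claim, then poll until the bound PV object disappears, confirming the underlying Azure disk is released before the StorageClass and namespace are destroyed. A rough sketch of that pattern, using an assumed helper and assumed timeout values rather than the suite's exact ones:

package e2eutil

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// deletePVCAndWaitForPV deletes a claim and then waits for its bound PV to be
// removed, i.e. until the CSI driver has actually deleted the backing disk.
func deletePVCAndWaitForPV(c kubernetes.Interface, ns, pvcName, pvName string) error {
	if err := c.CoreV1().PersistentVolumeClaims(ns).Delete(context.TODO(), pvcName, metav1.DeleteOptions{}); err != nil {
		return err
	}
	return wait.PollImmediate(5*time.Second, 10*time.Minute, func() (bool, error) {
		_, err := c.CoreV1().PersistentVolumes().Get(context.TODO(), pvName, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // PV gone: disk released
		}
		return false, err // err == nil means the PV still exists; keep polling
	})
}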
Jan 28 17:06:42.619: INFO: Claim "azuredisk-2546" in namespace "pvc-qw4d2" doesn't exist in the system Jan 28 17:06:42.619: INFO: deleting StorageClass azuredisk-2546-disk.csi.azure.com-dynamic-sc-knl4k [1mSTEP:[0m dump namespace information after failure [38;5;243m01/28/23 17:06:42.677[0m [1mSTEP:[0m Destroying namespace "azuredisk-2546" for this suite. [38;5;243m01/28/23 17:06:42.677[0m [38;5;243m<< End Captured GinkgoWriter Output[0m [38;5;9mJan 28 17:05:51.592: timed out while waiting for pod azuredisk-2546/azuredisk-volume-tester-9zrv6 to be Succeeded or Failed[0m [38;5;9mIn [1m[It][0m[38;5;9m at: [1m/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:823[0m [1mThere were additional failures detected after the initial failure:[0m [38;5;13m[PANICKED][0m [38;5;13mTest Panicked[0m [38;5;13mIn [1m[DeferCleanup (Each)][0m[38;5;13m at: [1m/usr/local/go/src/runtime/panic.go:260[0m [38;5;13mruntime error: invalid memory address or nil pointer dereference[0m [38;5;13mFull Stack Trace[0m k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:274 +0x5c k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0002123c0) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:271 +0x179 ... skipping 28 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/28/23 17:06:43.841[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/28/23 17:06:43.841[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/28/23 17:06:43.899[0m [1mSTEP:[0m creating a PVC [38;5;243m01/28/23 17:06:43.899[0m [1mSTEP:[0m setting up the pod [38;5;243m01/28/23 17:06:43.957[0m [1mSTEP:[0m deploying the pod [38;5;243m01/28/23 17:06:43.957[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/28/23 17:06:44.013[0m Jan 28 17:06:44.013: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-dvl4k" in namespace "azuredisk-1598" to be "Succeeded or Failed" Jan 28 17:06:44.068: INFO: Pod "azuredisk-volume-tester-dvl4k": Phase="Pending", Reason="", readiness=false. Elapsed: 54.694933ms Jan 28 17:06:46.125: INFO: Pod "azuredisk-volume-tester-dvl4k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111560367s Jan 28 17:06:48.123: INFO: Pod "azuredisk-volume-tester-dvl4k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110057789s Jan 28 17:06:50.124: INFO: Pod "azuredisk-volume-tester-dvl4k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110452222s Jan 28 17:06:52.125: INFO: Pod "azuredisk-volume-tester-dvl4k": Phase="Pending", Reason="", readiness=false. Elapsed: 8.111511055s Jan 28 17:06:54.127: INFO: Pod "azuredisk-volume-tester-dvl4k": Phase="Pending", Reason="", readiness=false. Elapsed: 10.113455587s ... skipping 9 lines ... Jan 28 17:07:14.124: INFO: Pod "azuredisk-volume-tester-dvl4k": Phase="Pending", Reason="", readiness=false. Elapsed: 30.110446753s Jan 28 17:07:16.125: INFO: Pod "azuredisk-volume-tester-dvl4k": Phase="Pending", Reason="", readiness=false. Elapsed: 32.111254157s Jan 28 17:07:18.125: INFO: Pod "azuredisk-volume-tester-dvl4k": Phase="Pending", Reason="", readiness=false. Elapsed: 34.111894555s Jan 28 17:07:20.126: INFO: Pod "azuredisk-volume-tester-dvl4k": Phase="Pending", Reason="", readiness=false. 
Elapsed: 36.112499681s Jan 28 17:07:22.124: INFO: Pod "azuredisk-volume-tester-dvl4k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.110768457s [1mSTEP:[0m Saw pod success [38;5;243m01/28/23 17:07:22.124[0m Jan 28 17:07:22.124: INFO: Pod "azuredisk-volume-tester-dvl4k" satisfied condition "Succeeded or Failed" Jan 28 17:07:22.124: INFO: deleting Pod "azuredisk-1598"/"azuredisk-volume-tester-dvl4k" Jan 28 17:07:22.210: INFO: Pod azuredisk-volume-tester-dvl4k has the following logs: hello world hello world hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-dvl4k in namespace azuredisk-1598 [38;5;243m01/28/23 17:07:22.21[0m ... skipping 69 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/28/23 17:06:43.841[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/28/23 17:06:43.841[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/28/23 17:06:43.899[0m [1mSTEP:[0m creating a PVC [38;5;243m01/28/23 17:06:43.899[0m [1mSTEP:[0m setting up the pod [38;5;243m01/28/23 17:06:43.957[0m [1mSTEP:[0m deploying the pod [38;5;243m01/28/23 17:06:43.957[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/28/23 17:06:44.013[0m Jan 28 17:06:44.013: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-dvl4k" in namespace "azuredisk-1598" to be "Succeeded or Failed" Jan 28 17:06:44.068: INFO: Pod "azuredisk-volume-tester-dvl4k": Phase="Pending", Reason="", readiness=false. Elapsed: 54.694933ms Jan 28 17:06:46.125: INFO: Pod "azuredisk-volume-tester-dvl4k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111560367s Jan 28 17:06:48.123: INFO: Pod "azuredisk-volume-tester-dvl4k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110057789s Jan 28 17:06:50.124: INFO: Pod "azuredisk-volume-tester-dvl4k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110452222s Jan 28 17:06:52.125: INFO: Pod "azuredisk-volume-tester-dvl4k": Phase="Pending", Reason="", readiness=false. Elapsed: 8.111511055s Jan 28 17:06:54.127: INFO: Pod "azuredisk-volume-tester-dvl4k": Phase="Pending", Reason="", readiness=false. Elapsed: 10.113455587s ... skipping 9 lines ... Jan 28 17:07:14.124: INFO: Pod "azuredisk-volume-tester-dvl4k": Phase="Pending", Reason="", readiness=false. Elapsed: 30.110446753s Jan 28 17:07:16.125: INFO: Pod "azuredisk-volume-tester-dvl4k": Phase="Pending", Reason="", readiness=false. Elapsed: 32.111254157s Jan 28 17:07:18.125: INFO: Pod "azuredisk-volume-tester-dvl4k": Phase="Pending", Reason="", readiness=false. Elapsed: 34.111894555s Jan 28 17:07:20.126: INFO: Pod "azuredisk-volume-tester-dvl4k": Phase="Pending", Reason="", readiness=false. Elapsed: 36.112499681s Jan 28 17:07:22.124: INFO: Pod "azuredisk-volume-tester-dvl4k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.110768457s [1mSTEP:[0m Saw pod success [38;5;243m01/28/23 17:07:22.124[0m Jan 28 17:07:22.124: INFO: Pod "azuredisk-volume-tester-dvl4k" satisfied condition "Succeeded or Failed" Jan 28 17:07:22.124: INFO: deleting Pod "azuredisk-1598"/"azuredisk-volume-tester-dvl4k" Jan 28 17:07:22.210: INFO: Pod azuredisk-volume-tester-dvl4k has the following logs: hello world hello world hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-dvl4k in namespace azuredisk-1598 [38;5;243m01/28/23 17:07:22.21[0m ... skipping 63 lines ... 
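Every scenario in this log reuses the same shape of tester pod: a run-once busybox container that writes into the mounted disk (the /mnt/test-1/data path seen in the read-only case earlier) and exits, so success is simply Phase=Succeeded plus the expected "hello world" lines in the pod logs. An assumed approximation of that pod, not the suite's literal spec:

package e2eutil

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newVolumeTesterPod builds a run-once busybox pod that appends to a file on
// the mounted Azure disk and prints it back, so the test can read the
// "hello world" lines from the pod logs after Phase=Succeeded.
func newVolumeTesterPod(claimName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "azuredisk-volume-tester-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name: "volume-tester",
				// e2e busybox image referenced elsewhere in this run's output.
				Image:   "registry.k8s.io/e2e-test-images/busybox:1.29-4",
				Command: []string{"/bin/sh", "-c", "echo 'hello world' >> /mnt/test-1/data && cat /mnt/test-1/data"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume-1",
					MountPath: "/mnt/test-1",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume-1",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ClaimName: claimName},
				},
			}},
		},
	}
}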
[1mSTEP:[0m setting up the StorageClass [38;5;243m01/28/23 17:08:25.04[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/28/23 17:08:25.04[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/28/23 17:08:25.096[0m [1mSTEP:[0m creating a PVC [38;5;243m01/28/23 17:08:25.096[0m [1mSTEP:[0m setting up the pod [38;5;243m01/28/23 17:08:25.152[0m [1mSTEP:[0m deploying the pod [38;5;243m01/28/23 17:08:25.152[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/28/23 17:08:25.21[0m Jan 28 17:08:25.210: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-q5sf9" in namespace "azuredisk-3410" to be "Succeeded or Failed" Jan 28 17:08:25.264: INFO: Pod "azuredisk-volume-tester-q5sf9": Phase="Pending", Reason="", readiness=false. Elapsed: 53.899763ms Jan 28 17:08:27.321: INFO: Pod "azuredisk-volume-tester-q5sf9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111062852s Jan 28 17:08:29.322: INFO: Pod "azuredisk-volume-tester-q5sf9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112345708s Jan 28 17:08:31.321: INFO: Pod "azuredisk-volume-tester-q5sf9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110901472s Jan 28 17:08:33.321: INFO: Pod "azuredisk-volume-tester-q5sf9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.111311565s Jan 28 17:08:35.320: INFO: Pod "azuredisk-volume-tester-q5sf9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.109527302s ... skipping 10 lines ... Jan 28 17:08:57.319: INFO: Pod "azuredisk-volume-tester-q5sf9": Phase="Pending", Reason="", readiness=false. Elapsed: 32.109246935s Jan 28 17:08:59.320: INFO: Pod "azuredisk-volume-tester-q5sf9": Phase="Pending", Reason="", readiness=false. Elapsed: 34.110196638s Jan 28 17:09:01.320: INFO: Pod "azuredisk-volume-tester-q5sf9": Phase="Pending", Reason="", readiness=false. Elapsed: 36.109797657s Jan 28 17:09:03.321: INFO: Pod "azuredisk-volume-tester-q5sf9": Phase="Running", Reason="", readiness=true. Elapsed: 38.110663576s Jan 28 17:09:05.320: INFO: Pod "azuredisk-volume-tester-q5sf9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.109638306s [1mSTEP:[0m Saw pod success [38;5;243m01/28/23 17:09:05.32[0m Jan 28 17:09:05.320: INFO: Pod "azuredisk-volume-tester-q5sf9" satisfied condition "Succeeded or Failed" Jan 28 17:09:05.320: INFO: deleting Pod "azuredisk-3410"/"azuredisk-volume-tester-q5sf9" Jan 28 17:09:05.406: INFO: Pod azuredisk-volume-tester-q5sf9 has the following logs: 100+0 records in 100+0 records out 104857600 bytes (100.0MB) copied, 0.073710 seconds, 1.3GB/s hello world ... skipping 59 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/28/23 17:08:25.04[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/28/23 17:08:25.04[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/28/23 17:08:25.096[0m [1mSTEP:[0m creating a PVC [38;5;243m01/28/23 17:08:25.096[0m [1mSTEP:[0m setting up the pod [38;5;243m01/28/23 17:08:25.152[0m [1mSTEP:[0m deploying the pod [38;5;243m01/28/23 17:08:25.152[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/28/23 17:08:25.21[0m Jan 28 17:08:25.210: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-q5sf9" in namespace "azuredisk-3410" to be "Succeeded or Failed" Jan 28 17:08:25.264: INFO: Pod "azuredisk-volume-tester-q5sf9": Phase="Pending", Reason="", readiness=false. Elapsed: 53.899763ms Jan 28 17:08:27.321: INFO: Pod "azuredisk-volume-tester-q5sf9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.111062852s Jan 28 17:08:29.322: INFO: Pod "azuredisk-volume-tester-q5sf9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112345708s Jan 28 17:08:31.321: INFO: Pod "azuredisk-volume-tester-q5sf9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110901472s Jan 28 17:08:33.321: INFO: Pod "azuredisk-volume-tester-q5sf9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.111311565s Jan 28 17:08:35.320: INFO: Pod "azuredisk-volume-tester-q5sf9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.109527302s ... skipping 10 lines ... Jan 28 17:08:57.319: INFO: Pod "azuredisk-volume-tester-q5sf9": Phase="Pending", Reason="", readiness=false. Elapsed: 32.109246935s Jan 28 17:08:59.320: INFO: Pod "azuredisk-volume-tester-q5sf9": Phase="Pending", Reason="", readiness=false. Elapsed: 34.110196638s Jan 28 17:09:01.320: INFO: Pod "azuredisk-volume-tester-q5sf9": Phase="Pending", Reason="", readiness=false. Elapsed: 36.109797657s Jan 28 17:09:03.321: INFO: Pod "azuredisk-volume-tester-q5sf9": Phase="Running", Reason="", readiness=true. Elapsed: 38.110663576s Jan 28 17:09:05.320: INFO: Pod "azuredisk-volume-tester-q5sf9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.109638306s [1mSTEP:[0m Saw pod success [38;5;243m01/28/23 17:09:05.32[0m Jan 28 17:09:05.320: INFO: Pod "azuredisk-volume-tester-q5sf9" satisfied condition "Succeeded or Failed" Jan 28 17:09:05.320: INFO: deleting Pod "azuredisk-3410"/"azuredisk-volume-tester-q5sf9" Jan 28 17:09:05.406: INFO: Pod azuredisk-volume-tester-q5sf9 has the following logs: 100+0 records in 100+0 records out 104857600 bytes (100.0MB) copied, 0.073710 seconds, 1.3GB/s hello world ... skipping 52 lines ... Jan 28 17:10:27.995: INFO: >>> kubeConfig: /root/tmp3797534717/kubeconfig/kubeconfig.westus2.json [1mSTEP:[0m setting up the StorageClass [38;5;243m01/28/23 17:10:27.996[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/28/23 17:10:27.996[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/28/23 17:10:28.057[0m [1mSTEP:[0m creating a PVC [38;5;243m01/28/23 17:10:28.057[0m [1mSTEP:[0m deploying the pod [38;5;243m01/28/23 17:10:28.119[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/28/23 17:10:28.175[0m Jan 28 17:10:28.175: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-n8g7v" in namespace "azuredisk-8582" to be "Succeeded or Failed" Jan 28 17:10:28.229: INFO: Pod "azuredisk-volume-tester-n8g7v": Phase="Pending", Reason="", readiness=false. Elapsed: 53.879604ms Jan 28 17:10:30.286: INFO: Pod "azuredisk-volume-tester-n8g7v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110487946s Jan 28 17:10:32.288: INFO: Pod "azuredisk-volume-tester-n8g7v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112723784s Jan 28 17:10:34.288: INFO: Pod "azuredisk-volume-tester-n8g7v": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112916509s Jan 28 17:10:36.286: INFO: Pod "azuredisk-volume-tester-n8g7v": Phase="Pending", Reason="", readiness=false. Elapsed: 8.110819437s Jan 28 17:10:38.287: INFO: Pod "azuredisk-volume-tester-n8g7v": Phase="Pending", Reason="", readiness=false. Elapsed: 10.111322783s ... skipping 2 lines ... Jan 28 17:10:44.284: INFO: Pod "azuredisk-volume-tester-n8g7v": Phase="Pending", Reason="", readiness=false. Elapsed: 16.108623985s Jan 28 17:10:46.293: INFO: Pod "azuredisk-volume-tester-n8g7v": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.117922644s Jan 28 17:10:48.285: INFO: Pod "azuredisk-volume-tester-n8g7v": Phase="Pending", Reason="", readiness=false. Elapsed: 20.110074618s Jan 28 17:10:50.286: INFO: Pod "azuredisk-volume-tester-n8g7v": Phase="Pending", Reason="", readiness=false. Elapsed: 22.110572689s Jan 28 17:10:52.288: INFO: Pod "azuredisk-volume-tester-n8g7v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.112453967s [1mSTEP:[0m Saw pod success [38;5;243m01/28/23 17:10:52.288[0m Jan 28 17:10:52.288: INFO: Pod "azuredisk-volume-tester-n8g7v" satisfied condition "Succeeded or Failed" [1mSTEP:[0m Checking Prow test resource group [38;5;243m01/28/23 17:10:52.288[0m 2023/01/28 17:10:52 Running in Prow, converting AZURE_CREDENTIALS to AZURE_CREDENTIAL_FILE 2023/01/28 17:10:52 Reading credentials file /etc/azure-cred/credentials [1mSTEP:[0m Prow test resource group: kubetest-g59foizt [38;5;243m01/28/23 17:10:52.289[0m [1mSTEP:[0m Creating external resource group: azuredisk-csi-driver-test-b803714a-9f2e-11ed-9172-ae7499b6df38 [38;5;243m01/28/23 17:10:52.289[0m [1mSTEP:[0m creating volume snapshot class with external rg azuredisk-csi-driver-test-b803714a-9f2e-11ed-9172-ae7499b6df38 [38;5;243m01/28/23 17:10:53.877[0m ... skipping 5 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/28/23 17:11:09.075[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/28/23 17:11:09.075[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/28/23 17:11:09.132[0m [1mSTEP:[0m creating a PVC [38;5;243m01/28/23 17:11:09.132[0m [1mSTEP:[0m setting up the pod [38;5;243m01/28/23 17:11:09.192[0m [1mSTEP:[0m deploying a pod with a volume restored from the snapshot [38;5;243m01/28/23 17:11:09.192[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/28/23 17:11:09.247[0m Jan 28 17:11:09.247: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-xchvd" in namespace "azuredisk-8582" to be "Succeeded or Failed" Jan 28 17:11:09.301: INFO: Pod "azuredisk-volume-tester-xchvd": Phase="Pending", Reason="", readiness=false. Elapsed: 54.480122ms Jan 28 17:11:11.357: INFO: Pod "azuredisk-volume-tester-xchvd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109598087s Jan 28 17:11:13.357: INFO: Pod "azuredisk-volume-tester-xchvd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109648565s Jan 28 17:11:15.357: INFO: Pod "azuredisk-volume-tester-xchvd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109837487s Jan 28 17:11:17.357: INFO: Pod "azuredisk-volume-tester-xchvd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.110072624s Jan 28 17:11:19.358: INFO: Pod "azuredisk-volume-tester-xchvd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.11070097s ... skipping 2 lines ... Jan 28 17:11:25.359: INFO: Pod "azuredisk-volume-tester-xchvd": Phase="Pending", Reason="", readiness=false. Elapsed: 16.111727136s Jan 28 17:11:27.358: INFO: Pod "azuredisk-volume-tester-xchvd": Phase="Pending", Reason="", readiness=false. Elapsed: 18.110621236s Jan 28 17:11:29.358: INFO: Pod "azuredisk-volume-tester-xchvd": Phase="Pending", Reason="", readiness=false. Elapsed: 20.11118805s Jan 28 17:11:31.358: INFO: Pod "azuredisk-volume-tester-xchvd": Phase="Pending", Reason="", readiness=false. Elapsed: 22.110953921s Jan 28 17:11:33.356: INFO: Pod "azuredisk-volume-tester-xchvd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.109319786s [1mSTEP:[0m Saw pod success [38;5;243m01/28/23 17:11:33.356[0m Jan 28 17:11:33.356: INFO: Pod "azuredisk-volume-tester-xchvd" satisfied condition "Succeeded or Failed" Jan 28 17:11:33.356: INFO: deleting Pod "azuredisk-8582"/"azuredisk-volume-tester-xchvd" Jan 28 17:11:33.452: INFO: Pod azuredisk-volume-tester-xchvd has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-xchvd in namespace azuredisk-8582 [38;5;243m01/28/23 17:11:33.452[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/28/23 17:11:33.567[0m [1mSTEP:[0m checking the PV [38;5;243m01/28/23 17:11:33.622[0m ... skipping 50 lines ... Jan 28 17:10:27.995: INFO: >>> kubeConfig: /root/tmp3797534717/kubeconfig/kubeconfig.westus2.json [1mSTEP:[0m setting up the StorageClass [38;5;243m01/28/23 17:10:27.996[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/28/23 17:10:27.996[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/28/23 17:10:28.057[0m [1mSTEP:[0m creating a PVC [38;5;243m01/28/23 17:10:28.057[0m [1mSTEP:[0m deploying the pod [38;5;243m01/28/23 17:10:28.119[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/28/23 17:10:28.175[0m Jan 28 17:10:28.175: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-n8g7v" in namespace "azuredisk-8582" to be "Succeeded or Failed" Jan 28 17:10:28.229: INFO: Pod "azuredisk-volume-tester-n8g7v": Phase="Pending", Reason="", readiness=false. Elapsed: 53.879604ms Jan 28 17:10:30.286: INFO: Pod "azuredisk-volume-tester-n8g7v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110487946s Jan 28 17:10:32.288: INFO: Pod "azuredisk-volume-tester-n8g7v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112723784s Jan 28 17:10:34.288: INFO: Pod "azuredisk-volume-tester-n8g7v": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112916509s Jan 28 17:10:36.286: INFO: Pod "azuredisk-volume-tester-n8g7v": Phase="Pending", Reason="", readiness=false. Elapsed: 8.110819437s Jan 28 17:10:38.287: INFO: Pod "azuredisk-volume-tester-n8g7v": Phase="Pending", Reason="", readiness=false. Elapsed: 10.111322783s ... skipping 2 lines ... Jan 28 17:10:44.284: INFO: Pod "azuredisk-volume-tester-n8g7v": Phase="Pending", Reason="", readiness=false. Elapsed: 16.108623985s Jan 28 17:10:46.293: INFO: Pod "azuredisk-volume-tester-n8g7v": Phase="Pending", Reason="", readiness=false. Elapsed: 18.117922644s Jan 28 17:10:48.285: INFO: Pod "azuredisk-volume-tester-n8g7v": Phase="Pending", Reason="", readiness=false. Elapsed: 20.110074618s Jan 28 17:10:50.286: INFO: Pod "azuredisk-volume-tester-n8g7v": Phase="Pending", Reason="", readiness=false. Elapsed: 22.110572689s Jan 28 17:10:52.288: INFO: Pod "azuredisk-volume-tester-n8g7v": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.112453967s [1mSTEP:[0m Saw pod success [38;5;243m01/28/23 17:10:52.288[0m Jan 28 17:10:52.288: INFO: Pod "azuredisk-volume-tester-n8g7v" satisfied condition "Succeeded or Failed" [1mSTEP:[0m Checking Prow test resource group [38;5;243m01/28/23 17:10:52.288[0m [1mSTEP:[0m Prow test resource group: kubetest-g59foizt [38;5;243m01/28/23 17:10:52.289[0m [1mSTEP:[0m Creating external resource group: azuredisk-csi-driver-test-b803714a-9f2e-11ed-9172-ae7499b6df38 [38;5;243m01/28/23 17:10:52.289[0m [1mSTEP:[0m creating volume snapshot class with external rg azuredisk-csi-driver-test-b803714a-9f2e-11ed-9172-ae7499b6df38 [38;5;243m01/28/23 17:10:53.877[0m [1mSTEP:[0m setting up the VolumeSnapshotClass [38;5;243m01/28/23 17:10:53.877[0m [1mSTEP:[0m creating a VolumeSnapshotClass [38;5;243m01/28/23 17:10:53.877[0m ... skipping 3 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/28/23 17:11:09.075[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/28/23 17:11:09.075[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/28/23 17:11:09.132[0m [1mSTEP:[0m creating a PVC [38;5;243m01/28/23 17:11:09.132[0m [1mSTEP:[0m setting up the pod [38;5;243m01/28/23 17:11:09.192[0m [1mSTEP:[0m deploying a pod with a volume restored from the snapshot [38;5;243m01/28/23 17:11:09.192[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/28/23 17:11:09.247[0m Jan 28 17:11:09.247: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-xchvd" in namespace "azuredisk-8582" to be "Succeeded or Failed" Jan 28 17:11:09.301: INFO: Pod "azuredisk-volume-tester-xchvd": Phase="Pending", Reason="", readiness=false. Elapsed: 54.480122ms Jan 28 17:11:11.357: INFO: Pod "azuredisk-volume-tester-xchvd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109598087s Jan 28 17:11:13.357: INFO: Pod "azuredisk-volume-tester-xchvd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109648565s Jan 28 17:11:15.357: INFO: Pod "azuredisk-volume-tester-xchvd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109837487s Jan 28 17:11:17.357: INFO: Pod "azuredisk-volume-tester-xchvd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.110072624s Jan 28 17:11:19.358: INFO: Pod "azuredisk-volume-tester-xchvd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.11070097s ... skipping 2 lines ... Jan 28 17:11:25.359: INFO: Pod "azuredisk-volume-tester-xchvd": Phase="Pending", Reason="", readiness=false. Elapsed: 16.111727136s Jan 28 17:11:27.358: INFO: Pod "azuredisk-volume-tester-xchvd": Phase="Pending", Reason="", readiness=false. Elapsed: 18.110621236s Jan 28 17:11:29.358: INFO: Pod "azuredisk-volume-tester-xchvd": Phase="Pending", Reason="", readiness=false. Elapsed: 20.11118805s Jan 28 17:11:31.358: INFO: Pod "azuredisk-volume-tester-xchvd": Phase="Pending", Reason="", readiness=false. Elapsed: 22.110953921s Jan 28 17:11:33.356: INFO: Pod "azuredisk-volume-tester-xchvd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.109319786s [1mSTEP:[0m Saw pod success [38;5;243m01/28/23 17:11:33.356[0m Jan 28 17:11:33.356: INFO: Pod "azuredisk-volume-tester-xchvd" satisfied condition "Succeeded or Failed" Jan 28 17:11:33.356: INFO: deleting Pod "azuredisk-8582"/"azuredisk-volume-tester-xchvd" Jan 28 17:11:33.452: INFO: Pod azuredisk-volume-tester-xchvd has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-xchvd in namespace azuredisk-8582 [38;5;243m01/28/23 17:11:33.452[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/28/23 17:11:33.567[0m [1mSTEP:[0m checking the PV [38;5;243m01/28/23 17:11:33.622[0m ... skipping 49 lines ... Jan 28 17:13:42.799: INFO: >>> kubeConfig: /root/tmp3797534717/kubeconfig/kubeconfig.westus2.json [1mSTEP:[0m setting up the StorageClass [38;5;243m01/28/23 17:13:42.8[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/28/23 17:13:42.8[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/28/23 17:13:42.857[0m [1mSTEP:[0m creating a PVC [38;5;243m01/28/23 17:13:42.857[0m [1mSTEP:[0m deploying the pod [38;5;243m01/28/23 17:13:42.915[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/28/23 17:13:42.971[0m Jan 28 17:13:42.971: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-v5twk" in namespace "azuredisk-7726" to be "Succeeded or Failed" Jan 28 17:13:43.026: INFO: Pod "azuredisk-volume-tester-v5twk": Phase="Pending", Reason="", readiness=false. Elapsed: 55.428539ms Jan 28 17:13:45.083: INFO: Pod "azuredisk-volume-tester-v5twk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11169452s Jan 28 17:13:47.095: INFO: Pod "azuredisk-volume-tester-v5twk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123985161s Jan 28 17:13:49.082: INFO: Pod "azuredisk-volume-tester-v5twk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11126795s Jan 28 17:13:51.083: INFO: Pod "azuredisk-volume-tester-v5twk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.112082121s Jan 28 17:13:53.082: INFO: Pod "azuredisk-volume-tester-v5twk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.111257428s ... skipping 2 lines ... Jan 28 17:13:59.084: INFO: Pod "azuredisk-volume-tester-v5twk": Phase="Pending", Reason="", readiness=false. Elapsed: 16.113301866s Jan 28 17:14:01.082: INFO: Pod "azuredisk-volume-tester-v5twk": Phase="Pending", Reason="", readiness=false. Elapsed: 18.111599681s Jan 28 17:14:03.086: INFO: Pod "azuredisk-volume-tester-v5twk": Phase="Pending", Reason="", readiness=false. Elapsed: 20.115431709s Jan 28 17:14:05.081: INFO: Pod "azuredisk-volume-tester-v5twk": Phase="Pending", Reason="", readiness=false. Elapsed: 22.110244077s Jan 28 17:14:07.082: INFO: Pod "azuredisk-volume-tester-v5twk": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.111427161s [1mSTEP:[0m Saw pod success [38;5;243m01/28/23 17:14:07.082[0m Jan 28 17:14:07.083: INFO: Pod "azuredisk-volume-tester-v5twk" satisfied condition "Succeeded or Failed" [1mSTEP:[0m Checking Prow test resource group [38;5;243m01/28/23 17:14:07.083[0m 2023/01/28 17:14:07 Running in Prow, converting AZURE_CREDENTIALS to AZURE_CREDENTIAL_FILE 2023/01/28 17:14:07 Reading credentials file /etc/azure-cred/credentials [1mSTEP:[0m Prow test resource group: kubetest-g59foizt [38;5;243m01/28/23 17:14:07.083[0m [1mSTEP:[0m Creating external resource group: azuredisk-csi-driver-test-2c1eb792-9f2f-11ed-9172-ae7499b6df38 [38;5;243m01/28/23 17:14:07.084[0m [1mSTEP:[0m creating volume snapshot class with external rg azuredisk-csi-driver-test-2c1eb792-9f2f-11ed-9172-ae7499b6df38 [38;5;243m01/28/23 17:14:07.91[0m ... skipping 12 lines ... [1mSTEP:[0m creating a StorageClass [38;5;243m01/28/23 17:14:25.268[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/28/23 17:14:25.324[0m [1mSTEP:[0m creating a PVC [38;5;243m01/28/23 17:14:25.324[0m [1mSTEP:[0m setting up the pod [38;5;243m01/28/23 17:14:25.384[0m [1mSTEP:[0m Set pod anti-affinity to make sure two pods are scheduled on different nodes [38;5;243m01/28/23 17:14:25.385[0m [1mSTEP:[0m deploying a pod with a volume restored from the snapshot [38;5;243m01/28/23 17:14:25.385[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/28/23 17:14:25.441[0m Jan 28 17:14:25.441: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-928j5" in namespace "azuredisk-7726" to be "Succeeded or Failed" Jan 28 17:14:25.495: INFO: Pod "azuredisk-volume-tester-928j5": Phase="Pending", Reason="", readiness=false. Elapsed: 54.39609ms Jan 28 17:14:27.554: INFO: Pod "azuredisk-volume-tester-928j5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11304283s Jan 28 17:14:29.550: INFO: Pod "azuredisk-volume-tester-928j5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109504955s Jan 28 17:14:31.552: INFO: Pod "azuredisk-volume-tester-928j5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.111270806s Jan 28 17:14:33.554: INFO: Pod "azuredisk-volume-tester-928j5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.112693777s Jan 28 17:14:35.552: INFO: Pod "azuredisk-volume-tester-928j5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.111533784s Jan 28 17:14:37.551: INFO: Pod "azuredisk-volume-tester-928j5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.110148153s Jan 28 17:14:39.552: INFO: Pod "azuredisk-volume-tester-928j5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.111414488s Jan 28 17:14:41.551: INFO: Pod "azuredisk-volume-tester-928j5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.110144143s Jan 28 17:14:43.552: INFO: Pod "azuredisk-volume-tester-928j5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.111105762s Jan 28 17:14:45.553: INFO: Pod "azuredisk-volume-tester-928j5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.11187302s Jan 28 17:14:47.552: INFO: Pod "azuredisk-volume-tester-928j5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.111203094s Jan 28 17:14:49.552: INFO: Pod "azuredisk-volume-tester-928j5": Phase="Failed", Reason="", readiness=false. 
Elapsed: 24.111442686s Jan 28 17:14:49.553: INFO: Unexpected error: <*fmt.wrapError | 0xc000bc59a0>: { msg: "error while waiting for pod azuredisk-7726/azuredisk-volume-tester-928j5 to be Succeeded or Failed: pod \"azuredisk-volume-tester-928j5\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 17:14:28 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 17:14:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 17:14:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 17:14:28 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.4 PodIP:10.248.0.14 PodIPs:[{IP:10.248.0.14}] StartTime:2023-01-28 17:14:28 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-28 17:14:47 +0000 UTC,FinishedAt:2023-01-28 17:14:47 +0000 UTC,ContainerID:containerd://205c17f94466eb47090eda522921b873ad9329efffaf4686e442e932ebcd9419,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 ContainerID:containerd://205c17f94466eb47090eda522921b873ad9329efffaf4686e442e932ebcd9419 Started:0xc00071d97f}] QOSClass:BestEffort EphemeralContainerStatuses:[]}", err: <*errors.errorString | 0xc0004e0d80>{ s: "pod \"azuredisk-volume-tester-928j5\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 17:14:28 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 17:14:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 17:14:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 17:14:28 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.4 PodIP:10.248.0.14 PodIPs:[{IP:10.248.0.14}] StartTime:2023-01-28 17:14:28 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-28 17:14:47 +0000 UTC,FinishedAt:2023-01-28 17:14:47 +0000 UTC,ContainerID:containerd://205c17f94466eb47090eda522921b873ad9329efffaf4686e442e932ebcd9419,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
ContainerID:containerd://205c17f94466eb47090eda522921b873ad9329efffaf4686e442e932ebcd9419 Started:0xc00071d97f}] QOSClass:BestEffort EphemeralContainerStatuses:[]}", }, } Jan 28 17:14:49.553: FAIL: error while waiting for pod azuredisk-7726/azuredisk-volume-tester-928j5 to be Succeeded or Failed: pod "azuredisk-volume-tester-928j5" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 17:14:28 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 17:14:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 17:14:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 17:14:28 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.4 PodIP:10.248.0.14 PodIPs:[{IP:10.248.0.14}] StartTime:2023-01-28 17:14:28 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-28 17:14:47 +0000 UTC,FinishedAt:2023-01-28 17:14:47 +0000 UTC,ContainerID:containerd://205c17f94466eb47090eda522921b873ad9329efffaf4686e442e932ebcd9419,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 ContainerID:containerd://205c17f94466eb47090eda522921b873ad9329efffaf4686e442e932ebcd9419 Started:0xc00071d97f}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Full Stack Trace sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*TestPod).WaitForSuccess(0x2253857?) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:823 +0x5d sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*DynamicallyProvisionedVolumeSnapshotTest).Run(0xc000bb9d78, {0x270dda0, 0xc000299ba0}, {0x26f8fa0, 0xc0002ade00}, 0xc000ba46e0?) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/dynamically_provisioned_volume_snapshot_tester.go:142 +0x1358 ... skipping 42 lines ... Jan 28 17:16:57.564: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-7726 to be removed Jan 28 17:16:57.618: INFO: Claim "azuredisk-7726" in namespace "pvc-z7wbp" doesn't exist in the system Jan 28 17:16:57.618: INFO: deleting StorageClass azuredisk-7726-disk.csi.azure.com-dynamic-sc-p27bb [1mSTEP:[0m dump namespace information after failure [38;5;243m01/28/23 17:16:57.675[0m [1mSTEP:[0m Destroying namespace "azuredisk-7726" for this suite. 
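The failure above is the restore half of the snapshot test: after the snapshot is taken and the original data overwritten, a second PVC is created with the VolumeSnapshot as its dataSource, and the anti-affinity pod that mounts it is expected to read the unaltered data; here that container instead terminates with exit code 2. As a hedged illustration (assumed names, not the helper in dynamically_provisioned_volume_snapshot_tester.go), the restored claim looks roughly like this:

package e2eutil

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newRestoredPVC builds a claim provisioned from an existing VolumeSnapshot;
// the CSI driver creates a fresh Azure disk from the snapshot contents.
func newRestoredPVC(scName, snapshotName string) *corev1.PersistentVolumeClaim {
	apiGroup := "snapshot.storage.k8s.io"
	return &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pvc-restored-"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &scName,
			DataSource: &corev1.TypedLocalObjectReference{
				APIGroup: &apiGroup,
				Kind:     "VolumeSnapshot",
				Name:     snapshotName, // snapshot created earlier in the test
			},
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("10Gi"), // placeholder size
				},
			},
		},
	}
}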
[38;5;243m01/28/23 17:16:57.675[0m [38;5;243m------------------------------[0m [38;5;9m• [FAILED] [195.826 seconds][0m Dynamic Provisioning [38;5;243m/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:41[0m [multi-az] [38;5;243m/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:48[0m [38;5;9m[1m[It] should create a pod, write to its pv, take a volume snapshot, overwrite data in original pv, create another pod from the snapshot, and read unaltered original data from original pv[disk.csi.azure.com][0m [38;5;243m/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:747[0m ... skipping 7 lines ... Jan 28 17:13:42.799: INFO: >>> kubeConfig: /root/tmp3797534717/kubeconfig/kubeconfig.westus2.json [1mSTEP:[0m setting up the StorageClass [38;5;243m01/28/23 17:13:42.8[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/28/23 17:13:42.8[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/28/23 17:13:42.857[0m [1mSTEP:[0m creating a PVC [38;5;243m01/28/23 17:13:42.857[0m [1mSTEP:[0m deploying the pod [38;5;243m01/28/23 17:13:42.915[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/28/23 17:13:42.971[0m Jan 28 17:13:42.971: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-v5twk" in namespace "azuredisk-7726" to be "Succeeded or Failed" Jan 28 17:13:43.026: INFO: Pod "azuredisk-volume-tester-v5twk": Phase="Pending", Reason="", readiness=false. Elapsed: 55.428539ms Jan 28 17:13:45.083: INFO: Pod "azuredisk-volume-tester-v5twk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11169452s Jan 28 17:13:47.095: INFO: Pod "azuredisk-volume-tester-v5twk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123985161s Jan 28 17:13:49.082: INFO: Pod "azuredisk-volume-tester-v5twk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11126795s Jan 28 17:13:51.083: INFO: Pod "azuredisk-volume-tester-v5twk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.112082121s Jan 28 17:13:53.082: INFO: Pod "azuredisk-volume-tester-v5twk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.111257428s ... skipping 2 lines ... Jan 28 17:13:59.084: INFO: Pod "azuredisk-volume-tester-v5twk": Phase="Pending", Reason="", readiness=false. Elapsed: 16.113301866s Jan 28 17:14:01.082: INFO: Pod "azuredisk-volume-tester-v5twk": Phase="Pending", Reason="", readiness=false. Elapsed: 18.111599681s Jan 28 17:14:03.086: INFO: Pod "azuredisk-volume-tester-v5twk": Phase="Pending", Reason="", readiness=false. Elapsed: 20.115431709s Jan 28 17:14:05.081: INFO: Pod "azuredisk-volume-tester-v5twk": Phase="Pending", Reason="", readiness=false. Elapsed: 22.110244077s Jan 28 17:14:07.082: INFO: Pod "azuredisk-volume-tester-v5twk": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.111427161s [1mSTEP:[0m Saw pod success [38;5;243m01/28/23 17:14:07.082[0m Jan 28 17:14:07.083: INFO: Pod "azuredisk-volume-tester-v5twk" satisfied condition "Succeeded or Failed" [1mSTEP:[0m Checking Prow test resource group [38;5;243m01/28/23 17:14:07.083[0m [1mSTEP:[0m Prow test resource group: kubetest-g59foizt [38;5;243m01/28/23 17:14:07.083[0m [1mSTEP:[0m Creating external resource group: azuredisk-csi-driver-test-2c1eb792-9f2f-11ed-9172-ae7499b6df38 [38;5;243m01/28/23 17:14:07.084[0m [1mSTEP:[0m creating volume snapshot class with external rg azuredisk-csi-driver-test-2c1eb792-9f2f-11ed-9172-ae7499b6df38 [38;5;243m01/28/23 17:14:07.91[0m [1mSTEP:[0m setting up the VolumeSnapshotClass [38;5;243m01/28/23 17:14:07.911[0m [1mSTEP:[0m creating a VolumeSnapshotClass [38;5;243m01/28/23 17:14:07.911[0m ... skipping 10 lines ... [1mSTEP:[0m creating a StorageClass [38;5;243m01/28/23 17:14:25.268[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/28/23 17:14:25.324[0m [1mSTEP:[0m creating a PVC [38;5;243m01/28/23 17:14:25.324[0m [1mSTEP:[0m setting up the pod [38;5;243m01/28/23 17:14:25.384[0m [1mSTEP:[0m Set pod anti-affinity to make sure two pods are scheduled on different nodes [38;5;243m01/28/23 17:14:25.385[0m [1mSTEP:[0m deploying a pod with a volume restored from the snapshot [38;5;243m01/28/23 17:14:25.385[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/28/23 17:14:25.441[0m Jan 28 17:14:25.441: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-928j5" in namespace "azuredisk-7726" to be "Succeeded or Failed" Jan 28 17:14:25.495: INFO: Pod "azuredisk-volume-tester-928j5": Phase="Pending", Reason="", readiness=false. Elapsed: 54.39609ms Jan 28 17:14:27.554: INFO: Pod "azuredisk-volume-tester-928j5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11304283s Jan 28 17:14:29.550: INFO: Pod "azuredisk-volume-tester-928j5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109504955s Jan 28 17:14:31.552: INFO: Pod "azuredisk-volume-tester-928j5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.111270806s Jan 28 17:14:33.554: INFO: Pod "azuredisk-volume-tester-928j5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.112693777s Jan 28 17:14:35.552: INFO: Pod "azuredisk-volume-tester-928j5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.111533784s Jan 28 17:14:37.551: INFO: Pod "azuredisk-volume-tester-928j5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.110148153s Jan 28 17:14:39.552: INFO: Pod "azuredisk-volume-tester-928j5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.111414488s Jan 28 17:14:41.551: INFO: Pod "azuredisk-volume-tester-928j5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.110144143s Jan 28 17:14:43.552: INFO: Pod "azuredisk-volume-tester-928j5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.111105762s Jan 28 17:14:45.553: INFO: Pod "azuredisk-volume-tester-928j5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.11187302s Jan 28 17:14:47.552: INFO: Pod "azuredisk-volume-tester-928j5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.111203094s Jan 28 17:14:49.552: INFO: Pod "azuredisk-volume-tester-928j5": Phase="Failed", Reason="", readiness=false. 
Elapsed: 24.111442686s Jan 28 17:14:49.553: INFO: Unexpected error: <*fmt.wrapError | 0xc000bc59a0>: { msg: "error while waiting for pod azuredisk-7726/azuredisk-volume-tester-928j5 to be Succeeded or Failed: pod \"azuredisk-volume-tester-928j5\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 17:14:28 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 17:14:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 17:14:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 17:14:28 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.4 PodIP:10.248.0.14 PodIPs:[{IP:10.248.0.14}] StartTime:2023-01-28 17:14:28 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-28 17:14:47 +0000 UTC,FinishedAt:2023-01-28 17:14:47 +0000 UTC,ContainerID:containerd://205c17f94466eb47090eda522921b873ad9329efffaf4686e442e932ebcd9419,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 ContainerID:containerd://205c17f94466eb47090eda522921b873ad9329efffaf4686e442e932ebcd9419 Started:0xc00071d97f}] QOSClass:BestEffort EphemeralContainerStatuses:[]}", err: <*errors.errorString | 0xc0004e0d80>{ s: "pod \"azuredisk-volume-tester-928j5\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 17:14:28 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 17:14:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 17:14:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 17:14:28 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.4 PodIP:10.248.0.14 PodIPs:[{IP:10.248.0.14}] StartTime:2023-01-28 17:14:28 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-28 17:14:47 +0000 UTC,FinishedAt:2023-01-28 17:14:47 +0000 UTC,ContainerID:containerd://205c17f94466eb47090eda522921b873ad9329efffaf4686e442e932ebcd9419,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
ContainerID:containerd://205c17f94466eb47090eda522921b873ad9329efffaf4686e442e932ebcd9419 Started:0xc00071d97f}] QOSClass:BestEffort EphemeralContainerStatuses:[]}", }, } Jan 28 17:14:49.553: FAIL: error while waiting for pod azuredisk-7726/azuredisk-volume-tester-928j5 to be Succeeded or Failed: pod "azuredisk-volume-tester-928j5" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 17:14:28 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 17:14:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 17:14:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 17:14:28 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.4 PodIP:10.248.0.14 PodIPs:[{IP:10.248.0.14}] StartTime:2023-01-28 17:14:28 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-28 17:14:47 +0000 UTC,FinishedAt:2023-01-28 17:14:47 +0000 UTC,ContainerID:containerd://205c17f94466eb47090eda522921b873ad9329efffaf4686e442e932ebcd9419,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 ContainerID:containerd://205c17f94466eb47090eda522921b873ad9329efffaf4686e442e932ebcd9419 Started:0xc00071d97f}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Full Stack Trace sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*TestPod).WaitForSuccess(0x2253857?) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:823 +0x5d sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*DynamicallyProvisionedVolumeSnapshotTest).Run(0xc000bb9d78, {0x270dda0, 0xc000299ba0}, {0x26f8fa0, 0xc0002ade00}, 0xc000ba46e0?) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/dynamically_provisioned_volume_snapshot_tester.go:142 +0x1358 ... skipping 43 lines ... Jan 28 17:16:57.618: INFO: Claim "azuredisk-7726" in namespace "pvc-z7wbp" doesn't exist in the system Jan 28 17:16:57.618: INFO: deleting StorageClass azuredisk-7726-disk.csi.azure.com-dynamic-sc-p27bb [1mSTEP:[0m dump namespace information after failure [38;5;243m01/28/23 17:16:57.675[0m [1mSTEP:[0m Destroying namespace "azuredisk-7726" for this suite. 
[38;5;243m01/28/23 17:16:57.675[0m [38;5;243m<< End Captured GinkgoWriter Output[0m [38;5;9mJan 28 17:14:49.553: error while waiting for pod azuredisk-7726/azuredisk-volume-tester-928j5 to be Succeeded or Failed: pod "azuredisk-volume-tester-928j5" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 17:14:28 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 17:14:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 17:14:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-28 17:14:28 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.4 PodIP:10.248.0.14 PodIPs:[{IP:10.248.0.14}] StartTime:2023-01-28 17:14:28 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-28 17:14:47 +0000 UTC,FinishedAt:2023-01-28 17:14:47 +0000 UTC,ContainerID:containerd://205c17f94466eb47090eda522921b873ad9329efffaf4686e442e932ebcd9419,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 ContainerID:containerd://205c17f94466eb47090eda522921b873ad9329efffaf4686e442e932ebcd9419 Started:0xc00071d97f}] QOSClass:BestEffort EphemeralContainerStatuses:[]}[0m [38;5;9mIn [1m[It][0m[38;5;9m at: [1m/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:823[0m [1mThere were additional failures detected after the initial failure:[0m [38;5;13m[PANICKED][0m [38;5;13mTest Panicked[0m [38;5;13mIn [1m[DeferCleanup (Each)][0m[38;5;13m at: [1m/usr/local/go/src/runtime/panic.go:260[0m [38;5;13mruntime error: invalid memory address or nil pointer dereference[0m [38;5;13mFull Stack Trace[0m k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:274 +0x5c k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0002123c0) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:271 +0x179 ... skipping 25 lines ... [1mSTEP:[0m creating a PVC [38;5;243m01/28/23 17:16:58.818[0m [1mSTEP:[0m setting up the StorageClass [38;5;243m01/28/23 17:16:58.873[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/28/23 17:16:58.873[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/28/23 17:16:58.928[0m [1mSTEP:[0m creating a PVC [38;5;243m01/28/23 17:16:58.928[0m [1mSTEP:[0m deploying the pod [38;5;243m01/28/23 17:16:58.983[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/28/23 17:16:59.04[0m Jan 28 17:16:59.040: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-774mq" in namespace "azuredisk-3086" to be "Succeeded or Failed" Jan 28 17:16:59.096: INFO: Pod "azuredisk-volume-tester-774mq": Phase="Pending", Reason="", readiness=false. 
Elapsed: 55.645ms Jan 28 17:17:01.151: INFO: Pod "azuredisk-volume-tester-774mq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110588499s Jan 28 17:17:03.152: INFO: Pod "azuredisk-volume-tester-774mq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11158029s Jan 28 17:17:05.152: INFO: Pod "azuredisk-volume-tester-774mq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.111686845s Jan 28 17:17:07.153: INFO: Pod "azuredisk-volume-tester-774mq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11250451s Jan 28 17:17:09.152: INFO: Pod "azuredisk-volume-tester-774mq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.111546976s ... skipping 9 lines ... Jan 28 17:17:29.152: INFO: Pod "azuredisk-volume-tester-774mq": Phase="Pending", Reason="", readiness=false. Elapsed: 30.111959329s Jan 28 17:17:31.152: INFO: Pod "azuredisk-volume-tester-774mq": Phase="Pending", Reason="", readiness=false. Elapsed: 32.111811752s Jan 28 17:17:33.154: INFO: Pod "azuredisk-volume-tester-774mq": Phase="Pending", Reason="", readiness=false. Elapsed: 34.114196944s Jan 28 17:17:35.152: INFO: Pod "azuredisk-volume-tester-774mq": Phase="Pending", Reason="", readiness=false. Elapsed: 36.111743367s Jan 28 17:17:37.151: INFO: Pod "azuredisk-volume-tester-774mq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.110554534s [1mSTEP:[0m Saw pod success [38;5;243m01/28/23 17:17:37.151[0m Jan 28 17:17:37.151: INFO: Pod "azuredisk-volume-tester-774mq" satisfied condition "Succeeded or Failed" Jan 28 17:17:37.151: INFO: deleting Pod "azuredisk-3086"/"azuredisk-volume-tester-774mq" Jan 28 17:17:37.210: INFO: Pod azuredisk-volume-tester-774mq has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-774mq in namespace azuredisk-3086 [38;5;243m01/28/23 17:17:37.21[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/28/23 17:17:37.374[0m [1mSTEP:[0m checking the PV [38;5;243m01/28/23 17:17:37.429[0m ... skipping 70 lines ... [1mSTEP:[0m creating a PVC [38;5;243m01/28/23 17:16:58.818[0m [1mSTEP:[0m setting up the StorageClass [38;5;243m01/28/23 17:16:58.873[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/28/23 17:16:58.873[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/28/23 17:16:58.928[0m [1mSTEP:[0m creating a PVC [38;5;243m01/28/23 17:16:58.928[0m [1mSTEP:[0m deploying the pod [38;5;243m01/28/23 17:16:58.983[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/28/23 17:16:59.04[0m Jan 28 17:16:59.040: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-774mq" in namespace "azuredisk-3086" to be "Succeeded or Failed" Jan 28 17:16:59.096: INFO: Pod "azuredisk-volume-tester-774mq": Phase="Pending", Reason="", readiness=false. Elapsed: 55.645ms Jan 28 17:17:01.151: INFO: Pod "azuredisk-volume-tester-774mq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110588499s Jan 28 17:17:03.152: INFO: Pod "azuredisk-volume-tester-774mq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11158029s Jan 28 17:17:05.152: INFO: Pod "azuredisk-volume-tester-774mq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.111686845s Jan 28 17:17:07.153: INFO: Pod "azuredisk-volume-tester-774mq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11250451s Jan 28 17:17:09.152: INFO: Pod "azuredisk-volume-tester-774mq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.111546976s ... skipping 9 lines ... 
Jan 28 17:17:29.152: INFO: Pod "azuredisk-volume-tester-774mq": Phase="Pending", Reason="", readiness=false. Elapsed: 30.111959329s Jan 28 17:17:31.152: INFO: Pod "azuredisk-volume-tester-774mq": Phase="Pending", Reason="", readiness=false. Elapsed: 32.111811752s Jan 28 17:17:33.154: INFO: Pod "azuredisk-volume-tester-774mq": Phase="Pending", Reason="", readiness=false. Elapsed: 34.114196944s Jan 28 17:17:35.152: INFO: Pod "azuredisk-volume-tester-774mq": Phase="Pending", Reason="", readiness=false. Elapsed: 36.111743367s Jan 28 17:17:37.151: INFO: Pod "azuredisk-volume-tester-774mq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.110554534s [1mSTEP:[0m Saw pod success [38;5;243m01/28/23 17:17:37.151[0m Jan 28 17:17:37.151: INFO: Pod "azuredisk-volume-tester-774mq" satisfied condition "Succeeded or Failed" Jan 28 17:17:37.151: INFO: deleting Pod "azuredisk-3086"/"azuredisk-volume-tester-774mq" Jan 28 17:17:37.210: INFO: Pod azuredisk-volume-tester-774mq has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-774mq in namespace azuredisk-3086 [38;5;243m01/28/23 17:17:37.21[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/28/23 17:17:37.374[0m [1mSTEP:[0m checking the PV [38;5;243m01/28/23 17:17:37.429[0m ... skipping 936 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/28/23 17:31:21.782[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/28/23 17:31:21.782[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/28/23 17:31:21.841[0m [1mSTEP:[0m creating a PVC [38;5;243m01/28/23 17:31:21.841[0m [1mSTEP:[0m setting up the pod [38;5;243m01/28/23 17:31:21.903[0m [1mSTEP:[0m deploying the pod [38;5;243m01/28/23 17:31:21.903[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/28/23 17:31:21.963[0m Jan 28 17:31:21.963: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-ldgxp" in namespace "azuredisk-1092" to be "Succeeded or Failed" Jan 28 17:31:22.021: INFO: Pod "azuredisk-volume-tester-ldgxp": Phase="Pending", Reason="", readiness=false. Elapsed: 57.596753ms Jan 28 17:31:24.080: INFO: Pod "azuredisk-volume-tester-ldgxp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117399684s Jan 28 17:31:26.084: INFO: Pod "azuredisk-volume-tester-ldgxp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121338166s Jan 28 17:31:28.080: INFO: Pod "azuredisk-volume-tester-ldgxp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116453154s Jan 28 17:31:30.081: INFO: Pod "azuredisk-volume-tester-ldgxp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117957022s Jan 28 17:31:32.082: INFO: Pod "azuredisk-volume-tester-ldgxp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.11853413s ... skipping 2 lines ... Jan 28 17:31:38.080: INFO: Pod "azuredisk-volume-tester-ldgxp": Phase="Pending", Reason="", readiness=false. Elapsed: 16.116972756s Jan 28 17:31:40.080: INFO: Pod "azuredisk-volume-tester-ldgxp": Phase="Pending", Reason="", readiness=false. Elapsed: 18.116921658s Jan 28 17:31:42.081: INFO: Pod "azuredisk-volume-tester-ldgxp": Phase="Pending", Reason="", readiness=false. Elapsed: 20.117908246s Jan 28 17:31:44.081: INFO: Pod "azuredisk-volume-tester-ldgxp": Phase="Pending", Reason="", readiness=false. Elapsed: 22.118093638s Jan 28 17:31:46.081: INFO: Pod "azuredisk-volume-tester-ldgxp": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.118179587s [1mSTEP:[0m Saw pod success [38;5;243m01/28/23 17:31:46.081[0m Jan 28 17:31:46.081: INFO: Pod "azuredisk-volume-tester-ldgxp" satisfied condition "Succeeded or Failed" Jan 28 17:31:46.081: INFO: deleting Pod "azuredisk-1092"/"azuredisk-volume-tester-ldgxp" Jan 28 17:31:46.170: INFO: Pod azuredisk-volume-tester-ldgxp has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-ldgxp in namespace azuredisk-1092 [38;5;243m01/28/23 17:31:46.17[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/28/23 17:31:46.299[0m [1mSTEP:[0m checking the PV [38;5;243m01/28/23 17:31:46.357[0m ... skipping 33 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/28/23 17:31:21.782[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/28/23 17:31:21.782[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/28/23 17:31:21.841[0m [1mSTEP:[0m creating a PVC [38;5;243m01/28/23 17:31:21.841[0m [1mSTEP:[0m setting up the pod [38;5;243m01/28/23 17:31:21.903[0m [1mSTEP:[0m deploying the pod [38;5;243m01/28/23 17:31:21.903[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/28/23 17:31:21.963[0m Jan 28 17:31:21.963: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-ldgxp" in namespace "azuredisk-1092" to be "Succeeded or Failed" Jan 28 17:31:22.021: INFO: Pod "azuredisk-volume-tester-ldgxp": Phase="Pending", Reason="", readiness=false. Elapsed: 57.596753ms Jan 28 17:31:24.080: INFO: Pod "azuredisk-volume-tester-ldgxp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117399684s Jan 28 17:31:26.084: INFO: Pod "azuredisk-volume-tester-ldgxp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121338166s Jan 28 17:31:28.080: INFO: Pod "azuredisk-volume-tester-ldgxp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116453154s Jan 28 17:31:30.081: INFO: Pod "azuredisk-volume-tester-ldgxp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117957022s Jan 28 17:31:32.082: INFO: Pod "azuredisk-volume-tester-ldgxp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.11853413s ... skipping 2 lines ... Jan 28 17:31:38.080: INFO: Pod "azuredisk-volume-tester-ldgxp": Phase="Pending", Reason="", readiness=false. Elapsed: 16.116972756s Jan 28 17:31:40.080: INFO: Pod "azuredisk-volume-tester-ldgxp": Phase="Pending", Reason="", readiness=false. Elapsed: 18.116921658s Jan 28 17:31:42.081: INFO: Pod "azuredisk-volume-tester-ldgxp": Phase="Pending", Reason="", readiness=false. Elapsed: 20.117908246s Jan 28 17:31:44.081: INFO: Pod "azuredisk-volume-tester-ldgxp": Phase="Pending", Reason="", readiness=false. Elapsed: 22.118093638s Jan 28 17:31:46.081: INFO: Pod "azuredisk-volume-tester-ldgxp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.118179587s [1mSTEP:[0m Saw pod success [38;5;243m01/28/23 17:31:46.081[0m Jan 28 17:31:46.081: INFO: Pod "azuredisk-volume-tester-ldgxp" satisfied condition "Succeeded or Failed" Jan 28 17:31:46.081: INFO: deleting Pod "azuredisk-1092"/"azuredisk-volume-tester-ldgxp" Jan 28 17:31:46.170: INFO: Pod azuredisk-volume-tester-ldgxp has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-ldgxp in namespace azuredisk-1092 [38;5;243m01/28/23 17:31:46.17[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/28/23 17:31:46.299[0m [1mSTEP:[0m checking the PV [38;5;243m01/28/23 17:31:46.357[0m ... skipping 93 lines ... 
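Each block of "Waiting up to 15m0s for pod … Phase="Pending" … Elapsed: …" lines above comes from the suite's WaitForSuccess helper (seen in the earlier stack trace) polling the pod phase until it reaches Succeeded or Failed. A simplified sketch of that polling pattern, assuming a client-go clientset; the function name and the 2-second interval are illustrative, not the repo's exact implementation:

package e2edebug

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForPodSuccessOrFail polls the pod phase until Succeeded, and surfaces the full
// status in the returned error when the pod ends up Failed -- the same shape as the
// wrapped error reported for azuredisk-volume-tester-928j5 above.
func waitForPodSuccessOrFail(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return fmt.Errorf("error while waiting for pod %s/%s to be Succeeded or Failed: %w", ns, name, err)
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return nil
		case corev1.PodFailed:
			return fmt.Errorf("pod %q failed with status: %+v", name, pod.Status)
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for pod %s/%s", ns, name)
		}
		time.Sleep(2 * time.Second)
	}
}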
Platform: linux/amd64 Topology Key: topology.disk.csi.azure.com/zone Streaming logs below: I0128 16:14:13.475803 1 azuredisk.go:175] driver userAgent: disk.csi.azure.com/v1.27.0-8635ef7cb96ec669bd2a099af3b1437a19530391 e2e-test I0128 16:14:13.476768 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider I0128 16:14:13.507995 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0128 16:14:13.508029 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider I0128 16:14:13.508040 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0128 16:14:13.508122 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully I0128 16:14:13.510192 1 azure_auth.go:253] Using AzurePublicCloud environment I0128 16:14:13.510258 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token I0128 16:14:13.510285 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 ... skipping 25 lines ... I0128 16:14:13.510667 1 azure_blobclient.go:67] Azure BlobClient using API version: 2021-09-01 I0128 16:14:13.510696 1 azure_vmasclient.go:70] Azure AvailabilitySetsClient (read ops) using rate limit config: QPS=6, bucket=20 I0128 16:14:13.510704 1 azure_vmasclient.go:73] Azure AvailabilitySetsClient (write ops) using rate limit config: QPS=100, bucket=1000 I0128 16:14:13.510808 1 azure.go:1007] attach/detach disk operation rate limit QPS: 6.000000, Bucket: 10 I0128 16:14:13.510836 1 azuredisk.go:192] disable UseInstanceMetadata for controller I0128 16:14:13.510847 1 azuredisk.go:204] cloud: AzurePublicCloud, location: westus2, rg: kubetest-g59foizt, VMType: vmss, PrimaryScaleSetName: k8s-agentpool-24544908-vmss, PrimaryAvailabilitySetName: , DisableAvailabilitySetNodes: false I0128 16:14:13.522772 1 mount_linux.go:287] 'umount /tmp/kubelet-detect-safe-umount456095140' failed with: exit status 32, output: umount: /tmp/kubelet-detect-safe-umount456095140: must be superuser to unmount. I0128 16:14:13.522846 1 mount_linux.go:289] Detected umount with unsafe 'not mounted' behavior I0128 16:14:13.523110 1 driver.go:81] Enabling controller service capability: CREATE_DELETE_VOLUME I0128 16:14:13.523360 1 driver.go:81] Enabling controller service capability: PUBLISH_UNPUBLISH_VOLUME I0128 16:14:13.523374 1 driver.go:81] Enabling controller service capability: CREATE_DELETE_SNAPSHOT I0128 16:14:13.523381 1 driver.go:81] Enabling controller service capability: CLONE_VOLUME I0128 16:14:13.523387 1 driver.go:81] Enabling controller service capability: EXPAND_VOLUME ... skipping 61 lines ... 
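The "failed to get cloud config from secret kube-system/azure-cloud-provider … not found" entries at startup are benign on this cluster: the driver tries the secret first and only then falls back to the file named by AZURE_CREDENTIAL_FILE (default /etc/kubernetes/azure.json), which is what succeeds here. A condensed sketch of that lookup order; the function name and the "cloud-config" secret key are assumptions for illustration, not the driver's exact code:

package cloudconfig

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

const defaultCredFile = "/etc/kubernetes/azure.json"

// loadCloudConfig mirrors the fallback order visible in the log:
// secret kube-system/azure-cloud-provider first, then the AZURE_CREDENTIAL_FILE path.
func loadCloudConfig(ctx context.Context, c kubernetes.Interface) ([]byte, error) {
	sec, err := c.CoreV1().Secrets("kube-system").Get(ctx, "azure-cloud-provider", metav1.GetOptions{})
	if err == nil {
		// "cloud-config" key assumed here for illustration.
		if cfg, ok := sec.Data["cloud-config"]; ok {
			return cfg, nil
		}
	}
	// Fall back to the credential file, as the log does when the secret is absent.
	path := os.Getenv("AZURE_CREDENTIAL_FILE")
	if path == "" {
		path = defaultCredFile
	}
	cfg, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("could not read cloud config from secret or %s: %w", path, err)
	}
	return cfg, nil
}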
Platform: linux/amd64 Topology Key: topology.disk.csi.azure.com/zone Streaming logs below: I0128 16:14:12.865102 1 azuredisk.go:175] driver userAgent: disk.csi.azure.com/v1.27.0-8635ef7cb96ec669bd2a099af3b1437a19530391 e2e-test I0128 16:14:12.865654 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider I0128 16:14:12.895013 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0128 16:14:12.895037 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider I0128 16:14:12.895045 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0128 16:14:12.895076 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully I0128 16:14:12.895747 1 azure_auth.go:253] Using AzurePublicCloud environment I0128 16:14:12.895790 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token I0128 16:14:12.895811 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 ... skipping 25 lines ... I0128 16:14:12.896558 1 azure_blobclient.go:67] Azure BlobClient using API version: 2021-09-01 I0128 16:14:12.896582 1 azure_vmasclient.go:70] Azure AvailabilitySetsClient (read ops) using rate limit config: QPS=6, bucket=20 I0128 16:14:12.896589 1 azure_vmasclient.go:73] Azure AvailabilitySetsClient (write ops) using rate limit config: QPS=100, bucket=1000 I0128 16:14:12.896681 1 azure.go:1007] attach/detach disk operation rate limit QPS: 6.000000, Bucket: 10 I0128 16:14:12.896704 1 azuredisk.go:192] disable UseInstanceMetadata for controller I0128 16:14:12.896740 1 azuredisk.go:204] cloud: AzurePublicCloud, location: westus2, rg: kubetest-g59foizt, VMType: vmss, PrimaryScaleSetName: k8s-agentpool-24544908-vmss, PrimaryAvailabilitySetName: , DisableAvailabilitySetNodes: false I0128 16:14:12.899640 1 mount_linux.go:287] 'umount /tmp/kubelet-detect-safe-umount2303407499' failed with: exit status 32, output: umount: /tmp/kubelet-detect-safe-umount2303407499: must be superuser to unmount. I0128 16:14:12.899674 1 mount_linux.go:289] Detected umount with unsafe 'not mounted' behavior I0128 16:14:12.899744 1 driver.go:81] Enabling controller service capability: CREATE_DELETE_VOLUME I0128 16:14:12.899778 1 driver.go:81] Enabling controller service capability: PUBLISH_UNPUBLISH_VOLUME I0128 16:14:12.899785 1 driver.go:81] Enabling controller service capability: CREATE_DELETE_SNAPSHOT I0128 16:14:12.899793 1 driver.go:81] Enabling controller service capability: CLONE_VOLUME I0128 16:14:12.899799 1 driver.go:81] Enabling controller service capability: EXPAND_VOLUME ... skipping 68 lines ... I0128 16:14:22.058294 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 24989 I0128 16:14:22.161944 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 32353 I0128 16:14:22.169951 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-6c3a6aa4-5b78-4c83-b12f-ecd41f11d4f7. 
Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-6c3a6aa4-5b78-4c83-b12f-ecd41f11d4f7 to node k8s-agentpool-24544908-vmss000001 (vmState Succeeded). I0128 16:14:22.170021 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-6c3a6aa4-5b78-4c83-b12f-ecd41f11d4f7 to node k8s-agentpool-24544908-vmss000001 I0128 16:14:22.170097 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-6c3a6aa4-5b78-4c83-b12f-ecd41f11d4f7 lun 0 to node k8s-agentpool-24544908-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-6c3a6aa4-5b78-4c83-b12f-ecd41f11d4f7:%!s(*provider.AttachDiskOptions=&{None pvc-6c3a6aa4-5b78-4c83-b12f-ecd41f11d4f7 false 0})] I0128 16:14:22.170247 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-6c3a6aa4-5b78-4c83-b12f-ecd41f11d4f7:%!s(*provider.AttachDiskOptions=&{None pvc-6c3a6aa4-5b78-4c83-b12f-ecd41f11d4f7 false 0})]) I0128 16:14:22.979723 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-6c3a6aa4-5b78-4c83-b12f-ecd41f11d4f7:%!s(*provider.AttachDiskOptions=&{None pvc-6c3a6aa4-5b78-4c83-b12f-ecd41f11d4f7 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 16:14:38.164751 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000001) successfully I0128 16:14:38.164791 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000001) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 16:14:38.164813 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-6c3a6aa4-5b78-4c83-b12f-ecd41f11d4f7 attached to node k8s-agentpool-24544908-vmss000001. I0128 16:14:38.164827 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-6c3a6aa4-5b78-4c83-b12f-ecd41f11d4f7 to node k8s-agentpool-24544908-vmss000001 successfully I0128 16:14:38.164873 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=16.212059022 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-6c3a6aa4-5b78-4c83-b12f-ecd41f11d4f7" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 16:14:38.164897 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 39 lines ... 
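The ControllerPublishVolume entries above always follow the same sequence: check whether the disk already has a LUN on the target VM, attach it if not, refresh the cached VMSS view for that node (the DeleteCacheForNode/updateCache pair), and return the LUN in publish_context. A compressed, hypothetical sketch of that sequence; the interface and function names below are stand-ins for the driver's cloud-provider plumbing, not its real API:

package attachflow

import (
	"context"
	"fmt"
	"strconv"
)

// diskAttacher is a stand-in for the cloud-provider calls seen in the log
// (GetDiskLun, AttachDisk, DeleteCacheForNode/updateCache).
type diskAttacher interface {
	GetDiskLun(ctx context.Context, diskURI, node string) (int32, error)
	AttachDisk(ctx context.Context, diskURI, node string) (int32, error)
	RefreshNodeCache(ctx context.Context, node string) error
}

// publishVolume mirrors the controller-publish sequence from the log and returns the
// publish_context handed back over gRPC ({"LUN":"<n>"}).
func publishVolume(ctx context.Context, a diskAttacher, diskURI, node string) (map[string]string, error) {
	lun, err := a.GetDiskLun(ctx, diskURI, node)
	if err != nil {
		// "cannot find Lun for disk ..." in the log: the disk is not attached yet.
		lun, err = a.AttachDisk(ctx, diskURI, node)
		if err != nil {
			return nil, fmt.Errorf("attach volume %s to node %s failed: %w", diskURI, node, err)
		}
	}
	// Make sure the cached VMSS view reflects the new attachment before answering.
	if err := a.RefreshNodeCache(ctx, node); err != nil {
		return nil, err
	}
	return map[string]string{"LUN": strconv.Itoa(int(lun))}, nil
}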
I0128 16:15:43.663308 1 controllerserver.go:319] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-6c3a6aa4-5b78-4c83-b12f-ecd41f11d4f7) returned with <nil> I0128 16:15:43.663348 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=5.226672192 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-6c3a6aa4-5b78-4c83-b12f-ecd41f11d4f7" result_code="succeeded" I0128 16:15:43.663365 1 utils.go:84] GRPC response: {} I0128 16:15:49.337751 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume I0128 16:15:49.338253 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.disk.csi.azure.com/zone":"westus2-1","topology.kubernetes.io/zone":"westus2-1"}},{"segments":{"topology.disk.csi.azure.com/zone":"westus2-2","topology.kubernetes.io/zone":"westus2-2"}}],"requisite":[{"segments":{"topology.disk.csi.azure.com/zone":"westus2-1","topology.kubernetes.io/zone":"westus2-1"}},{"segments":{"topology.disk.csi.azure.com/zone":"westus2-2","topology.kubernetes.io/zone":"westus2-2"}}]},"capacity_range":{"required_bytes":10737418240},"name":"pvc-4e412ff3-f9c2-4eaf-8c13-0eff0794cded","parameters":{"csi.storage.k8s.io/pv/name":"pvc-4e412ff3-f9c2-4eaf-8c13-0eff0794cded","csi.storage.k8s.io/pvc/name":"pvc-pkqmp","csi.storage.k8s.io/pvc/namespace":"azuredisk-2540","enableAsyncAttach":"false","networkAccessPolicy":"DenyAll","skuName":"Standard_LRS","userAgent":"azuredisk-e2e-test"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]} I0128 16:15:49.339017 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider I0128 16:15:49.365966 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0128 16:15:49.365987 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider I0128 16:15:49.365995 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0128 16:15:49.366031 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully I0128 16:15:49.366404 1 azure_auth.go:253] Using AzurePublicCloud environment I0128 16:15:49.366450 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token I0128 16:15:49.366471 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 ... skipping 37 lines ... 
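The parameters map in the CreateVolume request above comes straight from the test's StorageClass, with the csi.storage.k8s.io/pv/name, pvc/name and pvc/namespace keys injected by the external-provisioner sidecar. A sketch of a StorageClass that would yield a request like the one logged; the object name and binding mode are illustrative (the suite generates its own per-namespace names), while the parameter values are copied from the log:

package e2edebug

import (
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var bindingMode = storagev1.VolumeBindingWaitForFirstConsumer

// exampleStorageClass shows where the CreateVolume "parameters" in the log originate.
var exampleStorageClass = &storagev1.StorageClass{
	ObjectMeta:  metav1.ObjectMeta{Name: "azuredisk-standard-lrs"},
	Provisioner: "disk.csi.azure.com",
	Parameters: map[string]string{
		"skuName":             "Standard_LRS",
		"networkAccessPolicy": "DenyAll",
		"enableAsyncAttach":   "false",
		"userAgent":           "azuredisk-e2e-test",
	},
	VolumeBindingMode: &bindingMode,
}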
I0128 16:15:53.800729 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-24544908-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-4e412ff3-f9c2-4eaf-8c13-0eff0794cded","csi.storage.k8s.io/pvc/name":"pvc-pkqmp","csi.storage.k8s.io/pvc/namespace":"azuredisk-2540","enableAsyncAttach":"false","enableasyncattach":"false","networkAccessPolicy":"DenyAll","requestedsizegib":"10","skuName":"Standard_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com","userAgent":"azuredisk-e2e-test"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-4e412ff3-f9c2-4eaf-8c13-0eff0794cded"} I0128 16:15:53.822336 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1217 I0128 16:15:53.822575 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-4e412ff3-f9c2-4eaf-8c13-0eff0794cded. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-4e412ff3-f9c2-4eaf-8c13-0eff0794cded to node k8s-agentpool-24544908-vmss000000 (vmState Succeeded). I0128 16:15:53.822602 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-4e412ff3-f9c2-4eaf-8c13-0eff0794cded to node k8s-agentpool-24544908-vmss000000 I0128 16:15:53.822640 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-4e412ff3-f9c2-4eaf-8c13-0eff0794cded lun 0 to node k8s-agentpool-24544908-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-4e412ff3-f9c2-4eaf-8c13-0eff0794cded:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4e412ff3-f9c2-4eaf-8c13-0eff0794cded false 0})] I0128 16:15:53.822672 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-4e412ff3-f9c2-4eaf-8c13-0eff0794cded:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4e412ff3-f9c2-4eaf-8c13-0eff0794cded false 0})]) I0128 16:15:53.977553 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-4e412ff3-f9c2-4eaf-8c13-0eff0794cded:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4e412ff3-f9c2-4eaf-8c13-0eff0794cded false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 16:16:09.119277 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000000) successfully I0128 16:16:09.119312 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000000) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 16:16:09.119334 1 controllerserver.go:413] Attach operation successful: volume 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-4e412ff3-f9c2-4eaf-8c13-0eff0794cded attached to node k8s-agentpool-24544908-vmss000000. I0128 16:16:09.119381 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-4e412ff3-f9c2-4eaf-8c13-0eff0794cded to node k8s-agentpool-24544908-vmss000000 successfully I0128 16:16:09.119451 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=15.296849198 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-4e412ff3-f9c2-4eaf-8c13-0eff0794cded" node="k8s-agentpool-24544908-vmss000000" result_code="succeeded" I0128 16:16:09.119468 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 18 lines ... I0128 16:16:52.340956 1 controllerserver.go:319] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-4e412ff3-f9c2-4eaf-8c13-0eff0794cded) returned with <nil> I0128 16:16:52.340985 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=5.168491215 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-4e412ff3-f9c2-4eaf-8c13-0eff0794cded" result_code="succeeded" I0128 16:16:52.340998 1 utils.go:84] GRPC response: {} I0128 16:16:58.165166 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume I0128 16:16:58.165253 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.disk.csi.azure.com/zone":"westus2-2","topology.kubernetes.io/zone":"westus2-2"}}],"requisite":[{"segments":{"topology.disk.csi.azure.com/zone":"westus2-2","topology.kubernetes.io/zone":"westus2-2"}}]},"capacity_range":{"required_bytes":1099511627776},"name":"pvc-c251e6aa-9511-4e50-b09a-201a39c9e21c","parameters":{"csi.storage.k8s.io/pv/name":"pvc-c251e6aa-9511-4e50-b09a-201a39c9e21c","csi.storage.k8s.io/pvc/name":"pvc-rjqbm","csi.storage.k8s.io/pvc/namespace":"azuredisk-4728","enableAsyncAttach":"false","enableBursting":"true","perfProfile":"Basic","skuName":"Premium_LRS","userAgent":"azuredisk-e2e-test"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]} I0128 16:16:58.165895 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider I0128 16:16:58.173946 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0128 16:16:58.173974 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider I0128 16:16:58.173982 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0128 16:16:58.174019 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully I0128 16:16:58.174383 1 azure_auth.go:253] Using 
AzurePublicCloud environment I0128 16:16:58.174430 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token I0128 16:16:58.174446 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 ... skipping 37 lines ... I0128 16:17:01.208305 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-24544908-vmss000001","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-c251e6aa-9511-4e50-b09a-201a39c9e21c","csi.storage.k8s.io/pvc/name":"pvc-rjqbm","csi.storage.k8s.io/pvc/namespace":"azuredisk-4728","enableAsyncAttach":"false","enableBursting":"true","enableasyncattach":"false","perfProfile":"Basic","requestedsizegib":"1024","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com","userAgent":"azuredisk-e2e-test"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-c251e6aa-9511-4e50-b09a-201a39c9e21c"} I0128 16:17:01.283081 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1336 I0128 16:17:01.283532 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-c251e6aa-9511-4e50-b09a-201a39c9e21c. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-c251e6aa-9511-4e50-b09a-201a39c9e21c to node k8s-agentpool-24544908-vmss000001 (vmState Succeeded). I0128 16:17:01.283562 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-c251e6aa-9511-4e50-b09a-201a39c9e21c to node k8s-agentpool-24544908-vmss000001 I0128 16:17:01.283597 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-c251e6aa-9511-4e50-b09a-201a39c9e21c lun 0 to node k8s-agentpool-24544908-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-c251e6aa-9511-4e50-b09a-201a39c9e21c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c251e6aa-9511-4e50-b09a-201a39c9e21c false 0})] I0128 16:17:01.283662 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-c251e6aa-9511-4e50-b09a-201a39c9e21c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c251e6aa-9511-4e50-b09a-201a39c9e21c false 0})]) I0128 16:17:01.501679 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-c251e6aa-9511-4e50-b09a-201a39c9e21c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c251e6aa-9511-4e50-b09a-201a39c9e21c false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 16:17:11.711066 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000001) successfully I0128 16:17:11.711108 1 azure_vmss_cache.go:313] 
updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000001) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 16:17:11.711134 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-c251e6aa-9511-4e50-b09a-201a39c9e21c attached to node k8s-agentpool-24544908-vmss000001. I0128 16:17:11.711152 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-c251e6aa-9511-4e50-b09a-201a39c9e21c to node k8s-agentpool-24544908-vmss000001 successfully I0128 16:17:11.711199 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.42766588 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-c251e6aa-9511-4e50-b09a-201a39c9e21c" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 16:17:11.711226 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 32 lines ... I0128 16:18:31.713311 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-24544908-vmss000001","volume_capability":{"AccessType":{"Mount":{"mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-798ba9b7-0290-4714-99fa-51a1ed445c25","csi.storage.k8s.io/pvc/name":"pvc-snzzx","csi.storage.k8s.io/pvc/namespace":"azuredisk-5466","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25"} I0128 16:18:31.734904 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0128 16:18:31.735241 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-798ba9b7-0290-4714-99fa-51a1ed445c25. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25 to node k8s-agentpool-24544908-vmss000001 (vmState Succeeded). 
I0128 16:18:31.735269 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25 to node k8s-agentpool-24544908-vmss000001 I0128 16:18:31.735338 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25 lun 0 to node k8s-agentpool-24544908-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-798ba9b7-0290-4714-99fa-51a1ed445c25 false 0})] I0128 16:18:31.735434 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-798ba9b7-0290-4714-99fa-51a1ed445c25 false 0})]) I0128 16:18:31.894136 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-798ba9b7-0290-4714-99fa-51a1ed445c25 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 16:18:42.035126 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000001) successfully I0128 16:18:42.035165 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000001) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 16:18:42.035185 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25 attached to node k8s-agentpool-24544908-vmss000001. I0128 16:18:42.035200 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25 to node k8s-agentpool-24544908-vmss000001 successfully I0128 16:18:42.035450 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.300000335 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 16:18:42.035482 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 40 lines ... 
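Each "Observed Request Latency" line above is emitted once an operation finishes, tagging the elapsed seconds with the request name, resource group, subscription, volume and node. The underlying pattern is a timer wrapped around the operation; a trivial, hypothetical sketch of that shape (the driver's own metrics package does the real recording):

package metricsdemo

import (
	"log"
	"time"
)

// observed mimics the shape of the "Observed Request Latency" log lines: run the
// operation, then report latency_seconds together with its labels and result code.
func observed(request string, labels map[string]string, op func() error) error {
	start := time.Now()
	err := op()
	result := "succeeded"
	if err != nil {
		result = "failed"
	}
	log.Printf("Observed Request Latency latency_seconds=%f request=%q result_code=%q labels=%v",
		time.Since(start).Seconds(), request, result, labels)
	return err
}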
I0128 16:19:46.087482 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-24544908-vmss000001","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-3cf1d132-9451-47e9-981c-b828e2db623e","csi.storage.k8s.io/pvc/name":"pvc-t7cf5","csi.storage.k8s.io/pvc/namespace":"azuredisk-2790","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-3cf1d132-9451-47e9-981c-b828e2db623e"} I0128 16:19:46.114006 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0128 16:19:46.114329 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-3cf1d132-9451-47e9-981c-b828e2db623e. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-3cf1d132-9451-47e9-981c-b828e2db623e to node k8s-agentpool-24544908-vmss000001 (vmState Succeeded). I0128 16:19:46.114359 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-3cf1d132-9451-47e9-981c-b828e2db623e to node k8s-agentpool-24544908-vmss000001 I0128 16:19:46.114393 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-3cf1d132-9451-47e9-981c-b828e2db623e lun 0 to node k8s-agentpool-24544908-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-3cf1d132-9451-47e9-981c-b828e2db623e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-3cf1d132-9451-47e9-981c-b828e2db623e false 0})] I0128 16:19:46.114426 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-3cf1d132-9451-47e9-981c-b828e2db623e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-3cf1d132-9451-47e9-981c-b828e2db623e false 0})]) I0128 16:19:46.260781 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-3cf1d132-9451-47e9-981c-b828e2db623e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-3cf1d132-9451-47e9-981c-b828e2db623e false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 16:19:56.389396 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000001) successfully I0128 16:19:56.389425 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000001) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 16:19:56.389443 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-3cf1d132-9451-47e9-981c-b828e2db623e attached to node 
k8s-agentpool-24544908-vmss000001. I0128 16:19:56.389455 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-3cf1d132-9451-47e9-981c-b828e2db623e to node k8s-agentpool-24544908-vmss000001 successfully I0128 16:19:56.389499 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.275184004 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-3cf1d132-9451-47e9-981c-b828e2db623e" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 16:19:56.389515 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 32 lines ... I0128 16:20:56.102762 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-24544908-vmss000001","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-ac1b7094-b293-4e6c-a5ea-e9dc28fe53a1","csi.storage.k8s.io/pvc/name":"pvc-mk9zw","csi.storage.k8s.io/pvc/namespace":"azuredisk-5356","requestedsizegib":"10","resourceGroup":"azuredisk-csi-driver-test-bb324370-9f27-11ed-9172-ae7499b6df38","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-bb324370-9f27-11ed-9172-ae7499b6df38/providers/Microsoft.Compute/disks/pvc-ac1b7094-b293-4e6c-a5ea-e9dc28fe53a1"} I0128 16:20:56.125525 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1238 I0128 16:20:56.125836 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-ac1b7094-b293-4e6c-a5ea-e9dc28fe53a1. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-bb324370-9f27-11ed-9172-ae7499b6df38/providers/Microsoft.Compute/disks/pvc-ac1b7094-b293-4e6c-a5ea-e9dc28fe53a1 to node k8s-agentpool-24544908-vmss000001 (vmState Succeeded). 
I0128 16:20:56.125869 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-bb324370-9f27-11ed-9172-ae7499b6df38/providers/Microsoft.Compute/disks/pvc-ac1b7094-b293-4e6c-a5ea-e9dc28fe53a1 to node k8s-agentpool-24544908-vmss000001 I0128 16:20:56.125903 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-bb324370-9f27-11ed-9172-ae7499b6df38/providers/Microsoft.Compute/disks/pvc-ac1b7094-b293-4e6c-a5ea-e9dc28fe53a1 lun 0 to node k8s-agentpool-24544908-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-bb324370-9f27-11ed-9172-ae7499b6df38/providers/microsoft.compute/disks/pvc-ac1b7094-b293-4e6c-a5ea-e9dc28fe53a1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ac1b7094-b293-4e6c-a5ea-e9dc28fe53a1 false 0})] I0128 16:20:56.125943 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-bb324370-9f27-11ed-9172-ae7499b6df38/providers/microsoft.compute/disks/pvc-ac1b7094-b293-4e6c-a5ea-e9dc28fe53a1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ac1b7094-b293-4e6c-a5ea-e9dc28fe53a1 false 0})]) I0128 16:20:56.279476 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-bb324370-9f27-11ed-9172-ae7499b6df38/providers/microsoft.compute/disks/pvc-ac1b7094-b293-4e6c-a5ea-e9dc28fe53a1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ac1b7094-b293-4e6c-a5ea-e9dc28fe53a1 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 16:21:06.469840 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000001) successfully I0128 16:21:06.469886 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000001) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 16:21:06.469906 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-bb324370-9f27-11ed-9172-ae7499b6df38/providers/Microsoft.Compute/disks/pvc-ac1b7094-b293-4e6c-a5ea-e9dc28fe53a1 attached to node k8s-agentpool-24544908-vmss000001. 
I0128 16:21:06.469921 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-bb324370-9f27-11ed-9172-ae7499b6df38/providers/Microsoft.Compute/disks/pvc-ac1b7094-b293-4e6c-a5ea-e9dc28fe53a1 to node k8s-agentpool-24544908-vmss000001 successfully I0128 16:21:06.469963 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.344130078 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-bb324370-9f27-11ed-9172-ae7499b6df38/providers/Microsoft.Compute/disks/pvc-ac1b7094-b293-4e6c-a5ea-e9dc28fe53a1" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 16:21:06.469980 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 47 lines ... I0128 16:22:19.468944 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-ed362e56-9f27-11ed-9172-ae7499b6df38/providers/Microsoft.Compute/disks/pvc-4ddac4b1-9192-474a-849b-3ff4852fe8ab to node k8s-agentpool-24544908-vmss000001 I0128 16:22:19.468978 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-ed362e56-9f27-11ed-9172-ae7499b6df38/providers/Microsoft.Compute/disks/pvc-4ddac4b1-9192-474a-849b-3ff4852fe8ab lun 0 to node k8s-agentpool-24544908-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-ed362e56-9f27-11ed-9172-ae7499b6df38/providers/microsoft.compute/disks/pvc-4ddac4b1-9192-474a-849b-3ff4852fe8ab:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4ddac4b1-9192-474a-849b-3ff4852fe8ab false 0})] I0128 16:22:19.469010 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-ed362e56-9f27-11ed-9172-ae7499b6df38/providers/microsoft.compute/disks/pvc-4ddac4b1-9192-474a-849b-3ff4852fe8ab:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4ddac4b1-9192-474a-849b-3ff4852fe8ab false 0})]) I0128 16:22:19.512814 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1238 I0128 16:22:19.513068 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-1f7e2650-124b-4ce5-9265-38032db38b84. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-eda63463-9f27-11ed-9172-ae7499b6df38/providers/Microsoft.Compute/disks/pvc-1f7e2650-124b-4ce5-9265-38032db38b84 to node k8s-agentpool-24544908-vmss000001 (vmState Succeeded). 
I0128 16:22:19.513098 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-eda63463-9f27-11ed-9172-ae7499b6df38/providers/Microsoft.Compute/disks/pvc-1f7e2650-124b-4ce5-9265-38032db38b84 to node k8s-agentpool-24544908-vmss000001 I0128 16:22:20.175347 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-ed362e56-9f27-11ed-9172-ae7499b6df38/providers/microsoft.compute/disks/pvc-4ddac4b1-9192-474a-849b-3ff4852fe8ab:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4ddac4b1-9192-474a-849b-3ff4852fe8ab false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 16:22:30.325727 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000001) successfully I0128 16:22:30.325763 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000001) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 16:22:30.325820 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-ed362e56-9f27-11ed-9172-ae7499b6df38/providers/Microsoft.Compute/disks/pvc-4ddac4b1-9192-474a-849b-3ff4852fe8ab attached to node k8s-agentpool-24544908-vmss000001. I0128 16:22:30.325843 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-ed362e56-9f27-11ed-9172-ae7499b6df38/providers/Microsoft.Compute/disks/pvc-4ddac4b1-9192-474a-849b-3ff4852fe8ab to node k8s-agentpool-24544908-vmss000001 successfully I0128 16:22:30.325935 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.856981663 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-ed362e56-9f27-11ed-9172-ae7499b6df38/providers/Microsoft.Compute/disks/pvc-4ddac4b1-9192-474a-849b-3ff4852fe8ab" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 16:22:30.325958 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0128 16:22:30.326155 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-eda63463-9f27-11ed-9172-ae7499b6df38/providers/Microsoft.Compute/disks/pvc-1f7e2650-124b-4ce5-9265-38032db38b84 lun 1 to node k8s-agentpool-24544908-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-eda63463-9f27-11ed-9172-ae7499b6df38/providers/microsoft.compute/disks/pvc-1f7e2650-124b-4ce5-9265-38032db38b84:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-1f7e2650-124b-4ce5-9265-38032db38b84 false 1})] I0128 16:22:30.326578 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk 
list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-eda63463-9f27-11ed-9172-ae7499b6df38/providers/microsoft.compute/disks/pvc-1f7e2650-124b-4ce5-9265-38032db38b84:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-1f7e2650-124b-4ce5-9265-38032db38b84 false 1})]) I0128 16:22:30.475522 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-eda63463-9f27-11ed-9172-ae7499b6df38/providers/microsoft.compute/disks/pvc-1f7e2650-124b-4ce5-9265-38032db38b84:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-1f7e2650-124b-4ce5-9265-38032db38b84 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 16:22:40.598421 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000001) successfully I0128 16:22:40.598454 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000001) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 16:22:40.598472 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-eda63463-9f27-11ed-9172-ae7499b6df38/providers/Microsoft.Compute/disks/pvc-1f7e2650-124b-4ce5-9265-38032db38b84 attached to node k8s-agentpool-24544908-vmss000001. I0128 16:22:40.598492 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-eda63463-9f27-11ed-9172-ae7499b6df38/providers/Microsoft.Compute/disks/pvc-1f7e2650-124b-4ce5-9265-38032db38b84 to node k8s-agentpool-24544908-vmss000001 successfully I0128 16:22:40.598529 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=21.085456866 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-eda63463-9f27-11ed-9172-ae7499b6df38/providers/Microsoft.Compute/disks/pvc-1f7e2650-124b-4ce5-9265-38032db38b84" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 16:22:40.598541 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} ... skipping 67 lines ... I0128 16:25:53.528017 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1207 I0128 16:25:53.570235 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 24989 I0128 16:25:53.577203 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-954f6546-ffdf-412f-ab76-66fb918e35b9. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-954f6546-ffdf-412f-ab76-66fb918e35b9 to node k8s-agentpool-24544908-vmss000001 (vmState Succeeded). 
I0128 16:25:53.577239 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-954f6546-ffdf-412f-ab76-66fb918e35b9 to node k8s-agentpool-24544908-vmss000001 I0128 16:25:53.577287 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-954f6546-ffdf-412f-ab76-66fb918e35b9 lun 0 to node k8s-agentpool-24544908-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-954f6546-ffdf-412f-ab76-66fb918e35b9:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-954f6546-ffdf-412f-ab76-66fb918e35b9 false 0})] I0128 16:25:53.577318 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-954f6546-ffdf-412f-ab76-66fb918e35b9:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-954f6546-ffdf-412f-ab76-66fb918e35b9 false 0})]) I0128 16:25:53.704833 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-954f6546-ffdf-412f-ab76-66fb918e35b9:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-954f6546-ffdf-412f-ab76-66fb918e35b9 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 16:26:29.006607 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000001) successfully I0128 16:26:29.006666 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000001) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 16:26:29.006897 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-954f6546-ffdf-412f-ab76-66fb918e35b9 attached to node k8s-agentpool-24544908-vmss000001. I0128 16:26:29.007066 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-954f6546-ffdf-412f-ab76-66fb918e35b9 to node k8s-agentpool-24544908-vmss000001 successfully I0128 16:26:29.007250 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=35.478844754 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-954f6546-ffdf-412f-ab76-66fb918e35b9" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 16:26:29.007350 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 32 lines ... 
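Note (illustrative): the "Observed Request Latency" entries above record attach latencies between roughly 10s and 35s for azuredisk_csi_driver_controller_publish_volume. A small stdlib-only Go sketch that aggregates those entries from a saved copy of this log; the file name is an assumption, and the regular expression only targets the latency_seconds/request fields shown in the lines above.

// Illustrative only: count, mean and max latency per request type from
// "Observed Request Latency" log lines like the ones above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strconv"
)

func main() {
	re := regexp.MustCompile(`latency_seconds=([0-9.e+-]+) request="([^"]+)"`)

	f, err := os.Open("csi-azuredisk-controller.log") // hypothetical file name
	if err != nil {
		panic(err)
	}
	defer f.Close()

	type agg struct {
		count int
		sum   float64
		max   float64
	}
	stats := map[string]*agg{}

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // log lines can be long
	for sc.Scan() {
		m := re.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		v, err := strconv.ParseFloat(m[1], 64)
		if err != nil {
			continue
		}
		a := stats[m[2]]
		if a == nil {
			a = &agg{}
			stats[m[2]] = a
		}
		a.count++
		a.sum += v
		if v > a.max {
			a.max = v
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}

	for req, a := range stats {
		fmt.Printf("%s: n=%d mean=%.1fs max=%.1fs\n",
			req, a.count, a.sum/float64(a.count), a.max)
	}
}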
I0128 16:27:47.752223 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-24544908-vmss000001","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-b8f7a2e6-f9b4-4923-a35b-5052f0605589","csi.storage.k8s.io/pvc/name":"pvc-jxds7","csi.storage.k8s.io/pvc/namespace":"azuredisk-2888","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-b8f7a2e6-f9b4-4923-a35b-5052f0605589"} I0128 16:27:47.773899 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0128 16:27:47.774413 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-b8f7a2e6-f9b4-4923-a35b-5052f0605589. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-b8f7a2e6-f9b4-4923-a35b-5052f0605589 to node k8s-agentpool-24544908-vmss000001 (vmState Succeeded). I0128 16:27:47.774457 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-b8f7a2e6-f9b4-4923-a35b-5052f0605589 to node k8s-agentpool-24544908-vmss000001 I0128 16:27:47.774627 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-b8f7a2e6-f9b4-4923-a35b-5052f0605589 lun 0 to node k8s-agentpool-24544908-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-b8f7a2e6-f9b4-4923-a35b-5052f0605589:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-b8f7a2e6-f9b4-4923-a35b-5052f0605589 false 0})] I0128 16:27:47.774810 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-b8f7a2e6-f9b4-4923-a35b-5052f0605589:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-b8f7a2e6-f9b4-4923-a35b-5052f0605589 false 0})]) I0128 16:27:47.940383 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-b8f7a2e6-f9b4-4923-a35b-5052f0605589:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-b8f7a2e6-f9b4-4923-a35b-5052f0605589 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 16:28:03.941154 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000001) successfully I0128 16:28:03.941199 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000001) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 16:28:03.941231 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-b8f7a2e6-f9b4-4923-a35b-5052f0605589 attached to node 
k8s-agentpool-24544908-vmss000001. I0128 16:28:03.941253 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-b8f7a2e6-f9b4-4923-a35b-5052f0605589 to node k8s-agentpool-24544908-vmss000001 successfully I0128 16:28:03.941339 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=16.166893317 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-b8f7a2e6-f9b4-4923-a35b-5052f0605589" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 16:28:03.941359 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 11 lines ... I0128 16:28:28.019167 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-24544908-vmss000001","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-8f503aa2-9d41-4cd6-9eea-a4586e82cf79","csi.storage.k8s.io/pvc/name":"pvc-s6qbs","csi.storage.k8s.io/pvc/namespace":"azuredisk-2888","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-8f503aa2-9d41-4cd6-9eea-a4586e82cf79"} I0128 16:28:28.042935 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1191 I0128 16:28:28.043195 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-8f503aa2-9d41-4cd6-9eea-a4586e82cf79. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-8f503aa2-9d41-4cd6-9eea-a4586e82cf79 to node k8s-agentpool-24544908-vmss000001 (vmState Succeeded). 
I0128 16:28:28.043222 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-8f503aa2-9d41-4cd6-9eea-a4586e82cf79 to node k8s-agentpool-24544908-vmss000001 I0128 16:28:28.043255 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-8f503aa2-9d41-4cd6-9eea-a4586e82cf79 lun 1 to node k8s-agentpool-24544908-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-8f503aa2-9d41-4cd6-9eea-a4586e82cf79:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8f503aa2-9d41-4cd6-9eea-a4586e82cf79 false 1})] I0128 16:28:28.043298 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-8f503aa2-9d41-4cd6-9eea-a4586e82cf79:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8f503aa2-9d41-4cd6-9eea-a4586e82cf79 false 1})]) I0128 16:28:28.211874 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-8f503aa2-9d41-4cd6-9eea-a4586e82cf79:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8f503aa2-9d41-4cd6-9eea-a4586e82cf79 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 16:28:38.312688 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000001) successfully I0128 16:28:38.314555 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000001) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 16:28:38.314584 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-8f503aa2-9d41-4cd6-9eea-a4586e82cf79 attached to node k8s-agentpool-24544908-vmss000001. I0128 16:28:38.314601 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-8f503aa2-9d41-4cd6-9eea-a4586e82cf79 to node k8s-agentpool-24544908-vmss000001 successfully I0128 16:28:38.314668 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.271464527 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-8f503aa2-9d41-4cd6-9eea-a4586e82cf79" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 16:28:38.314680 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} ... skipping 11 lines ... 
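Note (illustrative): the two attaches to vmss000001 above land on LUN 0 and LUN 1. The real assignment happens inside the Azure cloud provider code that the driver vendors; the sketch below only shows the general lowest-free-LUN idea, and the maxLuns value is an assumption rather than any VM size's actual data-disk limit.

// Illustrative only: pick the lowest free LUN given the LUNs already in use.
package main

import (
	"errors"
	"fmt"
)

func nextFreeLUN(used map[int]bool, maxLuns int) (int, error) {
	for lun := 0; lun < maxLuns; lun++ {
		if !used[lun] {
			return lun, nil
		}
	}
	return 0, errors.New("no free LUN on node")
}

func main() {
	used := map[int]bool{0: true} // one disk already attached at LUN 0
	lun, err := nextFreeLUN(used, 32)
	if err != nil {
		panic(err)
	}
	fmt.Println("next attach would use LUN", lun) // prints: 1
}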
I0128 16:28:50.287734 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-24544908-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83","csi.storage.k8s.io/pvc/name":"pvc-25wwm","csi.storage.k8s.io/pvc/namespace":"azuredisk-2888","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83"} I0128 16:28:50.308694 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0128 16:28:50.309052 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83 to node k8s-agentpool-24544908-vmss000000 (vmState Succeeded). I0128 16:28:50.309092 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83 to node k8s-agentpool-24544908-vmss000000 I0128 16:28:50.309136 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83 lun 0 to node k8s-agentpool-24544908-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83 false 0})] I0128 16:28:50.309180 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83 false 0})]) I0128 16:28:50.457563 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 16:29:00.566698 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000000) successfully I0128 16:29:00.566750 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000000) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 16:29:00.566775 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83 attached to node 
k8s-agentpool-24544908-vmss000000. I0128 16:29:00.566793 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83 to node k8s-agentpool-24544908-vmss000000 successfully I0128 16:29:00.566847 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.25779665 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83" node="k8s-agentpool-24544908-vmss000000" result_code="succeeded" I0128 16:29:00.566864 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 16 lines ... I0128 16:29:53.370676 1 azure_controller_common.go:398] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83 from node k8s-agentpool-24544908-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83:pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83] E0128 16:29:53.370746 1 azure_controller_vmss.go:202] detach azure disk on node(k8s-agentpool-24544908-vmss000000): disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83:pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83]) not found I0128 16:29:53.370781 1 azure_controller_vmss.go:239] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000000) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83:pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83]) I0128 16:29:55.044214 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0128 16:29:55.044238 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83"} I0128 16:29:55.044326 1 controllerserver.go:317] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83) I0128 16:29:55.044349 1 controllerserver.go:319] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83) returned with failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83) since it's in attaching or detaching state I0128 16:29:55.044411 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=4.4801e-05 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" 
volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83" result_code="failed_csi_driver_controller_delete_volume" E0128 16:29:55.044430 1 utils.go:82] GRPC error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83) since it's in attaching or detaching state I0128 16:29:58.602416 1 azure_controller_vmss.go:252] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000000) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83:pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83]) returned with <nil> I0128 16:29:58.602471 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000000) successfully I0128 16:29:58.602672 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000000) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 16:29:58.602691 1 azure_controller_common.go:422] azureDisk - detach disk(pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83) succeeded I0128 16:29:58.602705 1 controllerserver.go:480] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83 from node k8s-agentpool-24544908-vmss000000 successfully I0128 16:29:58.602836 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=5.23229466 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-4ecb1615-1c8d-4577-bb43-8ed823a03b83" node="k8s-agentpool-24544908-vmss000000" result_code="succeeded" ... skipping 63 lines ... I0128 16:32:29.316663 1 azure_vmss_cache.go:327] refresh the cache of NonVmssUniformNodesCache in rg map[kubetest-g59foizt:{}] I0128 16:32:29.347486 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 12 I0128 16:32:29.347605 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-d58d3730-a75b-4fa8-8df3-23bf4e21d2a4. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-d58d3730-a75b-4fa8-8df3-23bf4e21d2a4 to node k8s-agentpool-24544908-vmss000001 (vmState Succeeded). 
I0128 16:32:29.347651 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-d58d3730-a75b-4fa8-8df3-23bf4e21d2a4 to node k8s-agentpool-24544908-vmss000001 I0128 16:32:29.347795 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-d58d3730-a75b-4fa8-8df3-23bf4e21d2a4 lun 0 to node k8s-agentpool-24544908-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-d58d3730-a75b-4fa8-8df3-23bf4e21d2a4:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-d58d3730-a75b-4fa8-8df3-23bf4e21d2a4 false 0})] I0128 16:32:29.347860 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-d58d3730-a75b-4fa8-8df3-23bf4e21d2a4:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-d58d3730-a75b-4fa8-8df3-23bf4e21d2a4 false 0})]) I0128 16:32:29.552130 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-d58d3730-a75b-4fa8-8df3-23bf4e21d2a4:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-d58d3730-a75b-4fa8-8df3-23bf4e21d2a4 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 16:32:39.686442 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000001) successfully I0128 16:32:39.686497 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000001) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 16:32:39.686530 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-d58d3730-a75b-4fa8-8df3-23bf4e21d2a4 attached to node k8s-agentpool-24544908-vmss000001. I0128 16:32:39.686604 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-d58d3730-a75b-4fa8-8df3-23bf4e21d2a4 to node k8s-agentpool-24544908-vmss000001 successfully I0128 16:32:39.686759 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.369985273 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-d58d3730-a75b-4fa8-8df3-23bf4e21d2a4" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 16:32:39.686855 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 57 lines ... 
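Note (illustrative): a few entries earlier, DeleteVolume failed with "disk ... is in attaching or detaching state" and only succeeded once the detach completed. In the e2e run that retry comes from the CSI sidecar re-issuing the call; the sketch below is just the generic caller-side retry-with-backoff pattern, and deleteDisk is a hypothetical stand-in, not the driver's RPC.

// Illustrative only: retry a transient "attaching or detaching state" error.
package main

import (
	"errors"
	"fmt"
	"strings"
	"time"
)

var errBusy = errors.New("disk is in attaching or detaching state")

// deleteDisk is a hypothetical stand-in for the real DeleteVolume call; it
// simulates a disk that stays busy for the first two attempts.
func deleteDisk(attempt int) error {
	if attempt < 3 {
		return errBusy
	}
	return nil
}

func retryDelete(maxAttempts int, base time.Duration) error {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		err := deleteDisk(attempt)
		if err == nil {
			return nil
		}
		// Only retry the transient state; surface anything else.
		if !strings.Contains(err.Error(), "attaching or detaching state") {
			return err
		}
		delay := base * time.Duration(1<<uint(attempt-1)) // exponential backoff
		fmt.Printf("attempt %d failed (%v); retrying in %s\n", attempt, err, delay)
		time.Sleep(delay)
	}
	return fmt.Errorf("gave up after %d attempts", maxAttempts)
}

func main() {
	if err := retryDelete(5, 200*time.Millisecond); err != nil {
		panic(err)
	}
	fmt.Println("delete succeeded")
}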
I0128 16:35:12.354540 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-24544908-vmss000001","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7","csi.storage.k8s.io/pvc/name":"pvc-b6lbd","csi.storage.k8s.io/pvc/namespace":"azuredisk-59","fsType":"xfs","requestedsizegib":"10","skuName":"Standard_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7"} I0128 16:35:12.379990 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1217 I0128 16:35:12.380278 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7 to node k8s-agentpool-24544908-vmss000001 (vmState Succeeded). I0128 16:35:12.380309 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7 to node k8s-agentpool-24544908-vmss000001 I0128 16:35:12.380342 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7 lun 0 to node k8s-agentpool-24544908-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7 false 0})] I0128 16:35:12.380382 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7 false 0})]) I0128 16:35:12.527385 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 16:35:22.704199 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000001) successfully I0128 16:35:22.704254 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000001) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 16:35:22.704288 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7 
attached to node k8s-agentpool-24544908-vmss000001. I0128 16:35:22.704315 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7 to node k8s-agentpool-24544908-vmss000001 successfully I0128 16:35:22.704382 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.324089824 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 16:35:22.704414 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 36 lines ... I0128 16:50:54.473170 1 azure_vmss_cache.go:327] refresh the cache of NonVmssUniformNodesCache in rg map[kubetest-g59foizt:{}] I0128 16:50:54.527409 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 12 I0128 16:50:54.527542 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-01ba0080-221a-4049-871f-6c10509a024d. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-01ba0080-221a-4049-871f-6c10509a024d to node k8s-agentpool-24544908-vmss000001 (vmState Succeeded). I0128 16:50:54.527579 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-01ba0080-221a-4049-871f-6c10509a024d to node k8s-agentpool-24544908-vmss000001 I0128 16:50:54.527634 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-01ba0080-221a-4049-871f-6c10509a024d lun 0 to node k8s-agentpool-24544908-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-01ba0080-221a-4049-871f-6c10509a024d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-01ba0080-221a-4049-871f-6c10509a024d false 0})] I0128 16:50:54.527689 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-01ba0080-221a-4049-871f-6c10509a024d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-01ba0080-221a-4049-871f-6c10509a024d false 0})]) I0128 16:50:54.900546 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-01ba0080-221a-4049-871f-6c10509a024d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-01ba0080-221a-4049-871f-6c10509a024d false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 16:51:04.998360 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000001) successfully I0128 16:51:04.998409 1 azure_vmss_cache.go:313] 
updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000001) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 16:51:04.998440 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-01ba0080-221a-4049-871f-6c10509a024d attached to node k8s-agentpool-24544908-vmss000001. I0128 16:51:04.998466 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-01ba0080-221a-4049-871f-6c10509a024d to node k8s-agentpool-24544908-vmss000001 successfully I0128 16:51:04.998722 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.525325164 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-01ba0080-221a-4049-871f-6c10509a024d" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 16:51:04.998754 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 74 lines ... I0128 17:06:47.144462 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-84011bdf-3e14-4639-a482-e5a3260cc1a2 lun 0 to node k8s-agentpool-24544908-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-84011bdf-3e14-4639-a482-e5a3260cc1a2:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-84011bdf-3e14-4639-a482-e5a3260cc1a2 false 0})] I0128 17:06:47.144495 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-84011bdf-3e14-4639-a482-e5a3260cc1a2:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-84011bdf-3e14-4639-a482-e5a3260cc1a2 false 0})]) I0128 17:06:47.144777 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-64b008d1-cb20-4cd1-8f6e-144c79b2fe2a. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-64b008d1-cb20-4cd1-8f6e-144c79b2fe2a to node k8s-agentpool-24544908-vmss000001 (vmState Succeeded). I0128 17:06:47.144806 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-64b008d1-cb20-4cd1-8f6e-144c79b2fe2a to node k8s-agentpool-24544908-vmss000001 I0128 17:06:47.144837 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-49ee30b9-5ee4-4371-9865-ddca32c17d93. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-49ee30b9-5ee4-4371-9865-ddca32c17d93 to node k8s-agentpool-24544908-vmss000001 (vmState Succeeded). 
I0128 17:06:47.144876 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-49ee30b9-5ee4-4371-9865-ddca32c17d93 to node k8s-agentpool-24544908-vmss000001 I0128 17:06:48.516648 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-84011bdf-3e14-4639-a482-e5a3260cc1a2:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-84011bdf-3e14-4639-a482-e5a3260cc1a2 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 17:06:53.593802 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000001) successfully I0128 17:06:53.593839 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000001) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 17:06:53.593904 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-84011bdf-3e14-4639-a482-e5a3260cc1a2 attached to node k8s-agentpool-24544908-vmss000001. I0128 17:06:53.593933 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-84011bdf-3e14-4639-a482-e5a3260cc1a2 to node k8s-agentpool-24544908-vmss000001 successfully I0128 17:06:53.594015 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=6.470125675 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-84011bdf-3e14-4639-a482-e5a3260cc1a2" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 17:06:53.594079 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 4 lines ... I0128 17:06:53.627760 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1421 I0128 17:06:53.628388 1 azure_controller_common.go:516] azureDisk - find disk: lun 0 name pvc-84011bdf-3e14-4639-a482-e5a3260cc1a2 uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-84011bdf-3e14-4639-a482-e5a3260cc1a2 I0128 17:06:53.628429 1 controllerserver.go:383] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-84011bdf-3e14-4639-a482-e5a3260cc1a2 to node k8s-agentpool-24544908-vmss000001 (vmState Succeeded). I0128 17:06:53.628462 1 controllerserver.go:398] Attach operation is successful. volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-84011bdf-3e14-4639-a482-e5a3260cc1a2 is already attached to node k8s-agentpool-24544908-vmss000001 at lun 0. 
I0128 17:06:53.628568 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=0.000125401 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-84011bdf-3e14-4639-a482-e5a3260cc1a2" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 17:06:53.628908 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0128 17:06:53.925310 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-49ee30b9-5ee4-4371-9865-ddca32c17d93:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-49ee30b9-5ee4-4371-9865-ddca32c17d93 false 2}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-64b008d1-cb20-4cd1-8f6e-144c79b2fe2a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-64b008d1-cb20-4cd1-8f6e-144c79b2fe2a false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 17:07:04.051321 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000001) successfully I0128 17:07:04.051377 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000001) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 17:07:04.051432 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-64b008d1-cb20-4cd1-8f6e-144c79b2fe2a attached to node k8s-agentpool-24544908-vmss000001. I0128 17:07:04.051461 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-64b008d1-cb20-4cd1-8f6e-144c79b2fe2a to node k8s-agentpool-24544908-vmss000001 successfully I0128 17:07:04.051540 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=16.927649555 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-64b008d1-cb20-4cd1-8f6e-144c79b2fe2a" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 17:07:04.051566 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} ... skipping 98 lines ... 
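Note (illustrative): the repeated publish call above returns in about a tenth of a millisecond because GetDiskLun finds the disk already attached and the controller short-circuits with "already attached ... at lun 0". The sketch below only illustrates that idempotency check; the node type, attachedLUN and the map-based attach are hypothetical stand-ins for the driver's lookup and the Azure API call.

// Illustrative only: reuse an existing attachment instead of re-attaching.
package main

import "fmt"

type node struct {
	disks map[string]int // disk URI -> LUN
}

// attachedLUN reports whether the disk is already attached and at which LUN.
func (n *node) attachedLUN(diskURI string) (int, bool) {
	lun, ok := n.disks[diskURI]
	return lun, ok
}

// attach returns the LUN for diskURI, taking the fast path seen in the log
// when the disk is already attached.
func (n *node) attach(diskURI string, nextLUN int) int {
	if lun, ok := n.attachedLUN(diskURI); ok {
		fmt.Println("already attached at lun", lun)
		return lun
	}
	n.disks[diskURI] = nextLUN // stand-in for the real VMSS update call
	return nextLUN
}

func main() {
	n := &node{disks: map[string]int{}}
	uri := "/subscriptions/.../disks/pvc-example" // hypothetical URI
	fmt.Println("first attach -> lun", n.attach(uri, 0))
	fmt.Println("second attach -> lun", n.attach(uri, 1)) // reuses lun 0
}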
I0128 17:08:28.304634 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-92c9ce85-3cda-4d3f-b41b-85158dc9137d to node k8s-agentpool-24544908-vmss000001 I0128 17:08:28.304666 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-92c9ce85-3cda-4d3f-b41b-85158dc9137d lun 0 to node k8s-agentpool-24544908-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-92c9ce85-3cda-4d3f-b41b-85158dc9137d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-92c9ce85-3cda-4d3f-b41b-85158dc9137d false 0})] I0128 17:08:28.304702 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-92c9ce85-3cda-4d3f-b41b-85158dc9137d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-92c9ce85-3cda-4d3f-b41b-85158dc9137d false 0})]) I0128 17:08:28.323329 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0128 17:08:28.323987 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-dcbb48c8-ba2d-485a-b2b2-816f5549426c. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-dcbb48c8-ba2d-485a-b2b2-816f5549426c to node k8s-agentpool-24544908-vmss000001 (vmState Succeeded). I0128 17:08:28.324032 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-dcbb48c8-ba2d-485a-b2b2-816f5549426c to node k8s-agentpool-24544908-vmss000001 I0128 17:08:28.472875 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-92c9ce85-3cda-4d3f-b41b-85158dc9137d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-92c9ce85-3cda-4d3f-b41b-85158dc9137d false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 17:08:38.609081 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000001) successfully I0128 17:08:38.609127 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000001) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 17:08:38.609188 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-92c9ce85-3cda-4d3f-b41b-85158dc9137d attached to node k8s-agentpool-24544908-vmss000001. 
I0128 17:08:38.609211 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-92c9ce85-3cda-4d3f-b41b-85158dc9137d to node k8s-agentpool-24544908-vmss000001 successfully I0128 17:08:38.609313 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.30465125 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-92c9ce85-3cda-4d3f-b41b-85158dc9137d" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 17:08:38.609367 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 4 lines ... I0128 17:08:38.673764 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1421 I0128 17:08:38.674136 1 azure_controller_common.go:516] azureDisk - find disk: lun 0 name pvc-92c9ce85-3cda-4d3f-b41b-85158dc9137d uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-92c9ce85-3cda-4d3f-b41b-85158dc9137d I0128 17:08:38.674160 1 controllerserver.go:383] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-92c9ce85-3cda-4d3f-b41b-85158dc9137d to node k8s-agentpool-24544908-vmss000001 (vmState Succeeded). I0128 17:08:38.674174 1 controllerserver.go:398] Attach operation is successful. volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-92c9ce85-3cda-4d3f-b41b-85158dc9137d is already attached to node k8s-agentpool-24544908-vmss000001 at lun 0. 
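The last few entries above show the idempotent path of ControllerPublishVolume: a second publish for pvc-92c9ce85 finds via GetDiskLun that the disk is already attached at LUN 0, so the call returns the existing LUN without another ARM update, which is why its latency entry a few lines below is on the order of 1e-4 s rather than ~10 s. A minimal sketch of that decision, using hypothetical helper names rather than the driver's real code:

```go
// Minimal sketch of an idempotent publish decision (hypothetical types and
// helpers, not the driver's implementation).
package main

import "fmt"

// findDiskLUN returns the LUN if diskURI is already attached to the node, or -1.
func findDiskLUN(diskURI string, attached map[string]int32) int32 {
	if lun, ok := attached[diskURI]; ok {
		return lun
	}
	return -1
}

// publishVolume returns the LUN to report back in publish_context.
func publishVolume(diskURI string, attached map[string]int32,
	attachDisk func(diskURI string) (int32, error)) (int32, error) {
	if lun := findDiskLUN(diskURI, attached); lun >= 0 {
		return lun, nil // fast path: already attached, no VM update needed
	}
	return attachDisk(diskURI) // slow path: issue the VMSS attach and wait
}

func main() {
	attached := map[string]int32{"/subscriptions/.../disks/pvc-demo": 0}
	lun, _ := publishVolume("/subscriptions/.../disks/pvc-demo", attached,
		func(string) (int32, error) { return 1, nil })
	fmt.Println("publish_context LUN:", lun) // 0 — the existing attachment is reused
}
```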
I0128 17:08:38.674292 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=9.8601e-05 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-92c9ce85-3cda-4d3f-b41b-85158dc9137d" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 17:08:38.674310 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0128 17:08:38.806308 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-dcbb48c8-ba2d-485a-b2b2-816f5549426c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-dcbb48c8-ba2d-485a-b2b2-816f5549426c false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 17:08:48.994894 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000001) successfully I0128 17:08:48.994935 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000001) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 17:08:48.994987 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-dcbb48c8-ba2d-485a-b2b2-816f5549426c attached to node k8s-agentpool-24544908-vmss000001. I0128 17:08:48.995007 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-dcbb48c8-ba2d-485a-b2b2-816f5549426c to node k8s-agentpool-24544908-vmss000001 successfully I0128 17:08:48.995103 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=20.671085075 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-dcbb48c8-ba2d-485a-b2b2-816f5549426c" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 17:08:48.995127 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} ... skipping 74 lines ... 
I0128 17:10:31.222850 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-24544908-vmss000001","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-1c2fca7a-1c70-4610-b4f7-ca5d4b136b69","csi.storage.k8s.io/pvc/name":"pvc-8qjcp","csi.storage.k8s.io/pvc/namespace":"azuredisk-8582","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-1c2fca7a-1c70-4610-b4f7-ca5d4b136b69"} I0128 17:10:31.244923 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0128 17:10:31.245184 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-1c2fca7a-1c70-4610-b4f7-ca5d4b136b69. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-1c2fca7a-1c70-4610-b4f7-ca5d4b136b69 to node k8s-agentpool-24544908-vmss000001 (vmState Succeeded). I0128 17:10:31.245213 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-1c2fca7a-1c70-4610-b4f7-ca5d4b136b69 to node k8s-agentpool-24544908-vmss000001 I0128 17:10:31.245245 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-1c2fca7a-1c70-4610-b4f7-ca5d4b136b69 lun 0 to node k8s-agentpool-24544908-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-1c2fca7a-1c70-4610-b4f7-ca5d4b136b69:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-1c2fca7a-1c70-4610-b4f7-ca5d4b136b69 false 0})] I0128 17:10:31.245303 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-1c2fca7a-1c70-4610-b4f7-ca5d4b136b69:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-1c2fca7a-1c70-4610-b4f7-ca5d4b136b69 false 0})]) I0128 17:10:31.368931 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-1c2fca7a-1c70-4610-b4f7-ca5d4b136b69:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-1c2fca7a-1c70-4610-b4f7-ca5d4b136b69 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 17:10:41.483508 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000001) successfully I0128 17:10:41.483538 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000001) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 17:10:41.483555 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-1c2fca7a-1c70-4610-b4f7-ca5d4b136b69 attached to node 
k8s-agentpool-24544908-vmss000001. I0128 17:10:41.483568 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-1c2fca7a-1c70-4610-b4f7-ca5d4b136b69 to node k8s-agentpool-24544908-vmss000001 successfully I0128 17:10:41.483606 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.238423921 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-1c2fca7a-1c70-4610-b4f7-ca5d4b136b69" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 17:10:41.483626 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 42 lines ... I0128 17:11:12.347652 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-24544908-vmss000001","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-5b32988f-9e3e-454d-8300-25097e518575","csi.storage.k8s.io/pvc/name":"pvc-mfrtf","csi.storage.k8s.io/pvc/namespace":"azuredisk-8582","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-5b32988f-9e3e-454d-8300-25097e518575"} I0128 17:11:12.378132 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1501 I0128 17:11:12.378878 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-5b32988f-9e3e-454d-8300-25097e518575. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-5b32988f-9e3e-454d-8300-25097e518575 to node k8s-agentpool-24544908-vmss000001 (vmState Succeeded). 
I0128 17:11:12.379119 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-5b32988f-9e3e-454d-8300-25097e518575 to node k8s-agentpool-24544908-vmss000001 I0128 17:11:12.379251 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-5b32988f-9e3e-454d-8300-25097e518575 lun 0 to node k8s-agentpool-24544908-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-5b32988f-9e3e-454d-8300-25097e518575:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5b32988f-9e3e-454d-8300-25097e518575 false 0})] I0128 17:11:12.379360 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-5b32988f-9e3e-454d-8300-25097e518575:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5b32988f-9e3e-454d-8300-25097e518575 false 0})]) I0128 17:11:12.552252 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-5b32988f-9e3e-454d-8300-25097e518575:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5b32988f-9e3e-454d-8300-25097e518575 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 17:11:22.683191 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000001) successfully I0128 17:11:22.683243 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000001) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 17:11:22.683277 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-5b32988f-9e3e-454d-8300-25097e518575 attached to node k8s-agentpool-24544908-vmss000001. I0128 17:11:22.683304 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-5b32988f-9e3e-454d-8300-25097e518575 to node k8s-agentpool-24544908-vmss000001 successfully I0128 17:11:22.683369 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.304481787 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-5b32988f-9e3e-454d-8300-25097e518575" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 17:11:22.683394 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 46 lines ... 
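Before each VMSS update above, the controller has to choose an unused LUN on the target VM ("Trying to attach volume ... lun 0 ..."). A minimal way to express that choice, assuming the lowest free slot is taken and that the per-VM-size data-disk limit is passed in (hypothetical code, not the driver's):

```go
// Sketch of free-LUN selection on a node; maxLUNs stands in for the VM size's
// data-disk limit. Hypothetical helper, not the driver's implementation.
package main

import "fmt"

func nextFreeLUN(used map[int32]bool, maxLUNs int32) (int32, error) {
	for lun := int32(0); lun < maxLUNs; lun++ {
		if !used[lun] {
			return lun, nil
		}
	}
	return -1, fmt.Errorf("no free LUN: all %d data-disk slots are in use", maxLUNs)
}

func main() {
	used := map[int32]bool{0: true, 2: true} // LUNs already taken on the node
	lun, err := nextFreeLUN(used, 8)
	fmt.Println(lun, err) // 1 <nil>
}
```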
I0128 17:13:46.088574 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-24544908-vmss000001","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-18a43891-4fc1-477a-afc7-d559dfeed026","csi.storage.k8s.io/pvc/name":"pvc-z7wbp","csi.storage.k8s.io/pvc/namespace":"azuredisk-7726","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-18a43891-4fc1-477a-afc7-d559dfeed026"} I0128 17:13:46.130556 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0128 17:13:46.130806 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-18a43891-4fc1-477a-afc7-d559dfeed026. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-18a43891-4fc1-477a-afc7-d559dfeed026 to node k8s-agentpool-24544908-vmss000001 (vmState Succeeded). I0128 17:13:46.130823 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-18a43891-4fc1-477a-afc7-d559dfeed026 to node k8s-agentpool-24544908-vmss000001 I0128 17:13:46.130850 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-18a43891-4fc1-477a-afc7-d559dfeed026 lun 0 to node k8s-agentpool-24544908-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-18a43891-4fc1-477a-afc7-d559dfeed026:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-18a43891-4fc1-477a-afc7-d559dfeed026 false 0})] I0128 17:13:46.130881 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-18a43891-4fc1-477a-afc7-d559dfeed026:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-18a43891-4fc1-477a-afc7-d559dfeed026 false 0})]) I0128 17:13:46.283551 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-18a43891-4fc1-477a-afc7-d559dfeed026:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-18a43891-4fc1-477a-afc7-d559dfeed026 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 17:14:01.425163 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000001) successfully I0128 17:14:01.425199 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000001) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 17:14:01.425221 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-18a43891-4fc1-477a-afc7-d559dfeed026 attached to node 
k8s-agentpool-24544908-vmss000001. I0128 17:14:01.425235 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-18a43891-4fc1-477a-afc7-d559dfeed026 to node k8s-agentpool-24544908-vmss000001 successfully I0128 17:14:01.425295 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=15.294466691 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-18a43891-4fc1-477a-afc7-d559dfeed026" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 17:14:01.425344 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 29 lines ... I0128 17:14:28.471560 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-24544908-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-1066338d-eddd-4163-95e4-37ac3b29e050","csi.storage.k8s.io/pvc/name":"pvc-p6mwm","csi.storage.k8s.io/pvc/namespace":"azuredisk-7726","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-1066338d-eddd-4163-95e4-37ac3b29e050"} I0128 17:14:28.505228 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1501 I0128 17:14:28.505928 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-1066338d-eddd-4163-95e4-37ac3b29e050. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-1066338d-eddd-4163-95e4-37ac3b29e050 to node k8s-agentpool-24544908-vmss000000 (vmState Succeeded). 
I0128 17:14:28.505974 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-1066338d-eddd-4163-95e4-37ac3b29e050 to node k8s-agentpool-24544908-vmss000000 I0128 17:14:28.506175 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-1066338d-eddd-4163-95e4-37ac3b29e050 lun 0 to node k8s-agentpool-24544908-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-1066338d-eddd-4163-95e4-37ac3b29e050:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-1066338d-eddd-4163-95e4-37ac3b29e050 false 0})] I0128 17:14:28.506387 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-1066338d-eddd-4163-95e4-37ac3b29e050:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-1066338d-eddd-4163-95e4-37ac3b29e050 false 0})]) I0128 17:14:28.720852 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-1066338d-eddd-4163-95e4-37ac3b29e050:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-1066338d-eddd-4163-95e4-37ac3b29e050 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 17:14:38.970320 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000000) successfully I0128 17:14:38.970358 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000000) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 17:14:38.970379 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-1066338d-eddd-4163-95e4-37ac3b29e050 attached to node k8s-agentpool-24544908-vmss000000. I0128 17:14:38.970392 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-1066338d-eddd-4163-95e4-37ac3b29e050 to node k8s-agentpool-24544908-vmss000000 successfully I0128 17:14:38.970433 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.464522282 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-1066338d-eddd-4163-95e4-37ac3b29e050" node="k8s-agentpool-24544908-vmss000000" result_code="succeeded" I0128 17:14:38.970447 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 87 lines ... 
I0128 17:17:02.143709 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1217 I0128 17:17:02.144204 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-175702ea-cd0e-40e9-9b68-155883d5f734. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-175702ea-cd0e-40e9-9b68-155883d5f734 to node k8s-agentpool-24544908-vmss000001 (vmState Succeeded). I0128 17:17:02.144265 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-175702ea-cd0e-40e9-9b68-155883d5f734 to node k8s-agentpool-24544908-vmss000001 I0128 17:17:02.149939 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1222 I0128 17:17:02.150421 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-931ee5df-dacc-4233-a05c-c0a6a66a12cd. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-931ee5df-dacc-4233-a05c-c0a6a66a12cd to node k8s-agentpool-24544908-vmss000001 (vmState Succeeded). I0128 17:17:02.150466 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-931ee5df-dacc-4233-a05c-c0a6a66a12cd to node k8s-agentpool-24544908-vmss000001 I0128 17:17:03.099210 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-14ccc19b-a0c2-4ab1-b525-2cbddebd6000:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-14ccc19b-a0c2-4ab1-b525-2cbddebd6000 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 17:17:13.224089 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000001) successfully I0128 17:17:13.224131 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000001) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 17:17:13.224164 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-14ccc19b-a0c2-4ab1-b525-2cbddebd6000 attached to node k8s-agentpool-24544908-vmss000001. 
I0128 17:17:13.224181 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-14ccc19b-a0c2-4ab1-b525-2cbddebd6000 to node k8s-agentpool-24544908-vmss000001 successfully I0128 17:17:13.224222 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=11.086207911 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-14ccc19b-a0c2-4ab1-b525-2cbddebd6000" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 17:17:13.224225 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-175702ea-cd0e-40e9-9b68-155883d5f734 lun 1 to node k8s-agentpool-24544908-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-175702ea-cd0e-40e9-9b68-155883d5f734:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-175702ea-cd0e-40e9-9b68-155883d5f734 false 1}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-931ee5df-dacc-4233-a05c-c0a6a66a12cd:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-931ee5df-dacc-4233-a05c-c0a6a66a12cd false 2})] I0128 17:17:13.224237 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0128 17:17:13.224278 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-175702ea-cd0e-40e9-9b68-155883d5f734:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-175702ea-cd0e-40e9-9b68-155883d5f734 false 1}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-931ee5df-dacc-4233-a05c-c0a6a66a12cd:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-931ee5df-dacc-4233-a05c-c0a6a66a12cd false 2})]) I0128 17:17:13.450091 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-175702ea-cd0e-40e9-9b68-155883d5f734:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-175702ea-cd0e-40e9-9b68-155883d5f734 false 1}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-931ee5df-dacc-4233-a05c-c0a6a66a12cd:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-931ee5df-dacc-4233-a05c-c0a6a66a12cd false 2})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 17:17:23.577961 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000001) successfully I0128 17:17:23.578012 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000001) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 17:17:23.578060 1 controllerserver.go:413] Attach operation successful: volume 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-175702ea-cd0e-40e9-9b68-155883d5f734 attached to node k8s-agentpool-24544908-vmss000001. I0128 17:17:23.578087 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-175702ea-cd0e-40e9-9b68-155883d5f734 to node k8s-agentpool-24544908-vmss000001 successfully I0128 17:17:23.578152 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=21.43394674 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-175702ea-cd0e-40e9-9b68-155883d5f734" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 17:17:23.578181 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-931ee5df-dacc-4233-a05c-c0a6a66a12cd lun 2 to node k8s-agentpool-24544908-vmss000001, diskMap: map[] ... skipping 116 lines ... I0128 17:19:13.515912 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-24544908-vmss000001","volume_capability":{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-84794774-448f-4063-ad19-cb29bd9b0ba1","csi.storage.k8s.io/pvc/name":"pvc-azuredisk-volume-tester-th4b4-0","csi.storage.k8s.io/pvc/namespace":"azuredisk-1387","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-84794774-448f-4063-ad19-cb29bd9b0ba1"} I0128 17:19:13.540376 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1248 I0128 17:19:13.540693 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-84794774-448f-4063-ad19-cb29bd9b0ba1. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-84794774-448f-4063-ad19-cb29bd9b0ba1 to node k8s-agentpool-24544908-vmss000001 (vmState Succeeded). 
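The entries just above also show attach batching: while the update for pvc-14ccc19b is in flight, the requests for pvc-175702ea (LUN 1) and pvc-931ee5df (LUN 2) are queued and then carried by a single "attach disk list" update; the later "Trying to attach ... lun 2 ..., diskMap: map[]" entry is empty precisely because that disk was already folded into the previous batch. A rough sketch of that coalescing idea (hypothetical code, not the driver's implementation):

```go
// Sketch of per-node attach batching: concurrent publish calls for the same VM
// are coalesced into one update, mirroring the multi-disk diskMap in the log.
package main

import (
	"fmt"
	"sync"
)

type attachBatcher struct {
	mu      sync.Mutex
	pending map[string]int32 // diskURI -> requested LUN
}

func newAttachBatcher() *attachBatcher {
	return &attachBatcher{pending: map[string]int32{}}
}

// queue records a disk to attach; returns true if it opened a new batch.
func (b *attachBatcher) queue(diskURI string, lun int32) bool {
	b.mu.Lock()
	defer b.mu.Unlock()
	first := len(b.pending) == 0
	b.pending[diskURI] = lun
	return first
}

// flush issues one VM update for everything queued so far and clears the batch.
func (b *attachBatcher) flush(update func(diskMap map[string]int32) error) error {
	b.mu.Lock()
	batch := b.pending
	b.pending = map[string]int32{}
	b.mu.Unlock()
	if len(batch) == 0 {
		return nil // nothing left: an earlier flush already carried this disk
	}
	return update(batch)
}

func main() {
	b := newAttachBatcher()
	b.queue("disks/pvc-a", 1)
	b.queue("disks/pvc-b", 2)
	_ = b.flush(func(m map[string]int32) error {
		fmt.Println("attach disk list:", m) // one update carries both disks
		return nil
	})
	_ = b.flush(func(m map[string]int32) error { return nil }) // diskMap: map[] — nothing to do
}
```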
I0128 17:19:13.540727 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-84794774-448f-4063-ad19-cb29bd9b0ba1 to node k8s-agentpool-24544908-vmss000001 I0128 17:19:13.540766 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-84794774-448f-4063-ad19-cb29bd9b0ba1 lun 0 to node k8s-agentpool-24544908-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-84794774-448f-4063-ad19-cb29bd9b0ba1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-84794774-448f-4063-ad19-cb29bd9b0ba1 false 0})] I0128 17:19:13.540810 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-84794774-448f-4063-ad19-cb29bd9b0ba1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-84794774-448f-4063-ad19-cb29bd9b0ba1 false 0})]) I0128 17:19:13.782887 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-84794774-448f-4063-ad19-cb29bd9b0ba1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-84794774-448f-4063-ad19-cb29bd9b0ba1 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 17:19:23.956251 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000001) successfully I0128 17:19:23.956287 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000001) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 17:19:23.956308 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-84794774-448f-4063-ad19-cb29bd9b0ba1 attached to node k8s-agentpool-24544908-vmss000001. I0128 17:19:23.956323 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-84794774-448f-4063-ad19-cb29bd9b0ba1 to node k8s-agentpool-24544908-vmss000001 successfully I0128 17:19:23.956366 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.415676596 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-84794774-448f-4063-ad19-cb29bd9b0ba1" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 17:19:23.956416 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 29 lines ... 
I0128 17:22:06.163866 1 azure_vmss_cache.go:327] refresh the cache of NonVmssUniformNodesCache in rg map[kubetest-g59foizt:{}] I0128 17:22:06.209788 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 12 I0128 17:22:06.209887 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-84794774-448f-4063-ad19-cb29bd9b0ba1. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-84794774-448f-4063-ad19-cb29bd9b0ba1 to node k8s-agentpool-24544908-vmss000001 (vmState Succeeded). I0128 17:22:06.209912 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-84794774-448f-4063-ad19-cb29bd9b0ba1 to node k8s-agentpool-24544908-vmss000001 I0128 17:22:06.209944 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-84794774-448f-4063-ad19-cb29bd9b0ba1 lun 0 to node k8s-agentpool-24544908-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-84794774-448f-4063-ad19-cb29bd9b0ba1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-84794774-448f-4063-ad19-cb29bd9b0ba1 false 0})] I0128 17:22:06.209976 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-84794774-448f-4063-ad19-cb29bd9b0ba1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-84794774-448f-4063-ad19-cb29bd9b0ba1 false 0})]) I0128 17:22:06.360779 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-84794774-448f-4063-ad19-cb29bd9b0ba1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-84794774-448f-4063-ad19-cb29bd9b0ba1 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 17:22:16.531268 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000001) successfully I0128 17:22:16.531319 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000001) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 17:22:16.531399 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-84794774-448f-4063-ad19-cb29bd9b0ba1 attached to node k8s-agentpool-24544908-vmss000001. 
I0128 17:22:16.531471 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-84794774-448f-4063-ad19-cb29bd9b0ba1 to node k8s-agentpool-24544908-vmss000001 successfully I0128 17:22:16.531543 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.367635385 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-84794774-448f-4063-ad19-cb29bd9b0ba1" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 17:22:16.531565 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 19 lines ... I0128 17:22:42.723794 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-24544908-vmss000001","volume_capability":{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-606f4a34-6095-4054-99d8-a270575712e9","csi.storage.k8s.io/pvc/name":"pvc-xq2zj","csi.storage.k8s.io/pvc/namespace":"azuredisk-4801","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com","tags":"disk=test"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-606f4a34-6095-4054-99d8-a270575712e9"} I0128 17:22:42.745045 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1214 I0128 17:22:42.745371 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-606f4a34-6095-4054-99d8-a270575712e9. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-606f4a34-6095-4054-99d8-a270575712e9 to node k8s-agentpool-24544908-vmss000001 (vmState Succeeded). 
I0128 17:22:42.745397 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-606f4a34-6095-4054-99d8-a270575712e9 to node k8s-agentpool-24544908-vmss000001 I0128 17:22:42.745448 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-606f4a34-6095-4054-99d8-a270575712e9 lun 1 to node k8s-agentpool-24544908-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-606f4a34-6095-4054-99d8-a270575712e9:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-606f4a34-6095-4054-99d8-a270575712e9 false 1})] I0128 17:22:42.745510 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-606f4a34-6095-4054-99d8-a270575712e9:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-606f4a34-6095-4054-99d8-a270575712e9 false 1})]) I0128 17:22:42.883045 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-606f4a34-6095-4054-99d8-a270575712e9:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-606f4a34-6095-4054-99d8-a270575712e9 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 17:22:53.002495 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000001) successfully I0128 17:22:53.002540 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000001) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 17:22:53.002561 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-606f4a34-6095-4054-99d8-a270575712e9 attached to node k8s-agentpool-24544908-vmss000001. I0128 17:22:53.002576 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-606f4a34-6095-4054-99d8-a270575712e9 to node k8s-agentpool-24544908-vmss000001 successfully I0128 17:22:53.002617 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.257242796 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-606f4a34-6095-4054-99d8-a270575712e9" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 17:22:53.002633 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} ... skipping 24 lines ... 
I0128 17:23:26.690639 1 azure_controller_common.go:398] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-84794774-448f-4063-ad19-cb29bd9b0ba1 from node k8s-agentpool-24544908-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-84794774-448f-4063-ad19-cb29bd9b0ba1:pvc-84794774-448f-4063-ad19-cb29bd9b0ba1] E0128 17:23:26.690675 1 azure_controller_vmss.go:202] detach azure disk on node(k8s-agentpool-24544908-vmss000001): disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-84794774-448f-4063-ad19-cb29bd9b0ba1:pvc-84794774-448f-4063-ad19-cb29bd9b0ba1]) not found I0128 17:23:26.690688 1 azure_controller_vmss.go:239] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-84794774-448f-4063-ad19-cb29bd9b0ba1:pvc-84794774-448f-4063-ad19-cb29bd9b0ba1]) I0128 17:23:27.210958 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0128 17:23:27.210995 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-84794774-448f-4063-ad19-cb29bd9b0ba1"} I0128 17:23:27.211126 1 controllerserver.go:317] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-84794774-448f-4063-ad19-cb29bd9b0ba1) I0128 17:23:27.211152 1 controllerserver.go:319] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-84794774-448f-4063-ad19-cb29bd9b0ba1) returned with failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-84794774-448f-4063-ad19-cb29bd9b0ba1) since it's in attaching or detaching state I0128 17:23:27.211227 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=5.0201e-05 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-84794774-448f-4063-ad19-cb29bd9b0ba1" result_code="failed_csi_driver_controller_delete_volume" E0128 17:23:27.211251 1 utils.go:82] GRPC error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-84794774-448f-4063-ad19-cb29bd9b0ba1) since it's in attaching or detaching state I0128 17:23:31.980636 1 azure_controller_vmss.go:252] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-84794774-448f-4063-ad19-cb29bd9b0ba1:pvc-84794774-448f-4063-ad19-cb29bd9b0ba1]) returned with <nil> I0128 17:23:31.980690 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000001) successfully I0128 17:23:31.980706 1 
azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000001) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 17:23:31.980718 1 azure_controller_common.go:422] azureDisk - detach disk(pvc-84794774-448f-4063-ad19-cb29bd9b0ba1, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-84794774-448f-4063-ad19-cb29bd9b0ba1) succeeded I0128 17:23:31.980730 1 controllerserver.go:480] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-84794774-448f-4063-ad19-cb29bd9b0ba1 from node k8s-agentpool-24544908-vmss000001 successfully I0128 17:23:31.980766 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=5.290258018 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-84794774-448f-4063-ad19-cb29bd9b0ba1" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" ... skipping 41 lines ... I0128 17:24:17.910513 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-24544908-vmss000001","volume_capability":{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-336b390e-449a-4bcc-8f97-ddc91d5df925","csi.storage.k8s.io/pvc/name":"pvc-7hhqc","csi.storage.k8s.io/pvc/namespace":"azuredisk-8154","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-336b390e-449a-4bcc-8f97-ddc91d5df925"} I0128 17:24:17.934432 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0128 17:24:17.934772 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-336b390e-449a-4bcc-8f97-ddc91d5df925. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-336b390e-449a-4bcc-8f97-ddc91d5df925 to node k8s-agentpool-24544908-vmss000001 (vmState Succeeded). 
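The detach/delete interleaving above is expected: a DeleteVolume call arrives while the disk is still detaching, the controller fails fast with "since it's in attaching or detaching state", and once the detach completes (~5 s later) a retried delete can succeed; retrying is left to the CSI sidecar. A minimal caller-side retry sketch under that assumption, standard library only (the error sentinel and helper are hypothetical):

```go
// Sketch of retrying DeleteVolume while a disk is still attaching/detaching.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errDiskBusy = errors.New("disk is in attaching or detaching state")

// deleteWithRetry retries deleteDisk with a doubling delay until it succeeds,
// hits a non-retryable error, or runs out of attempts.
func deleteWithRetry(deleteDisk func() error, attempts int, initial time.Duration) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = deleteDisk(); err == nil || !errors.Is(err, errDiskBusy) {
			return err
		}
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	err := deleteWithRetry(func() error {
		calls++
		if calls < 3 {
			return errDiskBusy // still detaching, as in the log above
		}
		return nil // detach finished; delete now succeeds
	}, 5, 10*time.Millisecond)
	fmt.Println("deleted after", calls, "attempts, err =", err)
}
```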
I0128 17:24:17.934811 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-336b390e-449a-4bcc-8f97-ddc91d5df925 to node k8s-agentpool-24544908-vmss000001 I0128 17:24:17.934901 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-336b390e-449a-4bcc-8f97-ddc91d5df925 lun 0 to node k8s-agentpool-24544908-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-336b390e-449a-4bcc-8f97-ddc91d5df925:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-336b390e-449a-4bcc-8f97-ddc91d5df925 false 0})] I0128 17:24:17.934991 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-336b390e-449a-4bcc-8f97-ddc91d5df925:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-336b390e-449a-4bcc-8f97-ddc91d5df925 false 0})]) I0128 17:24:18.152069 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-336b390e-449a-4bcc-8f97-ddc91d5df925:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-336b390e-449a-4bcc-8f97-ddc91d5df925 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 17:24:28.234239 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000001) successfully I0128 17:24:28.234271 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000001) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 17:24:28.234290 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-336b390e-449a-4bcc-8f97-ddc91d5df925 attached to node k8s-agentpool-24544908-vmss000001. I0128 17:24:28.234310 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-336b390e-449a-4bcc-8f97-ddc91d5df925 to node k8s-agentpool-24544908-vmss000001 successfully I0128 17:24:28.234351 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.299582847 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-336b390e-449a-4bcc-8f97-ddc91d5df925" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 17:24:28.234371 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 31 lines ... 
I0128 17:25:55.070047 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-24544908-vmss000001","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-d0876eb2-fa0f-430b-8cea-3ae6d2e5ad73","csi.storage.k8s.io/pvc/name":"pvc-azuredisk-volume-tester-m67j4-0","csi.storage.k8s.io/pvc/namespace":"azuredisk-1166","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-d0876eb2-fa0f-430b-8cea-3ae6d2e5ad73"} I0128 17:25:55.094457 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1247 I0128 17:25:55.095120 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-d0876eb2-fa0f-430b-8cea-3ae6d2e5ad73. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-d0876eb2-fa0f-430b-8cea-3ae6d2e5ad73 to node k8s-agentpool-24544908-vmss000001 (vmState Succeeded). I0128 17:25:55.095157 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-d0876eb2-fa0f-430b-8cea-3ae6d2e5ad73 to node k8s-agentpool-24544908-vmss000001 I0128 17:25:55.095302 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-d0876eb2-fa0f-430b-8cea-3ae6d2e5ad73 lun 0 to node k8s-agentpool-24544908-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-d0876eb2-fa0f-430b-8cea-3ae6d2e5ad73:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-d0876eb2-fa0f-430b-8cea-3ae6d2e5ad73 false 0})] I0128 17:25:55.095412 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-d0876eb2-fa0f-430b-8cea-3ae6d2e5ad73:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-d0876eb2-fa0f-430b-8cea-3ae6d2e5ad73 false 0})]) I0128 17:25:55.254524 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-d0876eb2-fa0f-430b-8cea-3ae6d2e5ad73:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-d0876eb2-fa0f-430b-8cea-3ae6d2e5ad73 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 17:26:05.346628 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000001) successfully I0128 17:26:05.346678 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000001) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 17:26:05.346715 1 controllerserver.go:413] Attach operation successful: volume 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-d0876eb2-fa0f-430b-8cea-3ae6d2e5ad73 attached to node k8s-agentpool-24544908-vmss000001. I0128 17:26:05.346740 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-d0876eb2-fa0f-430b-8cea-3ae6d2e5ad73 to node k8s-agentpool-24544908-vmss000001 successfully I0128 17:26:05.346807 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.251673767 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-d0876eb2-fa0f-430b-8cea-3ae6d2e5ad73" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 17:26:05.346842 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 12 lines ... I0128 17:27:25.057596 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1192 I0128 17:27:25.105235 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 24989 I0128 17:27:25.107607 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-ef95360a-92a8-4ddd-a386-59a48a498a1b. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-ef95360a-92a8-4ddd-a386-59a48a498a1b to node k8s-agentpool-24544908-vmss000001 (vmState Succeeded). I0128 17:27:25.107645 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-ef95360a-92a8-4ddd-a386-59a48a498a1b to node k8s-agentpool-24544908-vmss000001 I0128 17:27:25.107712 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-ef95360a-92a8-4ddd-a386-59a48a498a1b lun 1 to node k8s-agentpool-24544908-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-ef95360a-92a8-4ddd-a386-59a48a498a1b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ef95360a-92a8-4ddd-a386-59a48a498a1b false 1})] I0128 17:27:25.107779 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-ef95360a-92a8-4ddd-a386-59a48a498a1b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ef95360a-92a8-4ddd-a386-59a48a498a1b false 1})]) I0128 17:27:25.274750 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-ef95360a-92a8-4ddd-a386-59a48a498a1b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ef95360a-92a8-4ddd-a386-59a48a498a1b false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 17:27:35.383610 1 azure_vmss_cache.go:275] 
DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000001) successfully I0128 17:27:35.383723 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000001) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 17:27:35.383784 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-ef95360a-92a8-4ddd-a386-59a48a498a1b attached to node k8s-agentpool-24544908-vmss000001. I0128 17:27:35.383823 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-ef95360a-92a8-4ddd-a386-59a48a498a1b to node k8s-agentpool-24544908-vmss000001 successfully I0128 17:27:35.383913 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.325958673 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-ef95360a-92a8-4ddd-a386-59a48a498a1b" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 17:27:35.383974 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} ... skipping 61 lines ... I0128 17:29:23.955414 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-24544908-vmss000001","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":5}},"volume_context":{"cachingmode":"None","csi.storage.k8s.io/pv/name":"pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03","csi.storage.k8s.io/pvc/name":"pvc-wdghl","csi.storage.k8s.io/pvc/namespace":"azuredisk-7920","maxshares":"2","requestedsizegib":"10","skuname":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03"} I0128 17:29:24.020959 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1214 I0128 17:29:24.023733 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03 to node k8s-agentpool-24544908-vmss000001 (vmState Succeeded). 
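The publish cycles above all follow the same pattern: GetDiskLun reports that the disk has no LUN on the node yet, the controller batches the disk into a diskMap, issues one VMSS VM update to attach it, refreshes the VMSS cache for that node, and finally returns the assigned LUN in publish_context. A minimal Go sketch of that idempotent attach pattern; cloud, GetDiskLun and AttachDisk here are illustrative stand-ins, not the driver's real API.

package main

import (
	"errors"
	"fmt"
)

var errLunNotFound = errors.New("cannot find LUN")

// cloud stands in for the two calls visible in the log: a LUN lookup and an attach.
type cloud interface {
	GetDiskLun(diskURI, node string) (int32, error)
	AttachDisk(diskURI, node string) (int32, error)
}

// publishVolume reuses an existing LUN when the disk is already attached and
// otherwise attaches it, returning the publish context ({"LUN":"0"} in the log).
func publishVolume(c cloud, diskURI, node string) (map[string]string, error) {
	lun, err := c.GetDiskLun(diskURI, node)
	if errors.Is(err, errLunNotFound) {
		lun, err = c.AttachDisk(diskURI, node) // blocks until the VMSS update settles
	}
	if err != nil {
		return nil, err
	}
	return map[string]string{"LUN": fmt.Sprintf("%d", lun)}, nil
}

type fakeCloud struct{ attached map[string]int32 }

func (f *fakeCloud) GetDiskLun(diskURI, node string) (int32, error) {
	if lun, ok := f.attached[node+diskURI]; ok {
		return lun, nil
	}
	return -1, errLunNotFound
}

func (f *fakeCloud) AttachDisk(diskURI, node string) (int32, error) {
	f.attached[node+diskURI] = 0
	return 0, nil
}

func main() {
	c := &fakeCloud{attached: map[string]int32{}}
	ctx, _ := publishVolume(c, "disk-uri", "node-1")
	fmt.Println(ctx) // map[LUN:0]
}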
I0128 17:29:24.023774 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03 to node k8s-agentpool-24544908-vmss000001 I0128 17:29:24.023815 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03 lun 0 to node k8s-agentpool-24544908-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03:%!s(*provider.AttachDiskOptions=&{None pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03 false 0})] I0128 17:29:24.023862 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03:%!s(*provider.AttachDiskOptions=&{None pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03 false 0})]) I0128 17:29:24.208832 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03:%!s(*provider.AttachDiskOptions=&{None pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 17:29:25.255978 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0128 17:29:25.256001 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-24544908-vmss000000","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":5}},"volume_context":{"cachingmode":"None","csi.storage.k8s.io/pv/name":"pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03","csi.storage.k8s.io/pvc/name":"pvc-wdghl","csi.storage.k8s.io/pvc/namespace":"azuredisk-7920","maxshares":"2","requestedsizegib":"10","skuname":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03"} I0128 17:29:25.281606 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1692 I0128 17:29:25.282000 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03 to node k8s-agentpool-24544908-vmss000000 (vmState Succeeded). 
I0128 17:29:25.282044 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03 to node k8s-agentpool-24544908-vmss000000 I0128 17:29:25.282077 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03 lun 0 to node k8s-agentpool-24544908-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03:%!s(*provider.AttachDiskOptions=&{None pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03 false 0})] I0128 17:29:25.282112 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03:%!s(*provider.AttachDiskOptions=&{None pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03 false 0})]) I0128 17:29:25.468619 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03:%!s(*provider.AttachDiskOptions=&{None pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 17:29:44.340300 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000001) successfully I0128 17:29:44.340364 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000001) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 17:29:44.340410 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03 attached to node k8s-agentpool-24544908-vmss000001. I0128 17:29:44.340438 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03 to node k8s-agentpool-24544908-vmss000001 successfully I0128 17:29:44.340520 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=20.316779496 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 17:29:44.340552 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 79 lines ... 
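In this stretch the same StandardSSD_ZRS disk (block access mode, maxshares "2") is published to vmss000001 and then to vmss000000, which is the shared-disk multi-attach case. A hedged sketch of the kind of guard a controller needs before allowing a second attach; the maxShares parameter and the list of currently attached nodes are illustrative assumptions, not the driver's actual bookkeeping.

package main

import "fmt"

// canAttachToAnotherNode reports whether a disk already attached to attachedNodes
// may also be attached to newNode, given its maxShares setting.
func canAttachToAnotherNode(attachedNodes []string, newNode string, maxShares int) bool {
	for _, n := range attachedNodes {
		if n == newNode {
			return true // already attached there; the publish is idempotent
		}
	}
	return len(attachedNodes)+1 <= maxShares
}

func main() {
	nodes := []string{"k8s-agentpool-24544908-vmss000001"}
	fmt.Println(canAttachToAnotherNode(nodes, "k8s-agentpool-24544908-vmss000000", 2)) // true
	fmt.Println(canAttachToAnotherNode(nodes, "k8s-agentpool-24544908-vmss000000", 1)) // false
}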
I0128 17:31:25.063881 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-24544908-vmss000001","volume_capability":{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-7a9c4c05-e757-480d-9d37-71cd362aa9a5","csi.storage.k8s.io/pvc/name":"pvc-q4r9q","csi.storage.k8s.io/pvc/namespace":"azuredisk-1092","device-setting/device/queue_depth":"17","device-setting/queue/max_sectors_kb":"211","device-setting/queue/nr_requests":"44","device-setting/queue/read_ahead_kb":"256","device-setting/queue/rotational":"0","device-setting/queue/scheduler":"none","device-setting/queue/wbt_lat_usec":"0","perfProfile":"advanced","requestedsizegib":"10","skuname":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-7a9c4c05-e757-480d-9d37-71cd362aa9a5"} I0128 17:31:25.087894 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1222 I0128 17:31:25.088281 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-7a9c4c05-e757-480d-9d37-71cd362aa9a5. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-7a9c4c05-e757-480d-9d37-71cd362aa9a5 to node k8s-agentpool-24544908-vmss000001 (vmState Succeeded). I0128 17:31:25.088315 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-7a9c4c05-e757-480d-9d37-71cd362aa9a5 to node k8s-agentpool-24544908-vmss000001 I0128 17:31:25.088349 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-7a9c4c05-e757-480d-9d37-71cd362aa9a5 lun 0 to node k8s-agentpool-24544908-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-7a9c4c05-e757-480d-9d37-71cd362aa9a5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-7a9c4c05-e757-480d-9d37-71cd362aa9a5 false 0})] I0128 17:31:25.088415 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-7a9c4c05-e757-480d-9d37-71cd362aa9a5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-7a9c4c05-e757-480d-9d37-71cd362aa9a5 false 0})]) I0128 17:31:25.250630 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-7a9c4c05-e757-480d-9d37-71cd362aa9a5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-7a9c4c05-e757-480d-9d37-71cd362aa9a5 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 17:31:35.372698 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000001) successfully I0128 17:31:35.372735 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000001) for 
cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 17:31:35.372759 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-7a9c4c05-e757-480d-9d37-71cd362aa9a5 attached to node k8s-agentpool-24544908-vmss000001. I0128 17:31:35.372774 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-7a9c4c05-e757-480d-9d37-71cd362aa9a5 to node k8s-agentpool-24544908-vmss000001 successfully I0128 17:31:35.372816 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.284534663 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-7a9c4c05-e757-480d-9d37-71cd362aa9a5" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 17:31:35.372838 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 31 lines ... I0128 17:32:32.494936 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-24544908-vmss000001","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-bc26abe6-d9b2-47c5-af1b-b89db58348f2","csi.storage.k8s.io/pvc/name":"pvc-azuredisk","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-bc26abe6-d9b2-47c5-af1b-b89db58348f2"} I0128 17:32:32.517213 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1219 I0128 17:32:32.517596 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-bc26abe6-d9b2-47c5-af1b-b89db58348f2. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-bc26abe6-d9b2-47c5-af1b-b89db58348f2 to node k8s-agentpool-24544908-vmss000001 (vmState Succeeded). 
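The request with perfProfile "advanced" above carries device-setting/* keys (device/queue_depth, queue/max_sectors_kb, queue/nr_requests, queue/read_ahead_kb, queue/rotational, queue/scheduler, queue/wbt_lat_usec). On the node side such settings map onto the block device's sysfs attributes once the LUN is resolved to a device. A rough sketch of that idea, assuming a hypothetical applyDeviceSettings helper; this is not the driver's actual perf-tuning code.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// applyDeviceSettings writes values such as "device-setting/queue/scheduler" = "none"
// under /sys/block/<dev>/, mirroring the keys seen in the volume_context above.
func applyDeviceSettings(devName string, settings map[string]string) error {
	for key, value := range settings {
		rel := strings.TrimPrefix(key, "device-setting/") // e.g. "queue/scheduler"
		path := filepath.Join("/sys/block", devName, rel)
		if err := os.WriteFile(path, []byte(value), 0o644); err != nil {
			return fmt.Errorf("writing %s=%s: %w", path, value, err)
		}
	}
	return nil
}

func main() {
	// Example only; on a real node this needs root and an attached device (e.g. "sdc").
	err := applyDeviceSettings("sdc", map[string]string{
		"device-setting/queue/scheduler":   "none",
		"device-setting/queue/nr_requests": "44",
	})
	fmt.Println(err)
}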
I0128 17:32:32.517637 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-bc26abe6-d9b2-47c5-af1b-b89db58348f2 to node k8s-agentpool-24544908-vmss000001 I0128 17:32:32.517677 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-bc26abe6-d9b2-47c5-af1b-b89db58348f2 lun 0 to node k8s-agentpool-24544908-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-bc26abe6-d9b2-47c5-af1b-b89db58348f2:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-bc26abe6-d9b2-47c5-af1b-b89db58348f2 false 0})] I0128 17:32:32.517727 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-bc26abe6-d9b2-47c5-af1b-b89db58348f2:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-bc26abe6-d9b2-47c5-af1b-b89db58348f2 false 0})]) I0128 17:32:32.646979 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-bc26abe6-d9b2-47c5-af1b-b89db58348f2:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-bc26abe6-d9b2-47c5-af1b-b89db58348f2 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 17:32:47.771769 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000001) successfully I0128 17:32:47.771806 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000001) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 17:32:47.771825 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-bc26abe6-d9b2-47c5-af1b-b89db58348f2 attached to node k8s-agentpool-24544908-vmss000001. I0128 17:32:47.771840 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-bc26abe6-d9b2-47c5-af1b-b89db58348f2 to node k8s-agentpool-24544908-vmss000001 successfully I0128 17:32:47.771880 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=15.254294591 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-bc26abe6-d9b2-47c5-af1b-b89db58348f2" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 17:32:47.771897 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 18 lines ... 
I0128 17:32:57.303752 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-24544908-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-8ffaae7a-0e71-4394-af47-210baaa88db5","csi.storage.k8s.io/pvc/name":"persistent-storage-statefulset-azuredisk-0","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-8ffaae7a-0e71-4394-af47-210baaa88db5"} I0128 17:32:57.328432 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1248 I0128 17:32:57.328985 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-8ffaae7a-0e71-4394-af47-210baaa88db5. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-8ffaae7a-0e71-4394-af47-210baaa88db5 to node k8s-agentpool-24544908-vmss000000 (vmState Succeeded). I0128 17:32:57.329030 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-8ffaae7a-0e71-4394-af47-210baaa88db5 to node k8s-agentpool-24544908-vmss000000 I0128 17:32:57.329075 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-8ffaae7a-0e71-4394-af47-210baaa88db5 lun 0 to node k8s-agentpool-24544908-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-8ffaae7a-0e71-4394-af47-210baaa88db5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8ffaae7a-0e71-4394-af47-210baaa88db5 false 0})] I0128 17:32:57.329185 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-8ffaae7a-0e71-4394-af47-210baaa88db5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8ffaae7a-0e71-4394-af47-210baaa88db5 false 0})]) I0128 17:32:57.509185 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-8ffaae7a-0e71-4394-af47-210baaa88db5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8ffaae7a-0e71-4394-af47-210baaa88db5 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 17:33:07.608519 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000000) successfully I0128 17:33:07.608566 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000000) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 17:33:07.608592 1 controllerserver.go:413] Attach operation successful: volume 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-8ffaae7a-0e71-4394-af47-210baaa88db5 attached to node k8s-agentpool-24544908-vmss000000. I0128 17:33:07.608611 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-8ffaae7a-0e71-4394-af47-210baaa88db5 to node k8s-agentpool-24544908-vmss000000 successfully I0128 17:33:07.608661 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.279675671 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-8ffaae7a-0e71-4394-af47-210baaa88db5" node="k8s-agentpool-24544908-vmss000000" result_code="succeeded" I0128 17:33:07.608695 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 18 lines ... I0128 17:33:22.990831 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-24544908-vmss000001","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-6113bece-5724-4663-b5b7-48acfa848805","csi.storage.k8s.io/pvc/name":"persistent-storage-statefulset-azuredisk-nonroot-0","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-6113bece-5724-4663-b5b7-48acfa848805"} I0128 17:33:23.014635 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1256 I0128 17:33:23.014924 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-6113bece-5724-4663-b5b7-48acfa848805. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-6113bece-5724-4663-b5b7-48acfa848805 to node k8s-agentpool-24544908-vmss000001 (vmState Succeeded). 
I0128 17:33:23.014953 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-6113bece-5724-4663-b5b7-48acfa848805 to node k8s-agentpool-24544908-vmss000001 I0128 17:33:23.014987 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-6113bece-5724-4663-b5b7-48acfa848805 lun 1 to node k8s-agentpool-24544908-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-6113bece-5724-4663-b5b7-48acfa848805:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-6113bece-5724-4663-b5b7-48acfa848805 false 1})] I0128 17:33:23.015028 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-6113bece-5724-4663-b5b7-48acfa848805:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-6113bece-5724-4663-b5b7-48acfa848805 false 1})]) I0128 17:33:23.161152 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-g59foizt): vm(k8s-agentpool-24544908-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-g59foizt/providers/microsoft.compute/disks/pvc-6113bece-5724-4663-b5b7-48acfa848805:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-6113bece-5724-4663-b5b7-48acfa848805 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0128 17:33:38.281560 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-g59foizt, k8s-agentpool-24544908-vmss, k8s-agentpool-24544908-vmss000001) successfully I0128 17:33:38.281589 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-24544908-vmss, kubetest-g59foizt, k8s-agentpool-24544908-vmss000001) for cacheKey(kubetest-g59foizt/k8s-agentpool-24544908-vmss) updated successfully I0128 17:33:38.281610 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-6113bece-5724-4663-b5b7-48acfa848805 attached to node k8s-agentpool-24544908-vmss000001. I0128 17:33:38.281622 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-6113bece-5724-4663-b5b7-48acfa848805 to node k8s-agentpool-24544908-vmss000001 successfully I0128 17:33:38.281665 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=15.266747814 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-g59foizt" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-6113bece-5724-4663-b5b7-48acfa848805" node="k8s-agentpool-24544908-vmss000001" result_code="succeeded" I0128 17:33:38.281683 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} ... skipping 21 lines ... 
Platform: linux/amd64 Topology Key: topology.disk.csi.azure.com/zone Streaming logs below: I0128 16:14:05.892309 1 azuredisk.go:175] driver userAgent: disk.csi.azure.com/v1.27.0-8635ef7cb96ec669bd2a099af3b1437a19530391 e2e-test I0128 16:14:05.894619 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider I0128 16:14:05.948891 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0128 16:14:05.948919 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider I0128 16:14:05.948931 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0128 16:14:05.948970 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully I0128 16:14:05.950167 1 azure_auth.go:253] Using AzurePublicCloud environment I0128 16:14:05.950231 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token I0128 16:14:05.950394 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 ... skipping 68 lines ... Platform: linux/amd64 Topology Key: topology.disk.csi.azure.com/zone Streaming logs below: I0128 16:14:08.481769 1 azuredisk.go:175] driver userAgent: disk.csi.azure.com/v1.27.0-8635ef7cb96ec669bd2a099af3b1437a19530391 e2e-test I0128 16:14:08.482354 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider I0128 16:14:08.515139 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0128 16:14:08.515161 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider I0128 16:14:08.515168 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0128 16:14:08.515189 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully I0128 16:14:08.515936 1 azure_auth.go:253] Using AzurePublicCloud environment I0128 16:14:08.515982 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token I0128 16:14:08.516007 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 ... skipping 202 lines ... 
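Both plugin instances above start the same way: they try to read cloud configuration from the kube-system/azure-cloud-provider secret, fall back to the credential file named by AZURE_CREDENTIAL_FILE (default /etc/kubernetes/azure.json) when the secret is absent, and then authenticate with client_id plus client_secret. A compact sketch of that fallback order, with readSecret as an illustrative stand-in.

package main

import (
	"errors"
	"fmt"
	"os"
)

// loadCloudConfig mirrors the fallback visible in the log: secret first, then the
// credential file named by AZURE_CREDENTIAL_FILE (default /etc/kubernetes/azure.json).
func loadCloudConfig(readSecret func() ([]byte, error)) ([]byte, error) {
	if data, err := readSecret(); err == nil {
		return data, nil
	}
	path := os.Getenv("AZURE_CREDENTIAL_FILE")
	if path == "" {
		path = "/etc/kubernetes/azure.json"
	}
	return os.ReadFile(path)
}

func main() {
	secretMissing := func() ([]byte, error) { return nil, errors.New(`secrets "azure-cloud-provider" not found`) }
	_, err := loadCloudConfig(secretMissing)
	fmt.Println(err) // on a dev machine: the azure.json read fails instead
}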
I0128 17:30:35.974603 1 utils.go:84] GRPC response: {} I0128 17:30:36.009035 1 utils.go:77] GRPC call: /csi.v1.Node/NodeUnstageVolume I0128 17:30:36.009055 1 utils.go:78] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03"} I0128 17:30:36.009128 1 nodeserver.go:201] NodeUnstageVolume: unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03 I0128 17:30:36.009148 1 mount_helper_common.go:93] unmounting "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03" (corruptedMount: false, mounterCanSkipMountPointChecks: true) I0128 17:30:36.009160 1 mount_linux.go:362] Unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03 I0128 17:30:36.010702 1 mount_linux.go:375] ignoring 'not mounted' error for /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03 I0128 17:30:36.010713 1 mount_helper_common.go:150] Warning: deleting path "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03" I0128 17:30:36.010783 1 nodeserver.go:206] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03 successfully I0128 17:30:36.010798 1 utils.go:84] GRPC response: {} I0128 17:33:13.152102 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 17:33:13.152137 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-8ffaae7a-0e71-4394-af47-210baaa88db5/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-8ffaae7a-0e71-4394-af47-210baaa88db5","csi.storage.k8s.io/pvc/name":"persistent-storage-statefulset-azuredisk-0","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-8ffaae7a-0e71-4394-af47-210baaa88db5"} I0128 17:33:14.761990 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ ... skipping 33 lines ... 
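The NodeUnstageVolume entries above unmount the staging path, treat "not mounted" as a no-op, and then delete the staging directory, which keeps kubelet retries idempotent. A minimal sketch of that behaviour shelling out to umount; the real driver goes through k8s.io/mount-utils, so this is only an approximation of the same cleanup.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// cleanupStagingPath unmounts target if it is mounted, ignores "not mounted"
// output, and finally removes the directory, as the log entries above show.
func cleanupStagingPath(target string) error {
	out, err := exec.Command("umount", target).CombinedOutput()
	if err != nil && !strings.Contains(string(out), "not mounted") {
		return fmt.Errorf("unmount %s: %v: %s", target, err, out)
	}
	return os.Remove(target)
}

func main() {
	dir, _ := os.MkdirTemp("", "staging")
	// Nothing is mounted here, so umount complains, the error is ignored, and the
	// directory is removed; requires umount in PATH.
	fmt.Println(cleanupStagingPath(dir))
}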
Platform: linux/amd64 Topology Key: topology.disk.csi.azure.com/zone Streaming logs below: I0128 16:14:10.804054 1 azuredisk.go:175] driver userAgent: disk.csi.azure.com/v1.27.0-8635ef7cb96ec669bd2a099af3b1437a19530391 e2e-test I0128 16:14:10.805622 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider I0128 16:14:10.834696 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0128 16:14:10.834732 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider I0128 16:14:10.834770 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0128 16:14:10.834807 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully I0128 16:14:10.835787 1 azure_auth.go:253] Using AzurePublicCloud environment I0128 16:14:10.835848 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token I0128 16:14:10.835961 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 ... skipping 147 lines ... I0128 16:18:49.421693 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0128 16:18:49.443108 1 mount_linux.go:570] Output: "" I0128 16:18:49.443156 1 mount_linux.go:529] Disk "/dev/disk/azure/scsi1/lun0" appears to be unformatted, attempting to format as type: "ext4" with options: [-F -m0 /dev/disk/azure/scsi1/lun0] I0128 16:18:49.892009 1 mount_linux.go:539] Disk successfully formatted (mkfs): ext4 - /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount I0128 16:18:49.892052 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount I0128 16:18:49.892124 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount) E0128 16:18:49.909945 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. 
E0128 16:18:49.910158 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. I0128 16:18:50.481158 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 16:18:50.481187 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-798ba9b7-0290-4714-99fa-51a1ed445c25","csi.storage.k8s.io/pvc/name":"pvc-snzzx","csi.storage.k8s.io/pvc/namespace":"azuredisk-5466","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25"} I0128 16:18:52.319246 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 16:18:52.319287 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. 
perfProfile none accountType StandardSSD_ZRS I0128 16:18:52.319612 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount with mount options([invalid mount options]) I0128 16:18:52.319627 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0128 16:18:52.330822 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=ext4\n" I0128 16:18:52.330859 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0128 16:18:52.347055 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount I0128 16:18:52.347117 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount) E0128 16:18:52.368958 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. E0128 16:18:52.369027 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. 
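The repeated "Mount failed: exit status 32" entries in this stretch all come from a NodeStageVolume request whose volume capability carries mount_flags ["invalid","mount","options"]; the driver appends "defaults" and hands mount the literal string "-o invalid,mount,options,defaults", which mount(8) rejects, so every kubelet retry fails the same way. This reads like a deliberately negative e2e case for bad mount options rather than an infrastructure problem. A small sketch of how those flags end up on the mount command line; buildMountArgs is an illustrative helper, not the driver's code.

package main

import (
	"fmt"
	"strings"
)

// buildMountArgs shows how per-PV mount_flags plus the implicit "defaults"
// become the -o string seen in the log ("-o invalid,mount,options,defaults").
func buildMountArgs(fsType, source, target string, mountFlags []string) []string {
	options := append(append([]string{}, mountFlags...), "defaults")
	return []string{"-t", fsType, "-o", strings.Join(options, ","), source, target}
}

func main() {
	args := buildMountArgs("ext4", "/dev/disk/azure/scsi1/lun0",
		"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/<pv>/globalmount",
		[]string{"invalid", "mount", "options"})
	fmt.Println("mount " + strings.Join(args, " "))
}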
I0128 16:18:53.412993 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 16:18:53.413020 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-798ba9b7-0290-4714-99fa-51a1ed445c25","csi.storage.k8s.io/pvc/name":"pvc-snzzx","csi.storage.k8s.io/pvc/namespace":"azuredisk-5466","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25"} I0128 16:18:55.196696 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 16:18:55.196747 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType StandardSSD_ZRS I0128 16:18:55.197433 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount with mount options([invalid mount options]) I0128 16:18:55.197461 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0128 16:18:55.205946 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=ext4\n" I0128 16:18:55.205992 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0128 16:18:55.219449 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount I0128 16:18:55.219559 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount) E0128 16:18:55.238125 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. 
E0128 16:18:55.238395 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. I0128 16:18:57.276862 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 16:18:57.276895 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-798ba9b7-0290-4714-99fa-51a1ed445c25","csi.storage.k8s.io/pvc/name":"pvc-snzzx","csi.storage.k8s.io/pvc/namespace":"azuredisk-5466","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25"} I0128 16:18:59.051745 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 16:18:59.051786 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. 
perfProfile none accountType StandardSSD_ZRS I0128 16:18:59.052100 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount with mount options([invalid mount options]) I0128 16:18:59.052117 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0128 16:18:59.067921 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=ext4\n" I0128 16:18:59.067981 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0128 16:18:59.083907 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount I0128 16:18:59.083969 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount) E0128 16:18:59.102139 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. E0128 16:18:59.102621 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. 
I0128 16:19:03.157276 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 16:19:03.157305 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-798ba9b7-0290-4714-99fa-51a1ed445c25","csi.storage.k8s.io/pvc/name":"pvc-snzzx","csi.storage.k8s.io/pvc/namespace":"azuredisk-5466","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25"} I0128 16:19:04.993843 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 16:19:04.993889 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType StandardSSD_ZRS I0128 16:19:04.994252 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount with mount options([invalid mount options]) I0128 16:19:04.994338 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0128 16:19:05.004377 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=ext4\n" I0128 16:19:05.004405 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0128 16:19:05.018955 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount I0128 16:19:05.019099 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount) E0128 16:19:05.037235 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. 
E0128 16:19:05.037565 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-798ba9b7-0290-4714-99fa-51a1ed445c25/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. I0128 16:20:01.950403 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 16:20:01.950437 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-3cf1d132-9451-47e9-981c-b828e2db623e","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-3cf1d132-9451-47e9-981c-b828e2db623e","csi.storage.k8s.io/pvc/name":"pvc-t7cf5","csi.storage.k8s.io/pvc/namespace":"azuredisk-2790","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-3cf1d132-9451-47e9-981c-b828e2db623e"} I0128 16:20:03.733503 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 16:20:03.733553 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType StandardSSD_ZRS I0128 16:20:03.733573 1 utils.go:84] GRPC response: {} I0128 16:20:03.743589 1 utils.go:77] GRPC call: /csi.v1.Node/NodePublishVolume ... skipping 16 lines ... 
I0128 16:20:09.722241 1 utils.go:84] GRPC response: {} I0128 16:20:09.746643 1 utils.go:77] GRPC call: /csi.v1.Node/NodeUnstageVolume I0128 16:20:09.746691 1 utils.go:78] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-3cf1d132-9451-47e9-981c-b828e2db623e","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-3cf1d132-9451-47e9-981c-b828e2db623e"} I0128 16:20:09.746854 1 nodeserver.go:201] NodeUnstageVolume: unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-3cf1d132-9451-47e9-981c-b828e2db623e I0128 16:20:09.746901 1 mount_helper_common.go:93] unmounting "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-3cf1d132-9451-47e9-981c-b828e2db623e" (corruptedMount: false, mounterCanSkipMountPointChecks: true) I0128 16:20:09.746915 1 mount_linux.go:362] Unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-3cf1d132-9451-47e9-981c-b828e2db623e I0128 16:20:09.748758 1 mount_linux.go:375] ignoring 'not mounted' error for /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-3cf1d132-9451-47e9-981c-b828e2db623e I0128 16:20:09.748775 1 mount_helper_common.go:150] Warning: deleting path "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-3cf1d132-9451-47e9-981c-b828e2db623e" I0128 16:20:09.748864 1 nodeserver.go:206] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-3cf1d132-9451-47e9-981c-b828e2db623e successfully I0128 16:20:09.748879 1 utils.go:84] GRPC response: {} I0128 16:21:11.888514 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 16:21:11.888543 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ac1b7094-b293-4e6c-a5ea-e9dc28fe53a1/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-ac1b7094-b293-4e6c-a5ea-e9dc28fe53a1","csi.storage.k8s.io/pvc/name":"pvc-mk9zw","csi.storage.k8s.io/pvc/namespace":"azuredisk-5356","requestedsizegib":"10","resourceGroup":"azuredisk-csi-driver-test-bb324370-9f27-11ed-9172-ae7499b6df38","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-bb324370-9f27-11ed-9172-ae7499b6df38/providers/Microsoft.Compute/disks/pvc-ac1b7094-b293-4e6c-a5ea-e9dc28fe53a1"} I0128 16:21:13.732567 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ ... skipping 304 lines ... 
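For the raw-block PVC above (AccessType Block, staging path under .../volumeDevices/staging/...) NodeStageVolume returns an empty response without formatting or mounting anything; the filesystem work only happens for Mount-type capabilities. A tiny sketch of that branch, with stageFilesystemVolume deliberately left abstract; the type names are illustrative, not the CSI-generated ones.

package main

import "fmt"

// AccessType mirrors the two shapes seen in the GRPC requests above.
type AccessType int

const (
	Block AccessType = iota
	Mount
)

// nodeStageVolume skips format/mount entirely for raw block volumes and only
// prepares a filesystem for Mount-type capabilities, as the log shows.
func nodeStageVolume(at AccessType, device, stagingPath string) error {
	if at == Block {
		return nil // nothing to do at stage time; publish later binds the device
	}
	return stageFilesystemVolume(device, stagingPath)
}

func stageFilesystemVolume(device, stagingPath string) error {
	return fmt.Errorf("not implemented in this sketch (%s -> %s)", device, stagingPath)
}

func main() {
	fmt.Println(nodeStageVolume(Block, "/dev/disk/azure/scsi1/lun0",
		"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/<pvc>"))
}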
I0128 16:35:30.012084 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0])
I0128 16:35:30.027010 1 mount_linux.go:570] Output: ""
I0128 16:35:30.027047 1 mount_linux.go:529] Disk "/dev/disk/azure/scsi1/lun0" appears to be unformatted, attempting to format as type: "xfs" with options: [-f /dev/disk/azure/scsi1/lun0]
I0128 16:35:30.909024 1 mount_linux.go:539] Disk successfully formatted (mkfs): xfs - /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount
I0128 16:35:30.909057 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in xfs format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount
I0128 16:35:30.909089 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount)
E0128 16:35:31.034412 1 mount_linux.go:232] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error.
dmesg(1) may have more information after failed mount system call.
E0128 16:35:31.034605 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount, failed with mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error.
dmesg(1) may have more information after failed mount system call.
I0128 16:35:31.552054 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 16:35:31.552097 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7","csi.storage.k8s.io/pvc/name":"pvc-b6lbd","csi.storage.k8s.io/pvc/namespace":"azuredisk-59","fsType":"xfs","requestedsizegib":"10","skuName":"Standard_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7"} I0128 16:35:33.420754 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 16:35:33.420805 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType Standard_LRS I0128 16:35:33.422946 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount with mount options([nouuid]) I0128 16:35:33.423014 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0128 16:35:33.434001 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=xfs\n" I0128 16:35:33.434036 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0128 16:35:33.447132 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in xfs format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount I0128 16:35:33.447192 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount) E0128 16:35:33.460190 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. E0128 16:35:33.460255 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. 
dmesg(1) may have more information after failed mount system call. I0128 16:35:34.482014 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 16:35:34.482046 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7","csi.storage.k8s.io/pvc/name":"pvc-b6lbd","csi.storage.k8s.io/pvc/namespace":"azuredisk-59","fsType":"xfs","requestedsizegib":"10","skuName":"Standard_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7"} I0128 16:35:36.292238 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 16:35:36.292290 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType Standard_LRS I0128 16:35:36.292948 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount with mount options([nouuid]) I0128 16:35:36.292975 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0128 16:35:36.321564 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=xfs\n" I0128 16:35:36.321599 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0128 16:35:36.337200 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in xfs format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount I0128 16:35:36.337284 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount) E0128 16:35:36.358245 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. 
E0128 16:35:36.358301 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. I0128 16:35:38.451659 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 16:35:38.451686 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7","csi.storage.k8s.io/pvc/name":"pvc-b6lbd","csi.storage.k8s.io/pvc/namespace":"azuredisk-59","fsType":"xfs","requestedsizegib":"10","skuName":"Standard_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7"} I0128 16:35:40.216611 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 16:35:40.216667 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType Standard_LRS I0128 16:35:40.217111 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount with mount options([nouuid]) I0128 16:35:40.217127 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0128 16:35:40.227107 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=xfs\n" I0128 16:35:40.227143 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0128 16:35:40.241844 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in xfs format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount I0128 16:35:40.241892 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount) E0128 16:35:40.257316 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. 
dmesg(1) may have more information after failed mount system call. E0128 16:35:40.257394 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. I0128 16:35:44.306343 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 16:35:44.306371 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7","csi.storage.k8s.io/pvc/name":"pvc-b6lbd","csi.storage.k8s.io/pvc/namespace":"azuredisk-59","fsType":"xfs","requestedsizegib":"10","skuName":"Standard_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7"} I0128 16:35:46.097687 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 16:35:46.097766 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. 
perfProfile none accountType Standard_LRS I0128 16:35:46.098854 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount with mount options([nouuid]) I0128 16:35:46.098891 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0128 16:35:46.111964 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=xfs\n" I0128 16:35:46.112030 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0128 16:35:46.129817 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in xfs format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount I0128 16:35:46.129868 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount) E0128 16:35:46.187107 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. E0128 16:35:46.187216 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. 
I0128 16:35:54.229471 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 16:35:54.229505 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7","csi.storage.k8s.io/pvc/name":"pvc-b6lbd","csi.storage.k8s.io/pvc/namespace":"azuredisk-59","fsType":"xfs","requestedsizegib":"10","skuName":"Standard_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7"} I0128 16:35:56.048000 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 16:35:56.048058 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType Standard_LRS I0128 16:35:56.048982 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount with mount options([nouuid]) I0128 16:35:56.049004 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0128 16:35:56.068808 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=xfs\n" I0128 16:35:56.068844 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0128 16:35:56.089989 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in xfs format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount I0128 16:35:56.090251 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount) E0128 16:35:56.109015 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. E0128 16:35:56.109082 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. 
dmesg(1) may have more information after failed mount system call. I0128 16:36:12.142237 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 16:36:12.142269 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7","csi.storage.k8s.io/pvc/name":"pvc-b6lbd","csi.storage.k8s.io/pvc/namespace":"azuredisk-59","fsType":"xfs","requestedsizegib":"10","skuName":"Standard_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7"} I0128 16:36:14.014047 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 16:36:14.014085 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType Standard_LRS I0128 16:36:14.014403 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount with mount options([nouuid]) I0128 16:36:14.014419 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0128 16:36:14.023429 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=xfs\n" I0128 16:36:14.023463 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0128 16:36:14.035621 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in xfs format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount I0128 16:36:14.035661 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount) E0128 16:36:14.050111 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. 
E0128 16:36:14.050197 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. I0128 16:36:46.160779 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 16:36:46.160807 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7","csi.storage.k8s.io/pvc/name":"pvc-b6lbd","csi.storage.k8s.io/pvc/namespace":"azuredisk-59","fsType":"xfs","requestedsizegib":"10","skuName":"Standard_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7"} I0128 16:36:47.959006 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 16:36:47.959062 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType Standard_LRS I0128 16:36:47.959519 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount with mount options([nouuid]) I0128 16:36:47.959556 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0128 16:36:47.966895 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=xfs\n" I0128 16:36:47.966928 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0128 16:36:47.974698 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in xfs format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount I0128 16:36:47.974743 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount) E0128 16:36:47.988150 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. 
dmesg(1) may have more information after failed mount system call. E0128 16:36:47.988212 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. I0128 16:37:52.075260 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 16:37:52.075286 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7","csi.storage.k8s.io/pvc/name":"pvc-b6lbd","csi.storage.k8s.io/pvc/namespace":"azuredisk-59","fsType":"xfs","requestedsizegib":"10","skuName":"Standard_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7"} I0128 16:37:53.853672 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 16:37:53.853714 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. 
perfProfile none accountType Standard_LRS I0128 16:37:53.854080 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount with mount options([nouuid]) I0128 16:37:53.854096 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0128 16:37:53.864776 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=xfs\n" I0128 16:37:53.864819 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0128 16:37:53.875027 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in xfs format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount I0128 16:37:53.875065 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount) E0128 16:37:53.885980 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. E0128 16:37:53.886081 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. 
I0128 16:39:55.935320 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 16:39:55.935350 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7","csi.storage.k8s.io/pvc/name":"pvc-b6lbd","csi.storage.k8s.io/pvc/namespace":"azuredisk-59","fsType":"xfs","requestedsizegib":"10","skuName":"Standard_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7"} I0128 16:39:57.777853 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 16:39:57.777912 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType Standard_LRS I0128 16:39:57.778392 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount with mount options([nouuid]) I0128 16:39:57.778441 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0128 16:39:57.789920 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=xfs\n" I0128 16:39:57.789952 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0128 16:39:57.800098 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in xfs format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount I0128 16:39:57.800142 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount) E0128 16:39:57.815112 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. E0128 16:39:57.815169 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. 
dmesg(1) may have more information after failed mount system call. I0128 16:41:59.912465 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 16:41:59.912928 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7","csi.storage.k8s.io/pvc/name":"pvc-b6lbd","csi.storage.k8s.io/pvc/namespace":"azuredisk-59","fsType":"xfs","requestedsizegib":"10","skuName":"Standard_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7"} I0128 16:42:01.693007 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 16:42:01.693060 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType Standard_LRS I0128 16:42:01.693494 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount with mount options([nouuid]) I0128 16:42:01.693524 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0128 16:42:01.709155 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=xfs\n" I0128 16:42:01.709205 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0128 16:42:01.721946 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in xfs format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount I0128 16:42:01.721999 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount) E0128 16:42:01.736707 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. 
E0128 16:42:01.736766 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. I0128 16:44:03.814516 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 16:44:03.814561 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7","csi.storage.k8s.io/pvc/name":"pvc-b6lbd","csi.storage.k8s.io/pvc/namespace":"azuredisk-59","fsType":"xfs","requestedsizegib":"10","skuName":"Standard_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7"} I0128 16:44:05.644581 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 16:44:05.644634 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType Standard_LRS I0128 16:44:05.645059 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount with mount options([nouuid]) I0128 16:44:05.645095 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0128 16:44:05.659191 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=xfs\n" I0128 16:44:05.659241 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0128 16:44:05.670385 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in xfs format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount I0128 16:44:05.670428 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount) E0128 16:44:05.683168 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. 
dmesg(1) may have more information after failed mount system call. E0128 16:44:05.683232 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. I0128 16:46:07.762841 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 16:46:07.762878 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7","csi.storage.k8s.io/pvc/name":"pvc-b6lbd","csi.storage.k8s.io/pvc/namespace":"azuredisk-59","fsType":"xfs","requestedsizegib":"10","skuName":"Standard_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7"} I0128 16:46:09.568861 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 16:46:09.568933 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. 
perfProfile none accountType Standard_LRS I0128 16:46:09.569353 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount with mount options([nouuid]) I0128 16:46:09.569370 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0128 16:46:09.590831 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=xfs\n" I0128 16:46:09.590881 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0128 16:46:09.601048 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in xfs format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount I0128 16:46:09.601100 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount) E0128 16:46:09.614607 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. E0128 16:46:09.614664 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. 
I0128 16:48:11.680796 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 16:48:11.680823 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7","csi.storage.k8s.io/pvc/name":"pvc-b6lbd","csi.storage.k8s.io/pvc/namespace":"azuredisk-59","fsType":"xfs","requestedsizegib":"10","skuName":"Standard_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7"} I0128 16:48:13.475666 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 16:48:13.475725 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType Standard_LRS I0128 16:48:13.476223 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount with mount options([nouuid]) I0128 16:48:13.476256 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0128 16:48:13.485871 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=xfs\n" I0128 16:48:13.485904 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0128 16:48:13.493895 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in xfs format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount I0128 16:48:13.493941 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount) E0128 16:48:13.506664 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. E0128 16:48:13.506721 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-020fd008-cb37-4fa9-a2e3-fd0175d4f1e7/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. 
dmesg(1) may have more information after failed mount system call. I0128 16:51:10.251181 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 16:51:10.251209 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-01ba0080-221a-4049-871f-6c10509a024d","csi.storage.k8s.io/pvc/name":"pvc-qw4d2","csi.storage.k8s.io/pvc/namespace":"azuredisk-2546","fsType":"xfs","networkAccessPolicy":"DenyAll","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-01ba0080-221a-4049-871f-6c10509a024d"} I0128 16:51:12.018928 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 16:51:12.018979 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType StandardSSD_ZRS I0128 16:51:12.019388 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount with mount options([nouuid]) I0128 16:51:12.019416 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0128 16:51:12.034465 1 mount_linux.go:570] Output: "" I0128 16:51:12.034505 1 mount_linux.go:529] Disk "/dev/disk/azure/scsi1/lun0" appears to be unformatted, attempting to format as type: "xfs" with options: [-f /dev/disk/azure/scsi1/lun0] I0128 16:51:12.577727 1 mount_linux.go:539] Disk successfully formatted (mkfs): xfs - /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount I0128 16:51:12.577765 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in xfs format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount I0128 16:51:12.577789 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount) E0128 16:51:12.597743 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. 
E0128 16:51:12.597822 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. I0128 16:51:13.122595 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 16:51:13.122625 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-01ba0080-221a-4049-871f-6c10509a024d","csi.storage.k8s.io/pvc/name":"pvc-qw4d2","csi.storage.k8s.io/pvc/namespace":"azuredisk-2546","fsType":"xfs","networkAccessPolicy":"DenyAll","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-01ba0080-221a-4049-871f-6c10509a024d"} I0128 16:51:14.946029 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 16:51:14.946096 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. 
perfProfile none accountType StandardSSD_ZRS I0128 16:51:14.946583 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount with mount options([nouuid]) I0128 16:51:14.946614 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0128 16:51:14.953765 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=xfs\n" I0128 16:51:14.953790 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0128 16:51:14.961415 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in xfs format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount I0128 16:51:14.961471 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount) E0128 16:51:14.979107 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. E0128 16:51:14.979157 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. 
I0128 16:51:16.060362 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 16:51:16.060397 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-01ba0080-221a-4049-871f-6c10509a024d","csi.storage.k8s.io/pvc/name":"pvc-qw4d2","csi.storage.k8s.io/pvc/namespace":"azuredisk-2546","fsType":"xfs","networkAccessPolicy":"DenyAll","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-01ba0080-221a-4049-871f-6c10509a024d"} I0128 16:51:17.835912 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 16:51:17.835961 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType StandardSSD_ZRS I0128 16:51:17.838538 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount with mount options([nouuid]) I0128 16:51:17.838566 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0128 16:51:17.852364 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=xfs\n" I0128 16:51:17.852402 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0128 16:51:17.863059 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in xfs format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount I0128 16:51:17.863104 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount) E0128 16:51:17.873507 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. 
E0128 16:51:17.873561 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. I0128 16:51:19.926875 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 16:51:19.926902 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-01ba0080-221a-4049-871f-6c10509a024d","csi.storage.k8s.io/pvc/name":"pvc-qw4d2","csi.storage.k8s.io/pvc/namespace":"azuredisk-2546","fsType":"xfs","networkAccessPolicy":"DenyAll","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-01ba0080-221a-4049-871f-6c10509a024d"} I0128 16:51:21.741688 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 16:51:21.741742 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. 
perfProfile none accountType StandardSSD_ZRS I0128 16:51:21.742216 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount with mount options([nouuid]) I0128 16:51:21.742248 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0128 16:51:21.753076 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=xfs\n" I0128 16:51:21.753113 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0128 16:51:21.770204 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in xfs format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount I0128 16:51:21.770273 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount) E0128 16:51:21.783850 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. E0128 16:51:21.783916 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. 
I0128 16:51:25.840182 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 16:51:25.840213 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-01ba0080-221a-4049-871f-6c10509a024d","csi.storage.k8s.io/pvc/name":"pvc-qw4d2","csi.storage.k8s.io/pvc/namespace":"azuredisk-2546","fsType":"xfs","networkAccessPolicy":"DenyAll","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-01ba0080-221a-4049-871f-6c10509a024d"} I0128 16:51:27.630232 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 16:51:27.630284 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType StandardSSD_ZRS I0128 16:51:27.630795 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount with mount options([nouuid]) I0128 16:51:27.630826 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0128 16:51:27.640682 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=xfs\n" I0128 16:51:27.640717 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0128 16:51:27.647971 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in xfs format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount I0128 16:51:27.648018 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount) E0128 16:51:27.661138 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. 
E0128 16:51:27.661193 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. I0128 16:51:35.762524 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 16:51:35.762552 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-01ba0080-221a-4049-871f-6c10509a024d","csi.storage.k8s.io/pvc/name":"pvc-qw4d2","csi.storage.k8s.io/pvc/namespace":"azuredisk-2546","fsType":"xfs","networkAccessPolicy":"DenyAll","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-01ba0080-221a-4049-871f-6c10509a024d"} I0128 16:51:37.595079 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 16:51:37.595133 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. 
perfProfile none accountType StandardSSD_ZRS I0128 16:51:37.595473 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount with mount options([nouuid]) I0128 16:51:37.595489 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0128 16:51:37.603880 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=xfs\n" I0128 16:51:37.604006 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0128 16:51:37.613814 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in xfs format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount I0128 16:51:37.613856 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount) E0128 16:51:37.624222 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. E0128 16:51:37.624276 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. 
I0128 16:51:53.720114 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 16:51:53.720183 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-01ba0080-221a-4049-871f-6c10509a024d","csi.storage.k8s.io/pvc/name":"pvc-qw4d2","csi.storage.k8s.io/pvc/namespace":"azuredisk-2546","fsType":"xfs","networkAccessPolicy":"DenyAll","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-01ba0080-221a-4049-871f-6c10509a024d"} I0128 16:51:55.559409 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 16:51:55.559462 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType StandardSSD_ZRS I0128 16:51:55.559931 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount with mount options([nouuid]) I0128 16:51:55.559982 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0128 16:51:55.569374 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=xfs\n" I0128 16:51:55.569407 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0128 16:51:55.581228 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in xfs format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount I0128 16:51:55.581329 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount) E0128 16:51:55.594095 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. 
E0128 16:51:55.594150 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. I0128 16:52:27.716944 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 16:52:27.716974 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-01ba0080-221a-4049-871f-6c10509a024d","csi.storage.k8s.io/pvc/name":"pvc-qw4d2","csi.storage.k8s.io/pvc/namespace":"azuredisk-2546","fsType":"xfs","networkAccessPolicy":"DenyAll","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-01ba0080-221a-4049-871f-6c10509a024d"} I0128 16:52:29.557476 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 16:52:29.557536 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. 
perfProfile none accountType StandardSSD_ZRS I0128 16:52:29.557838 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount with mount options([nouuid]) I0128 16:52:29.557853 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0128 16:52:29.567662 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=xfs\n" I0128 16:52:29.567697 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0128 16:52:29.576769 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in xfs format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount I0128 16:52:29.576827 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount) E0128 16:52:29.587025 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. E0128 16:52:29.587077 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. 
I0128 16:53:33.698939 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 16:53:33.698967 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-01ba0080-221a-4049-871f-6c10509a024d","csi.storage.k8s.io/pvc/name":"pvc-qw4d2","csi.storage.k8s.io/pvc/namespace":"azuredisk-2546","fsType":"xfs","networkAccessPolicy":"DenyAll","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-01ba0080-221a-4049-871f-6c10509a024d"} I0128 16:53:35.489673 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 16:53:35.489714 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType StandardSSD_ZRS I0128 16:53:35.490145 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount with mount options([nouuid]) I0128 16:53:35.490184 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0128 16:53:35.496861 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=xfs\n" I0128 16:53:35.496883 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0128 16:53:35.505294 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in xfs format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount I0128 16:53:35.505329 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount) E0128 16:53:35.518367 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. 
E0128 16:53:35.518432 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. I0128 16:55:37.629980 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 16:55:37.630011 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-01ba0080-221a-4049-871f-6c10509a024d","csi.storage.k8s.io/pvc/name":"pvc-qw4d2","csi.storage.k8s.io/pvc/namespace":"azuredisk-2546","fsType":"xfs","networkAccessPolicy":"DenyAll","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-01ba0080-221a-4049-871f-6c10509a024d"} I0128 16:55:39.445693 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 16:55:39.445816 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. 
perfProfile none accountType StandardSSD_ZRS I0128 16:55:39.446138 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount with mount options([nouuid]) I0128 16:55:39.446179 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0128 16:55:39.456229 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=xfs\n" I0128 16:55:39.456603 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0128 16:55:39.467714 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in xfs format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount I0128 16:55:39.467760 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount) E0128 16:55:39.478459 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. E0128 16:55:39.478513 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. 
I0128 16:57:41.535334 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 16:57:41.535665 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-01ba0080-221a-4049-871f-6c10509a024d","csi.storage.k8s.io/pvc/name":"pvc-qw4d2","csi.storage.k8s.io/pvc/namespace":"azuredisk-2546","fsType":"xfs","networkAccessPolicy":"DenyAll","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-01ba0080-221a-4049-871f-6c10509a024d"} I0128 16:57:43.357506 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 16:57:43.357563 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType StandardSSD_ZRS I0128 16:57:43.357998 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount with mount options([nouuid]) I0128 16:57:43.358048 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0128 16:57:43.369485 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=xfs\n" I0128 16:57:43.369530 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0128 16:57:43.386463 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in xfs format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount I0128 16:57:43.386516 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount) E0128 16:57:43.398430 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. 
E0128 16:57:43.398486 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. I0128 16:59:45.444834 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 16:59:45.444863 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-01ba0080-221a-4049-871f-6c10509a024d","csi.storage.k8s.io/pvc/name":"pvc-qw4d2","csi.storage.k8s.io/pvc/namespace":"azuredisk-2546","fsType":"xfs","networkAccessPolicy":"DenyAll","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-01ba0080-221a-4049-871f-6c10509a024d"} I0128 16:59:47.256058 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 16:59:47.256118 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. 
perfProfile none accountType StandardSSD_ZRS I0128 16:59:47.257828 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount with mount options([nouuid]) I0128 16:59:47.257860 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0128 16:59:47.269643 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=xfs\n" I0128 16:59:47.269678 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0128 16:59:47.283208 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in xfs format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount I0128 16:59:47.283257 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount) E0128 16:59:47.297283 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. E0128 16:59:47.297337 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. 
I0128 17:01:49.403363 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 17:01:49.403401 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-01ba0080-221a-4049-871f-6c10509a024d","csi.storage.k8s.io/pvc/name":"pvc-qw4d2","csi.storage.k8s.io/pvc/namespace":"azuredisk-2546","fsType":"xfs","networkAccessPolicy":"DenyAll","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-01ba0080-221a-4049-871f-6c10509a024d"} I0128 17:01:51.211046 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 17:01:51.211094 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType StandardSSD_ZRS I0128 17:01:51.211491 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount with mount options([nouuid]) I0128 17:01:51.211517 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0128 17:01:51.225130 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=xfs\n" I0128 17:01:51.225179 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0128 17:01:51.234995 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in xfs format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount I0128 17:01:51.235030 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount) E0128 17:01:51.246842 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. 
E0128 17:01:51.246899 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. I0128 17:03:53.325360 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 17:03:53.325394 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-01ba0080-221a-4049-871f-6c10509a024d","csi.storage.k8s.io/pvc/name":"pvc-qw4d2","csi.storage.k8s.io/pvc/namespace":"azuredisk-2546","fsType":"xfs","networkAccessPolicy":"DenyAll","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-01ba0080-221a-4049-871f-6c10509a024d"} I0128 17:03:55.114755 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 17:03:55.114798 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. 
perfProfile none accountType StandardSSD_ZRS I0128 17:03:55.115092 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount with mount options([nouuid]) I0128 17:03:55.115107 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0128 17:03:55.125582 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=xfs\n" I0128 17:03:55.125617 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0128 17:03:55.134850 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in xfs format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount I0128 17:03:55.134895 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount) E0128 17:03:55.145267 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. E0128 17:03:55.145330 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. 
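The retry timestamps (16:51:14, :16, :19, :25, :35, :53, 16:52:27, 16:53:33, 16:55:37, 16:57:41, 16:59:45, 17:01:49, 17:03:55, ...) show the caller backing off roughly exponentially between NodeStageVolume attempts, levelling off at about two minutes. A small Go sketch of that doubling-with-a-cap retry loop; the initial delay, factor, and cap are assumptions chosen to match the observed cadence, not exact kubelet constants.

```go
// backoff.go: illustrate the doubling-with-a-cap retry cadence visible in the
// NodeStageVolume timestamps above (~2s, 3s, 6s, 10s, 18s, ..., capped near 2m).
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff retries op, doubling the delay after each failure up to maxDelay.
func retryWithBackoff(op func() error, initial, maxDelay time.Duration, attempts int) error {
	delay := initial
	var err error
	for i := 1; i <= attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed (%v), retrying in %s\n", i, err, delay)
		time.Sleep(delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	return err
}

func main() {
	stage := func() error { return errors.New("mount failed: exit status 32") }
	_ = retryWithBackoff(stage, 2*time.Second, 2*time.Minute, 5)
}
```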
I0128 17:05:57.216861 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 17:05:57.216890 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-01ba0080-221a-4049-871f-6c10509a024d","csi.storage.k8s.io/pvc/name":"pvc-qw4d2","csi.storage.k8s.io/pvc/namespace":"azuredisk-2546","fsType":"xfs","networkAccessPolicy":"DenyAll","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-01ba0080-221a-4049-871f-6c10509a024d"} I0128 17:05:59.162265 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 17:05:59.162310 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType StandardSSD_ZRS I0128 17:05:59.162730 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount with mount options([nouuid]) I0128 17:05:59.162764 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0128 17:05:59.172110 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=xfs\n" I0128 17:05:59.172145 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0128 17:05:59.181307 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in xfs format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount I0128 17:05:59.181355 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount) E0128 17:05:59.193523 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. 
E0128 17:05:59.193620 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t xfs -o nouuid,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-01ba0080-221a-4049-871f-6c10509a024d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. I0128 17:07:02.967350 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 17:07:02.967375 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-84011bdf-3e14-4639-a482-e5a3260cc1a2/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-84011bdf-3e14-4639-a482-e5a3260cc1a2","csi.storage.k8s.io/pvc/name":"pvc-fp7gm","csi.storage.k8s.io/pvc/namespace":"azuredisk-1598","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-84011bdf-3e14-4639-a482-e5a3260cc1a2"} I0128 17:07:03.757283 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0128 17:07:03.757320 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType StandardSSD_ZRS I0128 17:07:03.757742 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-84011bdf-3e14-4639-a482-e5a3260cc1a2/globalmount with mount options([]) I0128 17:07:03.757754 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) ... skipping 180 lines ... 
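The last request above (pvc-84011bdf) differs from the failing ones: its volume_capability is Mount{} with no fs_type and its volume_context has no fsType key, so the driver stages it with empty mount options. An illustrative Go sketch, not the driver's actual code, of the usual fs-type resolution order for NodeStageVolume: the capability's fs_type first, then the fsType volume attribute, then a default; the ext4 default here is an assumption used only for illustration.

```go
// fstype.go: illustrative resolution of the filesystem type for NodeStageVolume.
package main

import "fmt"

type stageRequest struct {
	capabilityFsType string            // volume_capability.mount.fs_type
	volumeContext    map[string]string // volume_context
}

// resolveFsType prefers the capability fs_type, then the "fsType" attribute,
// then falls back to defaultFs when both are empty.
func resolveFsType(req stageRequest, defaultFs string) string {
	if req.capabilityFsType != "" {
		return req.capabilityFsType
	}
	if fs := req.volumeContext["fsType"]; fs != "" {
		return fs
	}
	return defaultFs
}

func main() {
	withXfs := stageRequest{capabilityFsType: "xfs", volumeContext: map[string]string{"fsType": "xfs"}}
	withNone := stageRequest{volumeContext: map[string]string{"skuName": "StandardSSD_ZRS"}}
	fmt.Println(resolveFsType(withXfs, "ext4"))  // "xfs"  -> the failing requests above
	fmt.Println(resolveFsType(withNone, "ext4")) // "ext4" -> the later request with Mount{}
}
```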
I0128 17:09:06.761889 1 utils.go:84] GRPC response: {} I0128 17:09:06.849597 1 utils.go:77] GRPC call: /csi.v1.Node/NodeUnstageVolume I0128 17:09:06.849641 1 utils.go:78] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-dcbb48c8-ba2d-485a-b2b2-816f5549426c","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-dcbb48c8-ba2d-485a-b2b2-816f5549426c"} I0128 17:09:06.849706 1 nodeserver.go:201] NodeUnstageVolume: unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-dcbb48c8-ba2d-485a-b2b2-816f5549426c I0128 17:09:06.849735 1 mount_helper_common.go:93] unmounting "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-dcbb48c8-ba2d-485a-b2b2-816f5549426c" (corruptedMount: false, mounterCanSkipMountPointChecks: true) I0128 17:09:06.849749 1 mount_linux.go:362] Unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-dcbb48c8-ba2d-485a-b2b2-816f5549426c I0128 17:09:06.851192 1 mount_linux.go:375] ignoring 'not mounted' error for /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-dcbb48c8-ba2d-485a-b2b2-816f5549426c I0128 17:09:06.851213 1 mount_helper_common.go:150] Warning: deleting path "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-dcbb48c8-ba2d-485a-b2b2-816f5549426c" I0128 17:09:06.851344 1 nodeserver.go:206] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-dcbb48c8-ba2d-485a-b2b2-816f5549426c successfully I0128 17:09:06.851364 1 utils.go:84] GRPC response: {} I0128 17:10:47.031596 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 17:10:47.031622 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-1c2fca7a-1c70-4610-b4f7-ca5d4b136b69/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-1c2fca7a-1c70-4610-b4f7-ca5d4b136b69","csi.storage.k8s.io/pvc/name":"pvc-8qjcp","csi.storage.k8s.io/pvc/namespace":"azuredisk-8582","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-1c2fca7a-1c70-4610-b4f7-ca5d4b136b69"} I0128 17:10:48.801890 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ ... skipping 594 lines ... 
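The NodeUnstageVolume calls above all follow the same cleanup pattern: unmount the staging path, treat a "not mounted" result as benign, then delete the directory and return an empty response. A Go sketch of that pattern follows; it illustrates the logged behavior rather than the driver's code, and the umount invocation and the "not mounted" string match are assumptions.

```go
// unstage.go: unmount-then-remove cleanup for a staging path, tolerating
// "not mounted", mirroring the NodeUnstageVolume log lines above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func cleanupStagingPath(path string) error {
	out, err := exec.Command("umount", path).CombinedOutput()
	if err != nil && !strings.Contains(string(out), "not mounted") {
		return fmt.Errorf("unmount %s failed: %v, output: %s", path, err, out)
	}
	// Mirrors the log's "Warning: deleting path ...": remove the now-unused dir.
	return os.Remove(path)
}

func main() {
	staging := "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-dcbb48c8-ba2d-485a-b2b2-816f5549426c"
	if err := cleanupStagingPath(staging); err != nil {
		fmt.Println("cleanup error:", err)
	}
}
```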
I0128 17:30:35.361400 1 utils.go:84] GRPC response: {} I0128 17:30:35.390175 1 utils.go:77] GRPC call: /csi.v1.Node/NodeUnstageVolume I0128 17:30:35.390201 1 utils.go:78] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03"} I0128 17:30:35.390282 1 nodeserver.go:201] NodeUnstageVolume: unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03 I0128 17:30:35.390313 1 mount_helper_common.go:93] unmounting "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03" (corruptedMount: false, mounterCanSkipMountPointChecks: true) I0128 17:30:35.390330 1 mount_linux.go:362] Unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03 I0128 17:30:35.392417 1 mount_linux.go:375] ignoring 'not mounted' error for /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03 I0128 17:30:35.392436 1 mount_helper_common.go:150] Warning: deleting path "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03" I0128 17:30:35.392547 1 nodeserver.go:206] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-a7ab9d16-7192-4419-a772-f4218c5e2b03 successfully I0128 17:30:35.392567 1 utils.go:84] GRPC response: {} I0128 17:31:40.773188 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0128 17:31:40.773215 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7a9c4c05-e757-480d-9d37-71cd362aa9a5/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-7a9c4c05-e757-480d-9d37-71cd362aa9a5","csi.storage.k8s.io/pvc/name":"pvc-q4r9q","csi.storage.k8s.io/pvc/namespace":"azuredisk-1092","device-setting/device/queue_depth":"17","device-setting/queue/max_sectors_kb":"211","device-setting/queue/nr_requests":"44","device-setting/queue/read_ahead_kb":"256","device-setting/queue/rotational":"0","device-setting/queue/scheduler":"none","device-setting/queue/wbt_lat_usec":"0","perfProfile":"advanced","requestedsizegib":"10","skuname":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674922453038-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-g59foizt/providers/Microsoft.Compute/disks/pvc-7a9c4c05-e757-480d-9d37-71cd362aa9a5"} I0128 17:31:42.617847 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ ... skipping 169 lines ... # HELP go_gc_heap_objects_objects Number of objects, live or unswept, occupying heap memory. # TYPE go_gc_heap_objects_objects gauge go_gc_heap_objects_objects 16889 # HELP go_gc_heap_tiny_allocs_objects_total Count of small allocations that are packed together into blocks. These allocations are counted separately from other allocations because each individual allocation is not tracked by the runtime, only their block. Each block is already accounted for in allocs-by-size and frees-by-size. 
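The final NodeStageVolume request above uses perfProfile "advanced" and carries device-setting/... attributes (queue_depth, max_sectors_kb, nr_requests, read_ahead_kb, rotational, scheduler, wbt_lat_usec). A hedged Go sketch of how such attributes can be applied by writing each value to the corresponding file under /sys/block/<dev>/; the key-to-sysfs mapping is an assumption based on the key names in the log, not a statement about the driver's exact implementation.

```go
// perfsettings.go: apply "device-setting/..." attributes as sysfs writes under
// /sys/block/<dev>/, using the key suffix as the relative path (assumed mapping).
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func applyDeviceSettings(dev string, attrs map[string]string) error {
	const prefix = "device-setting/"
	for key, value := range attrs {
		if !strings.HasPrefix(key, prefix) {
			continue
		}
		sysfsPath := filepath.Join("/sys/block", dev, strings.TrimPrefix(key, prefix))
		if err := os.WriteFile(sysfsPath, []byte(value), 0644); err != nil {
			return fmt.Errorf("write %s=%s: %w", sysfsPath, value, err)
		}
		fmt.Printf("applied %s = %s\n", sysfsPath, value)
	}
	return nil
}

func main() {
	// A subset of the values from the GRPC request above.
	attrs := map[string]string{
		"device-setting/device/queue_depth":   "17",
		"device-setting/queue/max_sectors_kb": "211",
		"device-setting/queue/read_ahead_kb":  "256",
		"device-setting/queue/scheduler":      "none",
	}
	if err := applyDeviceSettings("sdc", attrs); err != nil {
		fmt.Println(err)
	}
}
```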
# TYPE go_gc_heap_tiny_allocs_objects_total counter go_gc_heap_tiny_allocs_objects_total 4096 # HELP go_gc_limiter_last_enabled_gc_cycle GC cycle the last time the GC CPU limiter was enabled. This metric is useful for diagnosing the root cause of an out-of-memory error, because the limiter trades memory for CPU time when the GC's CPU time gets too high. This is most likely to occur with use of SetMemoryLimit. The first GC cycle is cycle 1, so a value of 0 indicates that it was never enabled. # TYPE go_gc_limiter_last_enabled_gc_cycle gauge go_gc_limiter_last_enabled_gc_cycle 0 # HELP go_gc_pauses_seconds Distribution individual GC-related stop-the-world pause latencies. # TYPE go_gc_pauses_seconds histogram go_gc_pauses_seconds_bucket{le="9.999999999999999e-10"} 0 go_gc_pauses_seconds_bucket{le="9.999999999999999e-09"} 0 ... skipping 751 lines ... cloudprovider_azure_op_duration_seconds_bucket{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="kubetest-g59foizt",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e",le="300"} 44 cloudprovider_azure_op_duration_seconds_bucket{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="kubetest-g59foizt",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e",le="600"} 44 cloudprovider_azure_op_duration_seconds_bucket{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="kubetest-g59foizt",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e",le="1200"} 44 cloudprovider_azure_op_duration_seconds_bucket{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="kubetest-g59foizt",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e",le="+Inf"} 44 cloudprovider_azure_op_duration_seconds_sum{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="kubetest-g59foizt",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e"} 718.2197872319999 cloudprovider_azure_op_duration_seconds_count{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="kubetest-g59foizt",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e"} 44 # HELP cloudprovider_azure_op_failure_count [ALPHA] Number of failed Azure service operations # TYPE cloudprovider_azure_op_failure_count counter cloudprovider_azure_op_failure_count{request="azuredisk_csi_driver_controller_delete_volume",resource_group="kubetest-g59foizt",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e"} 2 # HELP disabled_metric_total [ALPHA] The count of disabled metrics. # TYPE disabled_metric_total counter disabled_metric_total 0 # HELP go_cgo_go_to_c_calls_calls_total Count of calls made from Go to C by the current process. ... skipping 67 lines ... # HELP go_gc_heap_objects_objects Number of objects, live or unswept, occupying heap memory. # TYPE go_gc_heap_objects_objects gauge go_gc_heap_objects_objects 34718 # HELP go_gc_heap_tiny_allocs_objects_total Count of small allocations that are packed together into blocks. These allocations are counted separately from other allocations because each individual allocation is not tracked by the runtime, only their block. Each block is already accounted for in allocs-by-size and frees-by-size. 
# TYPE go_gc_heap_tiny_allocs_objects_total counter
go_gc_heap_tiny_allocs_objects_total 45862
# HELP go_gc_limiter_last_enabled_gc_cycle GC cycle the last time the GC CPU limiter was enabled. This metric is useful for diagnosing the root cause of an out-of-memory error, because the limiter trades memory for CPU time when the GC's CPU time gets too high. This is most likely to occur with use of SetMemoryLimit. The first GC cycle is cycle 1, so a value of 0 indicates that it was never enabled.
# TYPE go_gc_limiter_last_enabled_gc_cycle gauge
go_gc_limiter_last_enabled_gc_cycle 0
# HELP go_gc_pauses_seconds Distribution individual GC-related stop-the-world pause latencies.
# TYPE go_gc_pauses_seconds histogram
go_gc_pauses_seconds_bucket{le="9.999999999999999e-10"} 0
go_gc_pauses_seconds_bucket{le="9.999999999999999e-09"} 0
... skipping 272 lines ...
[AfterSuite] /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:165
------------------------------
Summarizing 3 Failures:
[FAIL] Dynamic Provisioning [multi-az] [It] should clone a volume from an existing volume and read from it [disk.csi.azure.com]
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:823
[FAIL] Dynamic Provisioning [multi-az] [It] should clone a volume of larger size than the source volume and make sure the filesystem is appropriately adjusted [disk.csi.azure.com]
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:823
[FAIL] Dynamic Provisioning [multi-az] [It] should create a pod, write to its pv, take a volume snapshot, overwrite data in original pv, create another pod from the snapshot, and read unaltered original data from original pv [disk.csi.azure.com]
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:823
Ran 26 of 66 Specs in 5286.342 seconds
FAIL! -- 23 Passed | 3 Failed | 0 Pending | 40 Skipped
You're using deprecated Ginkgo functionality:
=============================================
Support for custom reporters has been removed in V2.
Please read the documentation linked to below for Ginkgo's new behavior and for a migration path:
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#removed-custom-reporters
To silence deprecations that can be silenced set the following environment variable:
ACK_GINKGO_DEPRECATIONS=2.4.0
--- FAIL: TestE2E (5286.34s)
FAIL
FAIL	sigs.k8s.io/azuredisk-csi-driver/test/e2e	5286.408s
FAIL
make: *** [Makefile:261: e2e-test] Error 1
2023/01/28 17:34:38 process.go:155: Step 'make e2e-test' finished in 1h29m48.682861543s
2023/01/28 17:34:38 aksengine_helpers.go:425: downloading /root/tmp3797534717/log-dump.sh from https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump.sh
2023/01/28 17:34:38 util.go:70: curl https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump.sh
2023/01/28 17:34:38 process.go:153: Running: chmod +x /root/tmp3797534717/log-dump.sh
2023/01/28 17:34:38 process.go:155: Step 'chmod +x /root/tmp3797534717/log-dump.sh' finished in 2.874479ms
2023/01/28 17:34:38 aksengine_helpers.go:425: downloading /root/tmp3797534717/log-dump-daemonset.yaml from https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump-daemonset.yaml
... skipping 63 lines ...
ssh key file /root/.ssh/id_rsa does not exist. Exiting.
2023/01/28 17:35:12 process.go:155: Step 'bash -c /root/tmp3797534717/win-ci-logs-collector.sh kubetest-g59foizt.westus2.cloudapp.azure.com /root/tmp3797534717 /root/.ssh/id_rsa' finished in 4.410981ms
2023/01/28 17:35:12 aksengine.go:1141: Deleting resource group: kubetest-g59foizt.
2023/01/28 17:41:17 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2023/01/28 17:41:17 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
2023/01/28 17:41:17 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 282.230079ms
2023/01/28 17:41:17 main.go:328: Something went wrong: encountered 1 errors: [error during make e2e-test: exit status 2]
+ EXIT_VALUE=1
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
ba28cff0692b
... skipping 4 lines ...
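For reference, the metric dumps earlier in the log (the go_* runtime series and the cloudprovider_azure_op_duration_seconds histogram) are standard Prometheus exposition output. A sketch of how such a histogram could be registered, observed, and exposed with prometheus/client_golang; the metric name and label names mirror the scraped output, while the bucket boundaries, help text, and listen port are illustrative assumptions.

```go
// opmetrics.go: produce and expose a latency histogram shaped like the
// cloudprovider_azure_op_duration_seconds series scraped above.
package main

import (
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var opDuration = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name:    "cloudprovider_azure_op_duration_seconds",
		Help:    "Latency of an Azure service operation.",
		Buckets: []float64{0.1, 0.25, 0.5, 1, 2.5, 5, 10, 30, 60, 120, 300, 600, 1200},
	},
	[]string{"request", "resource_group", "source", "subscription_id"},
)

func main() {
	prometheus.MustRegister(opDuration)

	// Record one observation the way a controller would after finishing an operation.
	start := time.Now()
	time.Sleep(50 * time.Millisecond) // stand-in for the real Azure call
	opDuration.WithLabelValues(
		"azuredisk_csi_driver_controller_unpublish_volume",
		"kubetest-g59foizt", "disk.csi.azure.com", "0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e",
	).Observe(time.Since(start).Seconds())

	// Expose /metrics in the text exposition format shown in the dump.
	http.Handle("/metrics", promhttp.Handler())
	_ = http.ListenAndServe(":8080", nil)
}
```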