Recent runs | View in Spyglass
PR | andyzhangx: fix: panic when allow-empty-cloud-config is set
Result | FAILURE
Tests | 1 failed / 13 succeeded
Started |
Elapsed | 1h16m
Revision | 811840a4df93d2694ed7545ed50a580b66eaa102
Refs | 1699
job-version | v1.27.0-alpha.1.73+8e642d3d0deab2
kubetest-version | v20230117-50d6df3625
revision | v1.27.0-alpha.1.73+8e642d3d0deab2
error during make e2e-test: exit status 2
from junit_runner.xml
kubetest Check APIReachability
kubetest Deferred TearDown
kubetest DumpClusterLogs
kubetest GetDeployer
kubetest IsUp
kubetest Prepare
kubetest TearDown
kubetest TearDown Previous
kubetest Timeout
kubetest Up
kubetest kubectl version
kubetest list nodes
kubetest test setup
... skipping 107 lines ... 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 11345 100 11345 0 0 184k 0 --:--:-- --:--:-- --:--:-- 184k Downloading https://get.helm.sh/helm-v3.11.0-linux-amd64.tar.gz Verifying checksum... Done. Preparing to install helm into /usr/local/bin helm installed into /usr/local/bin/helm docker pull k8sprow.azurecr.io/azuredisk-csi:v1.27.0-93a210d06a3c2f7f14a5b7d030e85f0e0d566e72 || make container-all push-manifest Error response from daemon: manifest for k8sprow.azurecr.io/azuredisk-csi:v1.27.0-93a210d06a3c2f7f14a5b7d030e85f0e0d566e72 not found: manifest unknown: manifest tagged by "v1.27.0-93a210d06a3c2f7f14a5b7d030e85f0e0d566e72" is not found make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver' CGO_ENABLED=0 GOOS=windows go build -a -ldflags "-X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.driverVersion=v1.27.0-93a210d06a3c2f7f14a5b7d030e85f0e0d566e72 -X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.gitCommit=93a210d06a3c2f7f14a5b7d030e85f0e0d566e72 -X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.buildDate=2023-01-29T07:53:12Z -extldflags "-static"" -mod vendor -o _output/amd64/azurediskplugin.exe ./pkg/azurediskplugin docker buildx rm container-builder || true ERROR: no builder "container-builder" found docker buildx create --use --name=container-builder container-builder # enable qemu for arm64 build # https://github.com/docker/buildx/issues/464#issuecomment-741507760 docker run --privileged --rm tonistiigi/binfmt --uninstall qemu-aarch64 Unable to find image 'tonistiigi/binfmt:latest' locally ... skipping 1758 lines ... type: string type: object oneOf: - required: ["persistentVolumeClaimName"] - required: ["volumeSnapshotContentName"] volumeSnapshotClassName: description: 'VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default Volume SnapshotClasses: one default per CSI Driver. If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exist for a given CSI Driver and more than one have been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field.' type: string required: - source type: object status: description: status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object. ... skipping 2 lines ... description: 'boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.' type: string creationTime: description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. 
In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown. format: date-time type: string error: description: error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers(i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurrs during the snapshot creation. Upon success, this error field will be cleared. properties: message: description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.' type: string time: description: time is the timestamp when the error was encountered. format: date-time type: string type: object readyToUse: description: readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. type: boolean restoreSize: type: string description: restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ x-kubernetes-int-or-string: true type: object required: - spec type: object ... skipping 60 lines ... type: string volumeSnapshotContentName: description: volumeSnapshotContentName specifies the name of a pre-existing VolumeSnapshotContent object representing an existing volume snapshot. This field should be set if the snapshot already exists and only needs a representation in Kubernetes. This field is immutable. type: string type: object volumeSnapshotClassName: description: 'VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default Volume SnapshotClasses: one default per CSI Driver. 
If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exist for a given CSI Driver and more than one have been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field.' type: string required: - source type: object status: description: status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object. ... skipping 2 lines ... description: 'boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.' type: string creationTime: description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown. format: date-time type: string error: description: error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers(i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurrs during the snapshot creation. Upon success, this error field will be cleared. properties: message: description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.' type: string time: description: time is the timestamp when the error was encountered. format: date-time type: string type: object readyToUse: description: readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. type: boolean restoreSize: type: string description: restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. 
For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ x-kubernetes-int-or-string: true type: object required: - spec type: object ... skipping 254 lines ... description: status represents the current information of a snapshot. properties: creationTime: description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. On Unix, the command `date +%s%N` returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC. format: int64 type: integer error: description: error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared. properties: message: description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.' type: string time: description: time is the timestamp when the error was encountered. format: date-time type: string type: object readyToUse: description: readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. type: boolean restoreSize: description: restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. format: int64 minimum: 0 type: integer snapshotHandle: description: snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress. type: string type: object required: - spec type: object served: true ... skipping 108 lines ... description: status represents the current information of a snapshot. 
properties: creationTime: description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. On Unix, the command `date +%s%N` returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC. format: int64 type: integer error: description: error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared. properties: message: description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.' type: string time: description: time is the timestamp when the error was encountered. format: date-time type: string type: object readyToUse: description: readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. type: boolean restoreSize: description: restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. format: int64 minimum: 0 type: integer snapshotHandle: description: snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress. type: string type: object required: - spec type: object served: true ... skipping 865 lines ... image: "mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.6.0" args: - "-csi-address=$(ADDRESS)" - "-v=2" - "-leader-election" - "--leader-election-namespace=kube-system" - '-handle-volume-inuse-error=false' - '-feature-gates=RecoverVolumeExpansionFailure=true' - "-timeout=240s" env: - name: ADDRESS value: /csi/csi.sock volumeMounts: ... skipping 216 lines ... 
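For orientation: the CRD schema dumped above describes VolumeSnapshot objects such as the following illustrative sketch. The class name, namespace, and PVC name are placeholders for this example, not values taken from this run.

# Hypothetical objects matching the VolumeSnapshot CRD schema above
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-azuredisk-vsc                      # placeholder class name
driver: disk.csi.azure.com
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: azuredisk-snapshot                     # placeholder
  namespace: default
spec:
  volumeSnapshotClassName: csi-azuredisk-vsc   # may be left out to use the default class
  source:
    # exactly one of persistentVolumeClaimName / volumeSnapshotContentName may be set (the oneOf above)
    persistentVolumeClaimName: pvc-azuredisk   # placeholder source PVC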
[1mSTEP:[0m setting up the StorageClass [38;5;243m01/29/23 08:03:48.115[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 08:03:48.115[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:03:48.175[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:03:48.175[0m [1mSTEP:[0m setting up the pod [38;5;243m01/29/23 08:03:48.24[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 08:03:48.24[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:03:48.303[0m Jan 29 08:03:48.303: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-6gbv7" in namespace "azuredisk-8081" to be "Succeeded or Failed" Jan 29 08:03:48.362: INFO: Pod "azuredisk-volume-tester-6gbv7": Phase="Pending", Reason="", readiness=false. Elapsed: 59.410832ms Jan 29 08:03:50.422: INFO: Pod "azuredisk-volume-tester-6gbv7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119530344s Jan 29 08:03:52.422: INFO: Pod "azuredisk-volume-tester-6gbv7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119690413s Jan 29 08:03:54.425: INFO: Pod "azuredisk-volume-tester-6gbv7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122399451s Jan 29 08:03:56.424: INFO: Pod "azuredisk-volume-tester-6gbv7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.121059074s Jan 29 08:03:58.423: INFO: Pod "azuredisk-volume-tester-6gbv7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.120099505s ... skipping 6 lines ... Jan 29 08:04:12.424: INFO: Pod "azuredisk-volume-tester-6gbv7": Phase="Pending", Reason="", readiness=false. Elapsed: 24.12103211s Jan 29 08:04:14.424: INFO: Pod "azuredisk-volume-tester-6gbv7": Phase="Pending", Reason="", readiness=false. Elapsed: 26.120705054s Jan 29 08:04:16.426: INFO: Pod "azuredisk-volume-tester-6gbv7": Phase="Pending", Reason="", readiness=false. Elapsed: 28.122916104s Jan 29 08:04:18.425: INFO: Pod "azuredisk-volume-tester-6gbv7": Phase="Pending", Reason="", readiness=false. Elapsed: 30.122493665s Jan 29 08:04:20.426: INFO: Pod "azuredisk-volume-tester-6gbv7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.122822529s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 08:04:20.426[0m Jan 29 08:04:20.426: INFO: Pod "azuredisk-volume-tester-6gbv7" satisfied condition "Succeeded or Failed" Jan 29 08:04:20.426: INFO: deleting Pod "azuredisk-8081"/"azuredisk-volume-tester-6gbv7" Jan 29 08:04:20.535: INFO: Pod azuredisk-volume-tester-6gbv7 has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-6gbv7 in namespace azuredisk-8081 [38;5;243m01/29/23 08:04:20.535[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/29/23 08:04:20.663[0m [1mSTEP:[0m checking the PV [38;5;243m01/29/23 08:04:20.723[0m ... skipping 57 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/29/23 08:03:48.115[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 08:03:48.115[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:03:48.175[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:03:48.175[0m [1mSTEP:[0m setting up the pod [38;5;243m01/29/23 08:03:48.24[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 08:03:48.24[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:03:48.303[0m Jan 29 08:03:48.303: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-6gbv7" in namespace "azuredisk-8081" to be "Succeeded or Failed" Jan 29 08:03:48.362: INFO: Pod "azuredisk-volume-tester-6gbv7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 59.410832ms Jan 29 08:03:50.422: INFO: Pod "azuredisk-volume-tester-6gbv7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119530344s Jan 29 08:03:52.422: INFO: Pod "azuredisk-volume-tester-6gbv7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119690413s Jan 29 08:03:54.425: INFO: Pod "azuredisk-volume-tester-6gbv7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122399451s Jan 29 08:03:56.424: INFO: Pod "azuredisk-volume-tester-6gbv7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.121059074s Jan 29 08:03:58.423: INFO: Pod "azuredisk-volume-tester-6gbv7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.120099505s ... skipping 6 lines ... Jan 29 08:04:12.424: INFO: Pod "azuredisk-volume-tester-6gbv7": Phase="Pending", Reason="", readiness=false. Elapsed: 24.12103211s Jan 29 08:04:14.424: INFO: Pod "azuredisk-volume-tester-6gbv7": Phase="Pending", Reason="", readiness=false. Elapsed: 26.120705054s Jan 29 08:04:16.426: INFO: Pod "azuredisk-volume-tester-6gbv7": Phase="Pending", Reason="", readiness=false. Elapsed: 28.122916104s Jan 29 08:04:18.425: INFO: Pod "azuredisk-volume-tester-6gbv7": Phase="Pending", Reason="", readiness=false. Elapsed: 30.122493665s Jan 29 08:04:20.426: INFO: Pod "azuredisk-volume-tester-6gbv7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.122822529s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 08:04:20.426[0m Jan 29 08:04:20.426: INFO: Pod "azuredisk-volume-tester-6gbv7" satisfied condition "Succeeded or Failed" Jan 29 08:04:20.426: INFO: deleting Pod "azuredisk-8081"/"azuredisk-volume-tester-6gbv7" Jan 29 08:04:20.535: INFO: Pod azuredisk-volume-tester-6gbv7 has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-6gbv7 in namespace azuredisk-8081 [38;5;243m01/29/23 08:04:20.535[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/29/23 08:04:20.663[0m [1mSTEP:[0m checking the PV [38;5;243m01/29/23 08:04:20.723[0m ... skipping 39 lines ... Jan 29 08:05:04.675: INFO: PersistentVolumeClaim pvc-x5lv5 found but phase is Pending instead of Bound. Jan 29 08:05:06.737: INFO: PersistentVolumeClaim pvc-x5lv5 found and phase=Bound (4.18252976s) [1mSTEP:[0m checking the PVC [38;5;243m01/29/23 08:05:06.738[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/29/23 08:05:06.799[0m [1mSTEP:[0m checking the PV [38;5;243m01/29/23 08:05:06.86[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 08:05:06.86[0m [1mSTEP:[0m checking that the pods command exits with no error [38;5;243m01/29/23 08:05:06.922[0m Jan 29 08:05:06.923: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-2tqh9" in namespace "azuredisk-2540" to be "Succeeded or Failed" Jan 29 08:05:06.981: INFO: Pod "azuredisk-volume-tester-2tqh9": Phase="Pending", Reason="", readiness=false. Elapsed: 58.912043ms Jan 29 08:05:09.043: INFO: Pod "azuredisk-volume-tester-2tqh9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120030354s Jan 29 08:05:11.042: INFO: Pod "azuredisk-volume-tester-2tqh9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119304861s Jan 29 08:05:13.041: INFO: Pod "azuredisk-volume-tester-2tqh9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118717457s Jan 29 08:05:15.043: INFO: Pod "azuredisk-volume-tester-2tqh9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120923265s Jan 29 08:05:17.043: INFO: Pod "azuredisk-volume-tester-2tqh9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.120304193s Jan 29 08:05:19.042: INFO: Pod "azuredisk-volume-tester-2tqh9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.119247889s Jan 29 08:05:21.043: INFO: Pod "azuredisk-volume-tester-2tqh9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.120639766s Jan 29 08:05:23.042: INFO: Pod "azuredisk-volume-tester-2tqh9": Phase="Pending", Reason="", readiness=false. Elapsed: 16.119803307s Jan 29 08:05:25.042: INFO: Pod "azuredisk-volume-tester-2tqh9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.118962175s Jan 29 08:05:27.044: INFO: Pod "azuredisk-volume-tester-2tqh9": Phase="Pending", Reason="", readiness=false. Elapsed: 20.121702211s Jan 29 08:05:29.043: INFO: Pod "azuredisk-volume-tester-2tqh9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.119969201s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 08:05:29.043[0m Jan 29 08:05:29.043: INFO: Pod "azuredisk-volume-tester-2tqh9" satisfied condition "Succeeded or Failed" Jan 29 08:05:29.043: INFO: deleting Pod "azuredisk-2540"/"azuredisk-volume-tester-2tqh9" Jan 29 08:05:29.144: INFO: Pod azuredisk-volume-tester-2tqh9 has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-2tqh9 in namespace azuredisk-2540 [38;5;243m01/29/23 08:05:29.144[0m Jan 29 08:05:29.213: INFO: deleting PVC "azuredisk-2540"/"pvc-x5lv5" Jan 29 08:05:29.213: INFO: Deleting PersistentVolumeClaim "pvc-x5lv5" ... skipping 38 lines ... Jan 29 08:05:04.675: INFO: PersistentVolumeClaim pvc-x5lv5 found but phase is Pending instead of Bound. Jan 29 08:05:06.737: INFO: PersistentVolumeClaim pvc-x5lv5 found and phase=Bound (4.18252976s) [1mSTEP:[0m checking the PVC [38;5;243m01/29/23 08:05:06.738[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/29/23 08:05:06.799[0m [1mSTEP:[0m checking the PV [38;5;243m01/29/23 08:05:06.86[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 08:05:06.86[0m [1mSTEP:[0m checking that the pods command exits with no error [38;5;243m01/29/23 08:05:06.922[0m Jan 29 08:05:06.923: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-2tqh9" in namespace "azuredisk-2540" to be "Succeeded or Failed" Jan 29 08:05:06.981: INFO: Pod "azuredisk-volume-tester-2tqh9": Phase="Pending", Reason="", readiness=false. Elapsed: 58.912043ms Jan 29 08:05:09.043: INFO: Pod "azuredisk-volume-tester-2tqh9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120030354s Jan 29 08:05:11.042: INFO: Pod "azuredisk-volume-tester-2tqh9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119304861s Jan 29 08:05:13.041: INFO: Pod "azuredisk-volume-tester-2tqh9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118717457s Jan 29 08:05:15.043: INFO: Pod "azuredisk-volume-tester-2tqh9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120923265s Jan 29 08:05:17.043: INFO: Pod "azuredisk-volume-tester-2tqh9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.120304193s Jan 29 08:05:19.042: INFO: Pod "azuredisk-volume-tester-2tqh9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.119247889s Jan 29 08:05:21.043: INFO: Pod "azuredisk-volume-tester-2tqh9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.120639766s Jan 29 08:05:23.042: INFO: Pod "azuredisk-volume-tester-2tqh9": Phase="Pending", Reason="", readiness=false. Elapsed: 16.119803307s Jan 29 08:05:25.042: INFO: Pod "azuredisk-volume-tester-2tqh9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.118962175s Jan 29 08:05:27.044: INFO: Pod "azuredisk-volume-tester-2tqh9": Phase="Pending", Reason="", readiness=false. Elapsed: 20.121702211s Jan 29 08:05:29.043: INFO: Pod "azuredisk-volume-tester-2tqh9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.119969201s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 08:05:29.043[0m Jan 29 08:05:29.043: INFO: Pod "azuredisk-volume-tester-2tqh9" satisfied condition "Succeeded or Failed" Jan 29 08:05:29.043: INFO: deleting Pod "azuredisk-2540"/"azuredisk-volume-tester-2tqh9" Jan 29 08:05:29.144: INFO: Pod azuredisk-volume-tester-2tqh9 has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-2tqh9 in namespace azuredisk-2540 [38;5;243m01/29/23 08:05:29.144[0m Jan 29 08:05:29.213: INFO: deleting PVC "azuredisk-2540"/"pvc-x5lv5" Jan 29 08:05:29.213: INFO: Deleting PersistentVolumeClaim "pvc-x5lv5" ... skipping 30 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/29/23 08:06:10.91[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 08:06:10.91[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:06:10.971[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:06:10.971[0m [1mSTEP:[0m setting up the pod [38;5;243m01/29/23 08:06:11.035[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 08:06:11.035[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:06:11.097[0m Jan 29 08:06:11.097: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-2vhtk" in namespace "azuredisk-4728" to be "Succeeded or Failed" Jan 29 08:06:11.157: INFO: Pod "azuredisk-volume-tester-2vhtk": Phase="Pending", Reason="", readiness=false. Elapsed: 59.687109ms Jan 29 08:06:13.217: INFO: Pod "azuredisk-volume-tester-2vhtk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120282674s Jan 29 08:06:15.216: INFO: Pod "azuredisk-volume-tester-2vhtk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119277423s Jan 29 08:06:17.216: INFO: Pod "azuredisk-volume-tester-2vhtk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119545764s Jan 29 08:06:19.217: INFO: Pod "azuredisk-volume-tester-2vhtk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120038054s Jan 29 08:06:21.217: INFO: Pod "azuredisk-volume-tester-2vhtk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.119958729s ... skipping 13 lines ... Jan 29 08:06:49.217: INFO: Pod "azuredisk-volume-tester-2vhtk": Phase="Pending", Reason="", readiness=false. Elapsed: 38.119724494s Jan 29 08:06:51.218: INFO: Pod "azuredisk-volume-tester-2vhtk": Phase="Pending", Reason="", readiness=false. Elapsed: 40.121554711s Jan 29 08:06:53.218: INFO: Pod "azuredisk-volume-tester-2vhtk": Phase="Pending", Reason="", readiness=false. Elapsed: 42.121003495s Jan 29 08:06:55.220: INFO: Pod "azuredisk-volume-tester-2vhtk": Phase="Pending", Reason="", readiness=false. Elapsed: 44.122951886s Jan 29 08:06:57.217: INFO: Pod "azuredisk-volume-tester-2vhtk": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 46.119727816s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 08:06:57.217[0m Jan 29 08:06:57.217: INFO: Pod "azuredisk-volume-tester-2vhtk" satisfied condition "Succeeded or Failed" Jan 29 08:06:57.217: INFO: deleting Pod "azuredisk-4728"/"azuredisk-volume-tester-2vhtk" Jan 29 08:06:57.321: INFO: Pod azuredisk-volume-tester-2vhtk has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-2vhtk in namespace azuredisk-4728 [38;5;243m01/29/23 08:06:57.321[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/29/23 08:06:57.445[0m [1mSTEP:[0m checking the PV [38;5;243m01/29/23 08:06:57.505[0m ... skipping 33 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/29/23 08:06:10.91[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 08:06:10.91[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:06:10.971[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:06:10.971[0m [1mSTEP:[0m setting up the pod [38;5;243m01/29/23 08:06:11.035[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 08:06:11.035[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:06:11.097[0m Jan 29 08:06:11.097: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-2vhtk" in namespace "azuredisk-4728" to be "Succeeded or Failed" Jan 29 08:06:11.157: INFO: Pod "azuredisk-volume-tester-2vhtk": Phase="Pending", Reason="", readiness=false. Elapsed: 59.687109ms Jan 29 08:06:13.217: INFO: Pod "azuredisk-volume-tester-2vhtk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120282674s Jan 29 08:06:15.216: INFO: Pod "azuredisk-volume-tester-2vhtk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119277423s Jan 29 08:06:17.216: INFO: Pod "azuredisk-volume-tester-2vhtk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119545764s Jan 29 08:06:19.217: INFO: Pod "azuredisk-volume-tester-2vhtk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120038054s Jan 29 08:06:21.217: INFO: Pod "azuredisk-volume-tester-2vhtk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.119958729s ... skipping 13 lines ... Jan 29 08:06:49.217: INFO: Pod "azuredisk-volume-tester-2vhtk": Phase="Pending", Reason="", readiness=false. Elapsed: 38.119724494s Jan 29 08:06:51.218: INFO: Pod "azuredisk-volume-tester-2vhtk": Phase="Pending", Reason="", readiness=false. Elapsed: 40.121554711s Jan 29 08:06:53.218: INFO: Pod "azuredisk-volume-tester-2vhtk": Phase="Pending", Reason="", readiness=false. Elapsed: 42.121003495s Jan 29 08:06:55.220: INFO: Pod "azuredisk-volume-tester-2vhtk": Phase="Pending", Reason="", readiness=false. Elapsed: 44.122951886s Jan 29 08:06:57.217: INFO: Pod "azuredisk-volume-tester-2vhtk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 46.119727816s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 08:06:57.217[0m Jan 29 08:06:57.217: INFO: Pod "azuredisk-volume-tester-2vhtk" satisfied condition "Succeeded or Failed" Jan 29 08:06:57.217: INFO: deleting Pod "azuredisk-4728"/"azuredisk-volume-tester-2vhtk" Jan 29 08:06:57.321: INFO: Pod azuredisk-volume-tester-2vhtk has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-2vhtk in namespace azuredisk-4728 [38;5;243m01/29/23 08:06:57.321[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/29/23 08:06:57.445[0m [1mSTEP:[0m checking the PV [38;5;243m01/29/23 08:06:57.505[0m ... skipping 34 lines ... 
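The repeated STEP sequence above (StorageClass → PVC → pod → wait for "Succeeded or Failed") provisions objects roughly like the sketch below. All names, the SKU, the image, and the requested size are illustrative assumptions, not the test's actual fixtures.

# Sketch of the kind of objects the dynamic-provisioning cases create
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azuredisk-e2e-sc             # placeholder
provisioner: disk.csi.azure.com
parameters:
  skuName: StandardSSD_LRS           # assumed SKU; the suite varies this per case
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azuredisk                # placeholder
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: azuredisk-e2e-sc
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: azuredisk-volume-tester      # placeholder
spec:
  restartPolicy: Never
  containers:
    - name: volume-tester
      image: busybox                 # assumed test image
      command: ["/bin/sh", "-c", "echo 'hello world' > /mnt/test-1/data && cat /mnt/test-1/data"]
      volumeMounts:
        - name: test-volume
          mountPath: /mnt/test-1
  volumes:
    - name: test-volume
      persistentVolumeClaim:
        claimName: pvc-azuredisk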
[1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:07:39.304[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:07:39.304[0m [1mSTEP:[0m setting up the pod [38;5;243m01/29/23 08:07:39.367[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 08:07:39.367[0m [1mSTEP:[0m checking that the pod has 'FailedMount' event [38;5;243m01/29/23 08:07:39.43[0m Jan 29 08:08:17.549: INFO: deleting Pod "azuredisk-5466"/"azuredisk-volume-tester-blcmx" Jan 29 08:08:17.611: INFO: Error getting logs for pod azuredisk-volume-tester-blcmx: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-blcmx) [1mSTEP:[0m Deleting pod azuredisk-volume-tester-blcmx in namespace azuredisk-5466 [38;5;243m01/29/23 08:08:17.611[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/29/23 08:08:17.733[0m [1mSTEP:[0m checking the PV [38;5;243m01/29/23 08:08:17.792[0m Jan 29 08:08:17.792: INFO: deleting PVC "azuredisk-5466"/"pvc-9n7bq" Jan 29 08:08:17.792: INFO: Deleting PersistentVolumeClaim "pvc-9n7bq" [1mSTEP:[0m waiting for claim's PV "pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc" to be deleted [38;5;243m01/29/23 08:08:17.853[0m ... skipping 32 lines ... [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:07:39.304[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:07:39.304[0m [1mSTEP:[0m setting up the pod [38;5;243m01/29/23 08:07:39.367[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 08:07:39.367[0m [1mSTEP:[0m checking that the pod has 'FailedMount' event [38;5;243m01/29/23 08:07:39.43[0m Jan 29 08:08:17.549: INFO: deleting Pod "azuredisk-5466"/"azuredisk-volume-tester-blcmx" Jan 29 08:08:17.611: INFO: Error getting logs for pod azuredisk-volume-tester-blcmx: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-blcmx) [1mSTEP:[0m Deleting pod azuredisk-volume-tester-blcmx in namespace azuredisk-5466 [38;5;243m01/29/23 08:08:17.611[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/29/23 08:08:17.733[0m [1mSTEP:[0m checking the PV [38;5;243m01/29/23 08:08:17.792[0m Jan 29 08:08:17.792: INFO: deleting PVC "azuredisk-5466"/"pvc-9n7bq" Jan 29 08:08:17.792: INFO: Deleting PersistentVolumeClaim "pvc-9n7bq" [1mSTEP:[0m waiting for claim's PV "pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc" to be deleted [38;5;243m01/29/23 08:08:17.853[0m ... skipping 29 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/29/23 08:08:59.442[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 08:08:59.443[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:08:59.503[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:08:59.503[0m [1mSTEP:[0m setting up the pod [38;5;243m01/29/23 08:08:59.566[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 08:08:59.566[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:08:59.629[0m Jan 29 08:08:59.629: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-dv2zv" in namespace "azuredisk-2790" to be "Succeeded or Failed" Jan 29 08:08:59.689: INFO: Pod "azuredisk-volume-tester-dv2zv": Phase="Pending", Reason="", readiness=false. Elapsed: 60.08148ms Jan 29 08:09:01.758: INFO: Pod "azuredisk-volume-tester-dv2zv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128509429s Jan 29 08:09:03.755: INFO: Pod "azuredisk-volume-tester-dv2zv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125756178s Jan 29 08:09:05.749: INFO: Pod "azuredisk-volume-tester-dv2zv": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.12015192s Jan 29 08:09:07.751: INFO: Pod "azuredisk-volume-tester-dv2zv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.122421443s Jan 29 08:09:09.750: INFO: Pod "azuredisk-volume-tester-dv2zv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.121130009s ... skipping 3 lines ... Jan 29 08:09:17.750: INFO: Pod "azuredisk-volume-tester-dv2zv": Phase="Pending", Reason="", readiness=false. Elapsed: 18.1211511s Jan 29 08:09:19.749: INFO: Pod "azuredisk-volume-tester-dv2zv": Phase="Pending", Reason="", readiness=false. Elapsed: 20.120405783s Jan 29 08:09:21.749: INFO: Pod "azuredisk-volume-tester-dv2zv": Phase="Pending", Reason="", readiness=false. Elapsed: 22.119902836s Jan 29 08:09:23.756: INFO: Pod "azuredisk-volume-tester-dv2zv": Phase="Running", Reason="", readiness=true. Elapsed: 24.126836626s Jan 29 08:09:25.751: INFO: Pod "azuredisk-volume-tester-dv2zv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.121500523s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 08:09:25.751[0m Jan 29 08:09:25.751: INFO: Pod "azuredisk-volume-tester-dv2zv" satisfied condition "Succeeded or Failed" Jan 29 08:09:25.751: INFO: deleting Pod "azuredisk-2790"/"azuredisk-volume-tester-dv2zv" Jan 29 08:09:25.825: INFO: Pod azuredisk-volume-tester-dv2zv has the following logs: e2e-test [1mSTEP:[0m Deleting pod azuredisk-volume-tester-dv2zv in namespace azuredisk-2790 [38;5;243m01/29/23 08:09:25.825[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/29/23 08:09:25.953[0m [1mSTEP:[0m checking the PV [38;5;243m01/29/23 08:09:26.012[0m ... skipping 33 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/29/23 08:08:59.442[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 08:08:59.443[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:08:59.503[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:08:59.503[0m [1mSTEP:[0m setting up the pod [38;5;243m01/29/23 08:08:59.566[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 08:08:59.566[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:08:59.629[0m Jan 29 08:08:59.629: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-dv2zv" in namespace "azuredisk-2790" to be "Succeeded or Failed" Jan 29 08:08:59.689: INFO: Pod "azuredisk-volume-tester-dv2zv": Phase="Pending", Reason="", readiness=false. Elapsed: 60.08148ms Jan 29 08:09:01.758: INFO: Pod "azuredisk-volume-tester-dv2zv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128509429s Jan 29 08:09:03.755: INFO: Pod "azuredisk-volume-tester-dv2zv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125756178s Jan 29 08:09:05.749: INFO: Pod "azuredisk-volume-tester-dv2zv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12015192s Jan 29 08:09:07.751: INFO: Pod "azuredisk-volume-tester-dv2zv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.122421443s Jan 29 08:09:09.750: INFO: Pod "azuredisk-volume-tester-dv2zv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.121130009s ... skipping 3 lines ... Jan 29 08:09:17.750: INFO: Pod "azuredisk-volume-tester-dv2zv": Phase="Pending", Reason="", readiness=false. Elapsed: 18.1211511s Jan 29 08:09:19.749: INFO: Pod "azuredisk-volume-tester-dv2zv": Phase="Pending", Reason="", readiness=false. Elapsed: 20.120405783s Jan 29 08:09:21.749: INFO: Pod "azuredisk-volume-tester-dv2zv": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.119902836s Jan 29 08:09:23.756: INFO: Pod "azuredisk-volume-tester-dv2zv": Phase="Running", Reason="", readiness=true. Elapsed: 24.126836626s Jan 29 08:09:25.751: INFO: Pod "azuredisk-volume-tester-dv2zv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.121500523s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 08:09:25.751[0m Jan 29 08:09:25.751: INFO: Pod "azuredisk-volume-tester-dv2zv" satisfied condition "Succeeded or Failed" Jan 29 08:09:25.751: INFO: deleting Pod "azuredisk-2790"/"azuredisk-volume-tester-dv2zv" Jan 29 08:09:25.825: INFO: Pod azuredisk-volume-tester-dv2zv has the following logs: e2e-test [1mSTEP:[0m Deleting pod azuredisk-volume-tester-dv2zv in namespace azuredisk-2790 [38;5;243m01/29/23 08:09:25.825[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/29/23 08:09:25.953[0m [1mSTEP:[0m checking the PV [38;5;243m01/29/23 08:09:26.012[0m ... skipping 37 lines ... [1mSTEP:[0m creating volume in external rg azuredisk-csi-driver-test-57f77ff6-9fac-11ed-843a-6e0650d04a6b [38;5;243m01/29/23 08:10:09.17[0m [1mSTEP:[0m setting up the StorageClass [38;5;243m01/29/23 08:10:09.171[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 08:10:09.171[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:10:09.233[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:10:09.233[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 08:10:09.295[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:10:09.357[0m Jan 29 08:10:09.358: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-hpz7g" in namespace "azuredisk-5429" to be "Succeeded or Failed" Jan 29 08:10:09.417: INFO: Pod "azuredisk-volume-tester-hpz7g": Phase="Pending", Reason="", readiness=false. Elapsed: 59.237943ms Jan 29 08:10:11.478: INFO: Pod "azuredisk-volume-tester-hpz7g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120103147s Jan 29 08:10:13.479: INFO: Pod "azuredisk-volume-tester-hpz7g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120984328s Jan 29 08:10:15.477: INFO: Pod "azuredisk-volume-tester-hpz7g": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119644386s Jan 29 08:10:17.479: INFO: Pod "azuredisk-volume-tester-hpz7g": Phase="Pending", Reason="", readiness=false. Elapsed: 8.12090353s Jan 29 08:10:19.477: INFO: Pod "azuredisk-volume-tester-hpz7g": Phase="Pending", Reason="", readiness=false. Elapsed: 10.119605928s ... skipping 2 lines ... Jan 29 08:10:25.480: INFO: Pod "azuredisk-volume-tester-hpz7g": Phase="Pending", Reason="", readiness=false. Elapsed: 16.122067103s Jan 29 08:10:27.478: INFO: Pod "azuredisk-volume-tester-hpz7g": Phase="Pending", Reason="", readiness=false. Elapsed: 18.120785884s Jan 29 08:10:29.478: INFO: Pod "azuredisk-volume-tester-hpz7g": Phase="Pending", Reason="", readiness=false. Elapsed: 20.120082938s Jan 29 08:10:31.478: INFO: Pod "azuredisk-volume-tester-hpz7g": Phase="Pending", Reason="", readiness=false. Elapsed: 22.120670107s Jan 29 08:10:33.477: INFO: Pod "azuredisk-volume-tester-hpz7g": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.119580751s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 08:10:33.477[0m Jan 29 08:10:33.477: INFO: Pod "azuredisk-volume-tester-hpz7g" satisfied condition "Succeeded or Failed" Jan 29 08:10:33.477: INFO: deleting Pod "azuredisk-5429"/"azuredisk-volume-tester-hpz7g" Jan 29 08:10:33.540: INFO: Pod azuredisk-volume-tester-hpz7g has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-hpz7g in namespace azuredisk-5429 [38;5;243m01/29/23 08:10:33.54[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/29/23 08:10:33.664[0m [1mSTEP:[0m checking the PV [38;5;243m01/29/23 08:10:33.723[0m ... skipping 37 lines ... [1mSTEP:[0m creating volume in external rg azuredisk-csi-driver-test-57f77ff6-9fac-11ed-843a-6e0650d04a6b [38;5;243m01/29/23 08:10:09.17[0m [1mSTEP:[0m setting up the StorageClass [38;5;243m01/29/23 08:10:09.171[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 08:10:09.171[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:10:09.233[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:10:09.233[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 08:10:09.295[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:10:09.357[0m Jan 29 08:10:09.358: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-hpz7g" in namespace "azuredisk-5429" to be "Succeeded or Failed" Jan 29 08:10:09.417: INFO: Pod "azuredisk-volume-tester-hpz7g": Phase="Pending", Reason="", readiness=false. Elapsed: 59.237943ms Jan 29 08:10:11.478: INFO: Pod "azuredisk-volume-tester-hpz7g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120103147s Jan 29 08:10:13.479: INFO: Pod "azuredisk-volume-tester-hpz7g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120984328s Jan 29 08:10:15.477: INFO: Pod "azuredisk-volume-tester-hpz7g": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119644386s Jan 29 08:10:17.479: INFO: Pod "azuredisk-volume-tester-hpz7g": Phase="Pending", Reason="", readiness=false. Elapsed: 8.12090353s Jan 29 08:10:19.477: INFO: Pod "azuredisk-volume-tester-hpz7g": Phase="Pending", Reason="", readiness=false. Elapsed: 10.119605928s ... skipping 2 lines ... Jan 29 08:10:25.480: INFO: Pod "azuredisk-volume-tester-hpz7g": Phase="Pending", Reason="", readiness=false. Elapsed: 16.122067103s Jan 29 08:10:27.478: INFO: Pod "azuredisk-volume-tester-hpz7g": Phase="Pending", Reason="", readiness=false. Elapsed: 18.120785884s Jan 29 08:10:29.478: INFO: Pod "azuredisk-volume-tester-hpz7g": Phase="Pending", Reason="", readiness=false. Elapsed: 20.120082938s Jan 29 08:10:31.478: INFO: Pod "azuredisk-volume-tester-hpz7g": Phase="Pending", Reason="", readiness=false. Elapsed: 22.120670107s Jan 29 08:10:33.477: INFO: Pod "azuredisk-volume-tester-hpz7g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.119580751s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 08:10:33.477[0m Jan 29 08:10:33.477: INFO: Pod "azuredisk-volume-tester-hpz7g" satisfied condition "Succeeded or Failed" Jan 29 08:10:33.477: INFO: deleting Pod "azuredisk-5429"/"azuredisk-volume-tester-hpz7g" Jan 29 08:10:33.540: INFO: Pod azuredisk-volume-tester-hpz7g has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-hpz7g in namespace azuredisk-5429 [38;5;243m01/29/23 08:10:33.54[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/29/23 08:10:33.664[0m [1mSTEP:[0m checking the PV [38;5;243m01/29/23 08:10:33.723[0m ... skipping 44 lines ... 
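The "creating volume in external rg …" step above provisions the disk in a resource group other than the cluster's own. With the azuredisk CSI driver this is normally expressed through a StorageClass parameter, roughly as in this sketch; the resource-group name and SKU are placeholders.

# Sketch: StorageClass that provisions disks into an explicit (external) resource group
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azuredisk-external-rg-sc                      # placeholder
provisioner: disk.csi.azure.com
parameters:
  skuName: StandardSSD_LRS                            # assumed SKU
  resourceGroup: azuredisk-csi-driver-test-external   # placeholder external resource group
reclaimPolicy: Delete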
[1mSTEP:[0m creating volume in external rg azuredisk-csi-driver-test-8a642555-9fac-11ed-843a-6e0650d04a6b [38;5;243m01/29/23 08:11:32.713[0m [1mSTEP:[0m setting up the StorageClass [38;5;243m01/29/23 08:11:32.713[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 08:11:32.713[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:11:32.775[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:11:32.775[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 08:11:32.836[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:11:32.896[0m Jan 29 08:11:32.896: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-vt2gz" in namespace "azuredisk-3090" to be "Succeeded or Failed" Jan 29 08:11:32.955: INFO: Pod "azuredisk-volume-tester-vt2gz": Phase="Pending", Reason="", readiness=false. Elapsed: 59.170135ms Jan 29 08:11:35.016: INFO: Pod "azuredisk-volume-tester-vt2gz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119602218s Jan 29 08:11:37.015: INFO: Pod "azuredisk-volume-tester-vt2gz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119341675s Jan 29 08:11:39.015: INFO: Pod "azuredisk-volume-tester-vt2gz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118928049s Jan 29 08:11:41.016: INFO: Pod "azuredisk-volume-tester-vt2gz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.119659771s Jan 29 08:11:43.016: INFO: Pod "azuredisk-volume-tester-vt2gz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.119437048s ... skipping 9 lines ... Jan 29 08:12:03.015: INFO: Pod "azuredisk-volume-tester-vt2gz": Phase="Pending", Reason="", readiness=false. Elapsed: 30.11906969s Jan 29 08:12:05.016: INFO: Pod "azuredisk-volume-tester-vt2gz": Phase="Pending", Reason="", readiness=false. Elapsed: 32.120064744s Jan 29 08:12:07.017: INFO: Pod "azuredisk-volume-tester-vt2gz": Phase="Pending", Reason="", readiness=false. Elapsed: 34.120374538s Jan 29 08:12:09.016: INFO: Pod "azuredisk-volume-tester-vt2gz": Phase="Pending", Reason="", readiness=false. Elapsed: 36.120346365s Jan 29 08:12:11.015: INFO: Pod "azuredisk-volume-tester-vt2gz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.119202553s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 08:12:11.015[0m Jan 29 08:12:11.016: INFO: Pod "azuredisk-volume-tester-vt2gz" satisfied condition "Succeeded or Failed" Jan 29 08:12:11.016: INFO: deleting Pod "azuredisk-3090"/"azuredisk-volume-tester-vt2gz" Jan 29 08:12:11.114: INFO: Pod azuredisk-volume-tester-vt2gz has the following logs: hello world hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-vt2gz in namespace azuredisk-3090 [38;5;243m01/29/23 08:12:11.114[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/29/23 08:12:11.238[0m ... skipping 63 lines ... 
[1mSTEP:[0m creating volume in external rg azuredisk-csi-driver-test-8a642555-9fac-11ed-843a-6e0650d04a6b [38;5;243m01/29/23 08:11:32.713[0m [1mSTEP:[0m setting up the StorageClass [38;5;243m01/29/23 08:11:32.713[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 08:11:32.713[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:11:32.775[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:11:32.775[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 08:11:32.836[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:11:32.896[0m Jan 29 08:11:32.896: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-vt2gz" in namespace "azuredisk-3090" to be "Succeeded or Failed" Jan 29 08:11:32.955: INFO: Pod "azuredisk-volume-tester-vt2gz": Phase="Pending", Reason="", readiness=false. Elapsed: 59.170135ms Jan 29 08:11:35.016: INFO: Pod "azuredisk-volume-tester-vt2gz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119602218s Jan 29 08:11:37.015: INFO: Pod "azuredisk-volume-tester-vt2gz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119341675s Jan 29 08:11:39.015: INFO: Pod "azuredisk-volume-tester-vt2gz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118928049s Jan 29 08:11:41.016: INFO: Pod "azuredisk-volume-tester-vt2gz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.119659771s Jan 29 08:11:43.016: INFO: Pod "azuredisk-volume-tester-vt2gz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.119437048s ... skipping 9 lines ... Jan 29 08:12:03.015: INFO: Pod "azuredisk-volume-tester-vt2gz": Phase="Pending", Reason="", readiness=false. Elapsed: 30.11906969s Jan 29 08:12:05.016: INFO: Pod "azuredisk-volume-tester-vt2gz": Phase="Pending", Reason="", readiness=false. Elapsed: 32.120064744s Jan 29 08:12:07.017: INFO: Pod "azuredisk-volume-tester-vt2gz": Phase="Pending", Reason="", readiness=false. Elapsed: 34.120374538s Jan 29 08:12:09.016: INFO: Pod "azuredisk-volume-tester-vt2gz": Phase="Pending", Reason="", readiness=false. Elapsed: 36.120346365s Jan 29 08:12:11.015: INFO: Pod "azuredisk-volume-tester-vt2gz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.119202553s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 08:12:11.015[0m Jan 29 08:12:11.016: INFO: Pod "azuredisk-volume-tester-vt2gz" satisfied condition "Succeeded or Failed" Jan 29 08:12:11.016: INFO: deleting Pod "azuredisk-3090"/"azuredisk-volume-tester-vt2gz" Jan 29 08:12:11.114: INFO: Pod azuredisk-volume-tester-vt2gz has the following logs: hello world hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-vt2gz in namespace azuredisk-3090 [38;5;243m01/29/23 08:12:11.114[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/29/23 08:12:11.238[0m ... skipping 53 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/29/23 08:14:06.113[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 08:14:06.113[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:14:06.174[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:14:06.175[0m [1mSTEP:[0m setting up the pod [38;5;243m01/29/23 08:14:06.237[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 08:14:06.237[0m [1mSTEP:[0m checking that the pod's command exits with an error [38;5;243m01/29/23 08:14:06.304[0m Jan 29 08:14:06.304: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-jr87r" in namespace "azuredisk-6159" to be "Error status code" Jan 29 08:14:06.362: INFO: Pod "azuredisk-volume-tester-jr87r": Phase="Pending", Reason="", readiness=false. 
Elapsed: 58.25983ms Jan 29 08:14:08.423: INFO: Pod "azuredisk-volume-tester-jr87r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118946966s Jan 29 08:14:10.433: INFO: Pod "azuredisk-volume-tester-jr87r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129367408s Jan 29 08:14:12.423: INFO: Pod "azuredisk-volume-tester-jr87r": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118909607s Jan 29 08:14:14.422: INFO: Pod "azuredisk-volume-tester-jr87r": Phase="Pending", Reason="", readiness=false. Elapsed: 8.118397437s Jan 29 08:14:16.423: INFO: Pod "azuredisk-volume-tester-jr87r": Phase="Pending", Reason="", readiness=false. Elapsed: 10.119575716s ... skipping 24 lines ... Jan 29 08:15:06.422: INFO: Pod "azuredisk-volume-tester-jr87r": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.118164621s Jan 29 08:15:08.428: INFO: Pod "azuredisk-volume-tester-jr87r": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.12372955s Jan 29 08:15:10.423: INFO: Pod "azuredisk-volume-tester-jr87r": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.119281778s Jan 29 08:15:12.424: INFO: Pod "azuredisk-volume-tester-jr87r": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.120447686s Jan 29 08:15:14.422: INFO: Pod "azuredisk-volume-tester-jr87r": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.118301934s Jan 29 08:15:16.424: INFO: Pod "azuredisk-volume-tester-jr87r": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.119876069s Jan 29 08:15:18.422: INFO: Pod "azuredisk-volume-tester-jr87r": Phase="Failed", Reason="", readiness=false. Elapsed: 1m12.118574779s [1mSTEP:[0m Saw pod failure [38;5;243m01/29/23 08:15:18.423[0m Jan 29 08:15:18.423: INFO: Pod "azuredisk-volume-tester-jr87r" satisfied condition "Error status code" [1mSTEP:[0m checking that pod logs contain expected message [38;5;243m01/29/23 08:15:18.423[0m Jan 29 08:15:18.518: INFO: deleting Pod "azuredisk-6159"/"azuredisk-volume-tester-jr87r" Jan 29 08:15:18.581: INFO: Pod azuredisk-volume-tester-jr87r has the following logs: touch: /mnt/test-1/data: Read-only file system [1mSTEP:[0m Deleting pod azuredisk-volume-tester-jr87r in namespace azuredisk-6159 [38;5;243m01/29/23 08:15:18.581[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/29/23 08:15:18.703[0m ... skipping 34 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/29/23 08:14:06.113[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 08:14:06.113[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:14:06.174[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:14:06.175[0m [1mSTEP:[0m setting up the pod [38;5;243m01/29/23 08:14:06.237[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 08:14:06.237[0m [1mSTEP:[0m checking that the pod's command exits with an error [38;5;243m01/29/23 08:14:06.304[0m Jan 29 08:14:06.304: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-jr87r" in namespace "azuredisk-6159" to be "Error status code" Jan 29 08:14:06.362: INFO: Pod "azuredisk-volume-tester-jr87r": Phase="Pending", Reason="", readiness=false. Elapsed: 58.25983ms Jan 29 08:14:08.423: INFO: Pod "azuredisk-volume-tester-jr87r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118946966s Jan 29 08:14:10.433: INFO: Pod "azuredisk-volume-tester-jr87r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129367408s Jan 29 08:14:12.423: INFO: Pod "azuredisk-volume-tester-jr87r": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.118909607s Jan 29 08:14:14.422: INFO: Pod "azuredisk-volume-tester-jr87r": Phase="Pending", Reason="", readiness=false. Elapsed: 8.118397437s Jan 29 08:14:16.423: INFO: Pod "azuredisk-volume-tester-jr87r": Phase="Pending", Reason="", readiness=false. Elapsed: 10.119575716s ... skipping 24 lines ... Jan 29 08:15:06.422: INFO: Pod "azuredisk-volume-tester-jr87r": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.118164621s Jan 29 08:15:08.428: INFO: Pod "azuredisk-volume-tester-jr87r": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.12372955s Jan 29 08:15:10.423: INFO: Pod "azuredisk-volume-tester-jr87r": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.119281778s Jan 29 08:15:12.424: INFO: Pod "azuredisk-volume-tester-jr87r": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.120447686s Jan 29 08:15:14.422: INFO: Pod "azuredisk-volume-tester-jr87r": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.118301934s Jan 29 08:15:16.424: INFO: Pod "azuredisk-volume-tester-jr87r": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.119876069s Jan 29 08:15:18.422: INFO: Pod "azuredisk-volume-tester-jr87r": Phase="Failed", Reason="", readiness=false. Elapsed: 1m12.118574779s [1mSTEP:[0m Saw pod failure [38;5;243m01/29/23 08:15:18.423[0m Jan 29 08:15:18.423: INFO: Pod "azuredisk-volume-tester-jr87r" satisfied condition "Error status code" [1mSTEP:[0m checking that pod logs contain expected message [38;5;243m01/29/23 08:15:18.423[0m Jan 29 08:15:18.518: INFO: deleting Pod "azuredisk-6159"/"azuredisk-volume-tester-jr87r" Jan 29 08:15:18.581: INFO: Pod azuredisk-volume-tester-jr87r has the following logs: touch: /mnt/test-1/data: Read-only file system [1mSTEP:[0m Deleting pod azuredisk-volume-tester-jr87r in namespace azuredisk-6159 [38;5;243m01/29/23 08:15:18.581[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/29/23 08:15:18.703[0m ... skipping 655 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/29/23 08:23:18.241[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 08:23:18.241[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:23:18.303[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:23:18.303[0m [1mSTEP:[0m setting up the pod [38;5;243m01/29/23 08:23:18.366[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 08:23:18.367[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:23:18.429[0m Jan 29 08:23:18.429: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-rb6ft" in namespace "azuredisk-9241" to be "Succeeded or Failed" Jan 29 08:23:18.488: INFO: Pod "azuredisk-volume-tester-rb6ft": Phase="Pending", Reason="", readiness=false. Elapsed: 58.892867ms Jan 29 08:23:20.549: INFO: Pod "azuredisk-volume-tester-rb6ft": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120114691s Jan 29 08:23:22.550: INFO: Pod "azuredisk-volume-tester-rb6ft": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12112567s Jan 29 08:23:24.548: INFO: Pod "azuredisk-volume-tester-rb6ft": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118971106s Jan 29 08:23:26.550: INFO: Pod "azuredisk-volume-tester-rb6ft": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120571193s Jan 29 08:23:28.547: INFO: Pod "azuredisk-volume-tester-rb6ft": Phase="Pending", Reason="", readiness=false. Elapsed: 10.11794729s ... skipping 2 lines ... Jan 29 08:23:34.549: INFO: Pod "azuredisk-volume-tester-rb6ft": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.12021599s Jan 29 08:23:36.550: INFO: Pod "azuredisk-volume-tester-rb6ft": Phase="Pending", Reason="", readiness=false. Elapsed: 18.120456067s Jan 29 08:23:38.548: INFO: Pod "azuredisk-volume-tester-rb6ft": Phase="Pending", Reason="", readiness=false. Elapsed: 20.119161274s Jan 29 08:23:40.551: INFO: Pod "azuredisk-volume-tester-rb6ft": Phase="Pending", Reason="", readiness=false. Elapsed: 22.122410343s Jan 29 08:23:42.549: INFO: Pod "azuredisk-volume-tester-rb6ft": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.11956017s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 08:23:42.549[0m Jan 29 08:23:42.549: INFO: Pod "azuredisk-volume-tester-rb6ft" satisfied condition "Succeeded or Failed" [1mSTEP:[0m sleep 5s and then clone volume [38;5;243m01/29/23 08:23:42.549[0m [1mSTEP:[0m cloning existing volume [38;5;243m01/29/23 08:23:47.549[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:23:47.672[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:23:47.672[0m [1mSTEP:[0m setting up the pod [38;5;243m01/29/23 08:23:47.734[0m [1mSTEP:[0m deploying a second pod with cloned volume [38;5;243m01/29/23 08:23:47.735[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:23:47.796[0m Jan 29 08:23:47.796: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-v9qzf" in namespace "azuredisk-9241" to be "Succeeded or Failed" Jan 29 08:23:47.856: INFO: Pod "azuredisk-volume-tester-v9qzf": Phase="Pending", Reason="", readiness=false. Elapsed: 59.041554ms Jan 29 08:23:49.917: INFO: Pod "azuredisk-volume-tester-v9qzf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120961695s Jan 29 08:23:51.916: INFO: Pod "azuredisk-volume-tester-v9qzf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119200734s Jan 29 08:23:53.916: INFO: Pod "azuredisk-volume-tester-v9qzf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119866491s Jan 29 08:23:55.916: INFO: Pod "azuredisk-volume-tester-v9qzf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.119973818s Jan 29 08:23:57.917: INFO: Pod "azuredisk-volume-tester-v9qzf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.120180307s ... skipping 2 lines ... Jan 29 08:24:03.935: INFO: Pod "azuredisk-volume-tester-v9qzf": Phase="Pending", Reason="", readiness=false. Elapsed: 16.138810204s Jan 29 08:24:05.917: INFO: Pod "azuredisk-volume-tester-v9qzf": Phase="Pending", Reason="", readiness=false. Elapsed: 18.120386216s Jan 29 08:24:07.920: INFO: Pod "azuredisk-volume-tester-v9qzf": Phase="Pending", Reason="", readiness=false. Elapsed: 20.123367913s Jan 29 08:24:09.916: INFO: Pod "azuredisk-volume-tester-v9qzf": Phase="Pending", Reason="", readiness=false. Elapsed: 22.119871828s Jan 29 08:24:11.919: INFO: Pod "azuredisk-volume-tester-v9qzf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.122354749s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 08:24:11.919[0m Jan 29 08:24:11.919: INFO: Pod "azuredisk-volume-tester-v9qzf" satisfied condition "Succeeded or Failed" Jan 29 08:24:11.919: INFO: deleting Pod "azuredisk-9241"/"azuredisk-volume-tester-v9qzf" Jan 29 08:24:12.007: INFO: Pod azuredisk-volume-tester-v9qzf has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-v9qzf in namespace azuredisk-9241 [38;5;243m01/29/23 08:24:12.007[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/29/23 08:24:12.135[0m [1mSTEP:[0m checking the PV [38;5;243m01/29/23 08:24:12.194[0m ... skipping 53 lines ... 
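Editor's note: the spec above ("cloning existing volume", "deploying a second pod with cloned volume") exercises CSI volume cloning — a second PVC is created with the first PVC as its data source, and a second pod confirms the cloned disk already holds the data the first pod wrote. A minimal sketch of the resources involved follows; names, sizes, and the StorageClass name are illustrative assumptions, since the real objects are generated programmatically by the e2e suite.

# Illustrative only: not the exact objects created by the test suite.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-source              # volume written by the first tester pod
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: disk.csi.azure.com-dynamic-sc
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-clone               # mounted by the second tester pod
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: disk.csi.azure.com-dynamic-sc
  dataSource:                   # clone: the source must be a PVC in the same namespace
    kind: PersistentVolumeClaim
    name: pvc-source
  resources:
    requests:
      storage: 10Gi             # must be at least the source PVC's size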
[1mSTEP:[0m setting up the StorageClass [38;5;243m01/29/23 08:23:18.241[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 08:23:18.241[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:23:18.303[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:23:18.303[0m [1mSTEP:[0m setting up the pod [38;5;243m01/29/23 08:23:18.366[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 08:23:18.367[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:23:18.429[0m Jan 29 08:23:18.429: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-rb6ft" in namespace "azuredisk-9241" to be "Succeeded or Failed" Jan 29 08:23:18.488: INFO: Pod "azuredisk-volume-tester-rb6ft": Phase="Pending", Reason="", readiness=false. Elapsed: 58.892867ms Jan 29 08:23:20.549: INFO: Pod "azuredisk-volume-tester-rb6ft": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120114691s Jan 29 08:23:22.550: INFO: Pod "azuredisk-volume-tester-rb6ft": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12112567s Jan 29 08:23:24.548: INFO: Pod "azuredisk-volume-tester-rb6ft": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118971106s Jan 29 08:23:26.550: INFO: Pod "azuredisk-volume-tester-rb6ft": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120571193s Jan 29 08:23:28.547: INFO: Pod "azuredisk-volume-tester-rb6ft": Phase="Pending", Reason="", readiness=false. Elapsed: 10.11794729s ... skipping 2 lines ... Jan 29 08:23:34.549: INFO: Pod "azuredisk-volume-tester-rb6ft": Phase="Pending", Reason="", readiness=false. Elapsed: 16.12021599s Jan 29 08:23:36.550: INFO: Pod "azuredisk-volume-tester-rb6ft": Phase="Pending", Reason="", readiness=false. Elapsed: 18.120456067s Jan 29 08:23:38.548: INFO: Pod "azuredisk-volume-tester-rb6ft": Phase="Pending", Reason="", readiness=false. Elapsed: 20.119161274s Jan 29 08:23:40.551: INFO: Pod "azuredisk-volume-tester-rb6ft": Phase="Pending", Reason="", readiness=false. Elapsed: 22.122410343s Jan 29 08:23:42.549: INFO: Pod "azuredisk-volume-tester-rb6ft": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.11956017s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 08:23:42.549[0m Jan 29 08:23:42.549: INFO: Pod "azuredisk-volume-tester-rb6ft" satisfied condition "Succeeded or Failed" [1mSTEP:[0m sleep 5s and then clone volume [38;5;243m01/29/23 08:23:42.549[0m [1mSTEP:[0m cloning existing volume [38;5;243m01/29/23 08:23:47.549[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:23:47.672[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:23:47.672[0m [1mSTEP:[0m setting up the pod [38;5;243m01/29/23 08:23:47.734[0m [1mSTEP:[0m deploying a second pod with cloned volume [38;5;243m01/29/23 08:23:47.735[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:23:47.796[0m Jan 29 08:23:47.796: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-v9qzf" in namespace "azuredisk-9241" to be "Succeeded or Failed" Jan 29 08:23:47.856: INFO: Pod "azuredisk-volume-tester-v9qzf": Phase="Pending", Reason="", readiness=false. Elapsed: 59.041554ms Jan 29 08:23:49.917: INFO: Pod "azuredisk-volume-tester-v9qzf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120961695s Jan 29 08:23:51.916: INFO: Pod "azuredisk-volume-tester-v9qzf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119200734s Jan 29 08:23:53.916: INFO: Pod "azuredisk-volume-tester-v9qzf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.119866491s Jan 29 08:23:55.916: INFO: Pod "azuredisk-volume-tester-v9qzf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.119973818s Jan 29 08:23:57.917: INFO: Pod "azuredisk-volume-tester-v9qzf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.120180307s ... skipping 2 lines ... Jan 29 08:24:03.935: INFO: Pod "azuredisk-volume-tester-v9qzf": Phase="Pending", Reason="", readiness=false. Elapsed: 16.138810204s Jan 29 08:24:05.917: INFO: Pod "azuredisk-volume-tester-v9qzf": Phase="Pending", Reason="", readiness=false. Elapsed: 18.120386216s Jan 29 08:24:07.920: INFO: Pod "azuredisk-volume-tester-v9qzf": Phase="Pending", Reason="", readiness=false. Elapsed: 20.123367913s Jan 29 08:24:09.916: INFO: Pod "azuredisk-volume-tester-v9qzf": Phase="Pending", Reason="", readiness=false. Elapsed: 22.119871828s Jan 29 08:24:11.919: INFO: Pod "azuredisk-volume-tester-v9qzf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.122354749s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 08:24:11.919[0m Jan 29 08:24:11.919: INFO: Pod "azuredisk-volume-tester-v9qzf" satisfied condition "Succeeded or Failed" Jan 29 08:24:11.919: INFO: deleting Pod "azuredisk-9241"/"azuredisk-volume-tester-v9qzf" Jan 29 08:24:12.007: INFO: Pod azuredisk-volume-tester-v9qzf has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-v9qzf in namespace azuredisk-9241 [38;5;243m01/29/23 08:24:12.007[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/29/23 08:24:12.135[0m [1mSTEP:[0m checking the PV [38;5;243m01/29/23 08:24:12.194[0m ... skipping 52 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/29/23 08:25:34.829[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 08:25:34.829[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:25:34.891[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:25:34.891[0m [1mSTEP:[0m setting up the pod [38;5;243m01/29/23 08:25:34.953[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 08:25:34.953[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:25:35.017[0m Jan 29 08:25:35.017: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-n5m2q" in namespace "azuredisk-9336" to be "Succeeded or Failed" Jan 29 08:25:35.076: INFO: Pod "azuredisk-volume-tester-n5m2q": Phase="Pending", Reason="", readiness=false. Elapsed: 59.468945ms Jan 29 08:25:37.136: INFO: Pod "azuredisk-volume-tester-n5m2q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119098718s Jan 29 08:25:39.137: INFO: Pod "azuredisk-volume-tester-n5m2q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120006076s Jan 29 08:25:41.137: INFO: Pod "azuredisk-volume-tester-n5m2q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120063732s Jan 29 08:25:43.137: INFO: Pod "azuredisk-volume-tester-n5m2q": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120298847s Jan 29 08:25:45.137: INFO: Pod "azuredisk-volume-tester-n5m2q": Phase="Pending", Reason="", readiness=false. Elapsed: 10.120432858s ... skipping 2 lines ... Jan 29 08:25:51.136: INFO: Pod "azuredisk-volume-tester-n5m2q": Phase="Pending", Reason="", readiness=false. Elapsed: 16.119115175s Jan 29 08:25:53.137: INFO: Pod "azuredisk-volume-tester-n5m2q": Phase="Pending", Reason="", readiness=false. Elapsed: 18.120460005s Jan 29 08:25:55.138: INFO: Pod "azuredisk-volume-tester-n5m2q": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.121132512s Jan 29 08:25:57.136: INFO: Pod "azuredisk-volume-tester-n5m2q": Phase="Pending", Reason="", readiness=false. Elapsed: 22.119244923s Jan 29 08:25:59.137: INFO: Pod "azuredisk-volume-tester-n5m2q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.120419117s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 08:25:59.137[0m Jan 29 08:25:59.138: INFO: Pod "azuredisk-volume-tester-n5m2q" satisfied condition "Succeeded or Failed" [1mSTEP:[0m sleep 5s and then clone volume [38;5;243m01/29/23 08:25:59.138[0m [1mSTEP:[0m cloning existing volume [38;5;243m01/29/23 08:26:04.138[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:26:04.257[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:26:04.257[0m [1mSTEP:[0m setting up the pod [38;5;243m01/29/23 08:26:04.321[0m [1mSTEP:[0m deploying a second pod with cloned volume [38;5;243m01/29/23 08:26:04.321[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:26:04.384[0m Jan 29 08:26:04.384: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-5hw2b" in namespace "azuredisk-9336" to be "Succeeded or Failed" Jan 29 08:26:04.443: INFO: Pod "azuredisk-volume-tester-5hw2b": Phase="Pending", Reason="", readiness=false. Elapsed: 58.693432ms Jan 29 08:26:06.503: INFO: Pod "azuredisk-volume-tester-5hw2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119463574s Jan 29 08:26:08.503: INFO: Pod "azuredisk-volume-tester-5hw2b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119457195s Jan 29 08:26:10.504: INFO: Pod "azuredisk-volume-tester-5hw2b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120136139s Jan 29 08:26:12.502: INFO: Pod "azuredisk-volume-tester-5hw2b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.118440355s Jan 29 08:26:14.503: INFO: Pod "azuredisk-volume-tester-5hw2b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.11903061s ... skipping 10 lines ... Jan 29 08:26:36.505: INFO: Pod "azuredisk-volume-tester-5hw2b": Phase="Pending", Reason="", readiness=false. Elapsed: 32.120621456s Jan 29 08:26:38.505: INFO: Pod "azuredisk-volume-tester-5hw2b": Phase="Pending", Reason="", readiness=false. Elapsed: 34.121089975s Jan 29 08:26:40.503: INFO: Pod "azuredisk-volume-tester-5hw2b": Phase="Pending", Reason="", readiness=false. Elapsed: 36.119537654s Jan 29 08:26:42.514: INFO: Pod "azuredisk-volume-tester-5hw2b": Phase="Pending", Reason="", readiness=false. Elapsed: 38.129631207s Jan 29 08:26:44.504: INFO: Pod "azuredisk-volume-tester-5hw2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.1204697s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 08:26:44.504[0m Jan 29 08:26:44.505: INFO: Pod "azuredisk-volume-tester-5hw2b" satisfied condition "Succeeded or Failed" Jan 29 08:26:44.505: INFO: deleting Pod "azuredisk-9336"/"azuredisk-volume-tester-5hw2b" Jan 29 08:26:44.574: INFO: Pod azuredisk-volume-tester-5hw2b has the following logs: 20.0G [1mSTEP:[0m Deleting pod azuredisk-volume-tester-5hw2b in namespace azuredisk-9336 [38;5;243m01/29/23 08:26:44.575[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/29/23 08:26:44.703[0m [1mSTEP:[0m checking the PV [38;5;243m01/29/23 08:26:44.762[0m ... skipping 47 lines ... 
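Editor's note: in the spec above the second pod's only log line is "20.0G", i.e. the test reads back the size of the filesystem on the cloned disk rather than file contents. A sketch of such a tester pod follows; the mount path and exact command are assumptions (the suite builds its pods programmatically), while the busybox image matches the one named later in this log.

# Illustrative tester pod: prints the size of the mounted filesystem once and exits.
apiVersion: v1
kind: Pod
metadata:
  name: azuredisk-volume-tester
spec:
  restartPolicy: Never
  containers:
    - name: volume-tester
      image: registry.k8s.io/e2e-test-images/busybox:1.29-4
      command: ["sh", "-c", "df -h /mnt/test-1 | awk 'NR==2 {print $2}'"]
      volumeMounts:
        - name: test-volume
          mountPath: /mnt/test-1
  volumes:
    - name: test-volume
      persistentVolumeClaim:
        claimName: pvc-clone    # the cloned PVC from the previous sketch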
[1mSTEP:[0m setting up the StorageClass [38;5;243m01/29/23 08:25:34.829[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 08:25:34.829[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:25:34.891[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:25:34.891[0m [1mSTEP:[0m setting up the pod [38;5;243m01/29/23 08:25:34.953[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 08:25:34.953[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:25:35.017[0m Jan 29 08:25:35.017: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-n5m2q" in namespace "azuredisk-9336" to be "Succeeded or Failed" Jan 29 08:25:35.076: INFO: Pod "azuredisk-volume-tester-n5m2q": Phase="Pending", Reason="", readiness=false. Elapsed: 59.468945ms Jan 29 08:25:37.136: INFO: Pod "azuredisk-volume-tester-n5m2q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119098718s Jan 29 08:25:39.137: INFO: Pod "azuredisk-volume-tester-n5m2q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120006076s Jan 29 08:25:41.137: INFO: Pod "azuredisk-volume-tester-n5m2q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120063732s Jan 29 08:25:43.137: INFO: Pod "azuredisk-volume-tester-n5m2q": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120298847s Jan 29 08:25:45.137: INFO: Pod "azuredisk-volume-tester-n5m2q": Phase="Pending", Reason="", readiness=false. Elapsed: 10.120432858s ... skipping 2 lines ... Jan 29 08:25:51.136: INFO: Pod "azuredisk-volume-tester-n5m2q": Phase="Pending", Reason="", readiness=false. Elapsed: 16.119115175s Jan 29 08:25:53.137: INFO: Pod "azuredisk-volume-tester-n5m2q": Phase="Pending", Reason="", readiness=false. Elapsed: 18.120460005s Jan 29 08:25:55.138: INFO: Pod "azuredisk-volume-tester-n5m2q": Phase="Pending", Reason="", readiness=false. Elapsed: 20.121132512s Jan 29 08:25:57.136: INFO: Pod "azuredisk-volume-tester-n5m2q": Phase="Pending", Reason="", readiness=false. Elapsed: 22.119244923s Jan 29 08:25:59.137: INFO: Pod "azuredisk-volume-tester-n5m2q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.120419117s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 08:25:59.137[0m Jan 29 08:25:59.138: INFO: Pod "azuredisk-volume-tester-n5m2q" satisfied condition "Succeeded or Failed" [1mSTEP:[0m sleep 5s and then clone volume [38;5;243m01/29/23 08:25:59.138[0m [1mSTEP:[0m cloning existing volume [38;5;243m01/29/23 08:26:04.138[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:26:04.257[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:26:04.257[0m [1mSTEP:[0m setting up the pod [38;5;243m01/29/23 08:26:04.321[0m [1mSTEP:[0m deploying a second pod with cloned volume [38;5;243m01/29/23 08:26:04.321[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:26:04.384[0m Jan 29 08:26:04.384: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-5hw2b" in namespace "azuredisk-9336" to be "Succeeded or Failed" Jan 29 08:26:04.443: INFO: Pod "azuredisk-volume-tester-5hw2b": Phase="Pending", Reason="", readiness=false. Elapsed: 58.693432ms Jan 29 08:26:06.503: INFO: Pod "azuredisk-volume-tester-5hw2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119463574s Jan 29 08:26:08.503: INFO: Pod "azuredisk-volume-tester-5hw2b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119457195s Jan 29 08:26:10.504: INFO: Pod "azuredisk-volume-tester-5hw2b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.120136139s Jan 29 08:26:12.502: INFO: Pod "azuredisk-volume-tester-5hw2b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.118440355s Jan 29 08:26:14.503: INFO: Pod "azuredisk-volume-tester-5hw2b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.11903061s ... skipping 10 lines ... Jan 29 08:26:36.505: INFO: Pod "azuredisk-volume-tester-5hw2b": Phase="Pending", Reason="", readiness=false. Elapsed: 32.120621456s Jan 29 08:26:38.505: INFO: Pod "azuredisk-volume-tester-5hw2b": Phase="Pending", Reason="", readiness=false. Elapsed: 34.121089975s Jan 29 08:26:40.503: INFO: Pod "azuredisk-volume-tester-5hw2b": Phase="Pending", Reason="", readiness=false. Elapsed: 36.119537654s Jan 29 08:26:42.514: INFO: Pod "azuredisk-volume-tester-5hw2b": Phase="Pending", Reason="", readiness=false. Elapsed: 38.129631207s Jan 29 08:26:44.504: INFO: Pod "azuredisk-volume-tester-5hw2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.1204697s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 08:26:44.504[0m Jan 29 08:26:44.505: INFO: Pod "azuredisk-volume-tester-5hw2b" satisfied condition "Succeeded or Failed" Jan 29 08:26:44.505: INFO: deleting Pod "azuredisk-9336"/"azuredisk-volume-tester-5hw2b" Jan 29 08:26:44.574: INFO: Pod azuredisk-volume-tester-5hw2b has the following logs: 20.0G [1mSTEP:[0m Deleting pod azuredisk-volume-tester-5hw2b in namespace azuredisk-9336 [38;5;243m01/29/23 08:26:44.575[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/29/23 08:26:44.703[0m [1mSTEP:[0m checking the PV [38;5;243m01/29/23 08:26:44.762[0m ... skipping 56 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/29/23 08:27:37.294[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 08:27:37.294[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:27:37.354[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:27:37.354[0m [1mSTEP:[0m setting up the pod [38;5;243m01/29/23 08:27:37.418[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 08:27:37.418[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:27:37.48[0m Jan 29 08:27:37.480: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-w67nn" in namespace "azuredisk-2205" to be "Succeeded or Failed" Jan 29 08:27:37.539: INFO: Pod "azuredisk-volume-tester-w67nn": Phase="Pending", Reason="", readiness=false. Elapsed: 58.558446ms Jan 29 08:27:39.599: INFO: Pod "azuredisk-volume-tester-w67nn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119293994s Jan 29 08:27:41.599: INFO: Pod "azuredisk-volume-tester-w67nn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118486706s Jan 29 08:27:43.599: INFO: Pod "azuredisk-volume-tester-w67nn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118515243s Jan 29 08:27:45.600: INFO: Pod "azuredisk-volume-tester-w67nn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.119793997s Jan 29 08:27:47.600: INFO: Pod "azuredisk-volume-tester-w67nn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.119509962s ... skipping 26 lines ... Jan 29 08:28:41.606: INFO: Pod "azuredisk-volume-tester-w67nn": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.126269292s Jan 29 08:28:43.598: INFO: Pod "azuredisk-volume-tester-w67nn": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.118353788s Jan 29 08:28:45.600: INFO: Pod "azuredisk-volume-tester-w67nn": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m8.120104453s Jan 29 08:28:47.599: INFO: Pod "azuredisk-volume-tester-w67nn": Phase="Running", Reason="", readiness=true. Elapsed: 1m10.118673092s Jan 29 08:28:49.599: INFO: Pod "azuredisk-volume-tester-w67nn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m12.118992338s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 08:28:49.599[0m Jan 29 08:28:49.599: INFO: Pod "azuredisk-volume-tester-w67nn" satisfied condition "Succeeded or Failed" Jan 29 08:28:49.599: INFO: deleting Pod "azuredisk-2205"/"azuredisk-volume-tester-w67nn" Jan 29 08:28:49.662: INFO: Pod azuredisk-volume-tester-w67nn has the following logs: hello world hello world hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-w67nn in namespace azuredisk-2205 [38;5;243m01/29/23 08:28:49.662[0m ... skipping 69 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/29/23 08:27:37.294[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 08:27:37.294[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:27:37.354[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:27:37.354[0m [1mSTEP:[0m setting up the pod [38;5;243m01/29/23 08:27:37.418[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 08:27:37.418[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:27:37.48[0m Jan 29 08:27:37.480: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-w67nn" in namespace "azuredisk-2205" to be "Succeeded or Failed" Jan 29 08:27:37.539: INFO: Pod "azuredisk-volume-tester-w67nn": Phase="Pending", Reason="", readiness=false. Elapsed: 58.558446ms Jan 29 08:27:39.599: INFO: Pod "azuredisk-volume-tester-w67nn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119293994s Jan 29 08:27:41.599: INFO: Pod "azuredisk-volume-tester-w67nn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118486706s Jan 29 08:27:43.599: INFO: Pod "azuredisk-volume-tester-w67nn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118515243s Jan 29 08:27:45.600: INFO: Pod "azuredisk-volume-tester-w67nn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.119793997s Jan 29 08:27:47.600: INFO: Pod "azuredisk-volume-tester-w67nn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.119509962s ... skipping 26 lines ... Jan 29 08:28:41.606: INFO: Pod "azuredisk-volume-tester-w67nn": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.126269292s Jan 29 08:28:43.598: INFO: Pod "azuredisk-volume-tester-w67nn": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.118353788s Jan 29 08:28:45.600: INFO: Pod "azuredisk-volume-tester-w67nn": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.120104453s Jan 29 08:28:47.599: INFO: Pod "azuredisk-volume-tester-w67nn": Phase="Running", Reason="", readiness=true. Elapsed: 1m10.118673092s Jan 29 08:28:49.599: INFO: Pod "azuredisk-volume-tester-w67nn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m12.118992338s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 08:28:49.599[0m Jan 29 08:28:49.599: INFO: Pod "azuredisk-volume-tester-w67nn" satisfied condition "Succeeded or Failed" Jan 29 08:28:49.599: INFO: deleting Pod "azuredisk-2205"/"azuredisk-volume-tester-w67nn" Jan 29 08:28:49.662: INFO: Pod azuredisk-volume-tester-w67nn has the following logs: hello world hello world hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-w67nn in namespace azuredisk-2205 [38;5;243m01/29/23 08:28:49.662[0m ... skipping 63 lines ... 
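Editor's note: the three "hello world" lines in this pod's log suggest the spec writes the same marker to several dynamically provisioned volumes mounted into one pod (it could equally be repeated writes to a single volume; the suite's source is authoritative). A heavily hedged sketch, with volume count, PVC names, and mount paths all assumed:

# Illustrative pod mounting several PVCs and echoing a marker to each.
apiVersion: v1
kind: Pod
metadata:
  name: azuredisk-volume-tester
spec:
  restartPolicy: Never
  containers:
    - name: volume-tester
      image: registry.k8s.io/e2e-test-images/busybox:1.29-4
      command:
        - sh
        - -c
        - |
          # write and read the marker on every mounted volume
          for d in /mnt/test-1 /mnt/test-2 /mnt/test-3; do
            echo 'hello world' > "$d/data" && cat "$d/data"
          done
      volumeMounts:
        - { name: vol-1, mountPath: /mnt/test-1 }
        - { name: vol-2, mountPath: /mnt/test-2 }
        - { name: vol-3, mountPath: /mnt/test-3 }
  volumes:
    - name: vol-1
      persistentVolumeClaim: { claimName: pvc-1 }
    - name: vol-2
      persistentVolumeClaim: { claimName: pvc-2 }
    - name: vol-3
      persistentVolumeClaim: { claimName: pvc-3 }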
[1mSTEP:[0m setting up the StorageClass [38;5;243m01/29/23 08:29:52.63[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 08:29:52.63[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:29:52.692[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:29:52.692[0m [1mSTEP:[0m setting up the pod [38;5;243m01/29/23 08:29:52.75[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 08:29:52.751[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:29:52.81[0m Jan 29 08:29:52.811: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-mgdtt" in namespace "azuredisk-8010" to be "Succeeded or Failed" Jan 29 08:29:52.869: INFO: Pod "azuredisk-volume-tester-mgdtt": Phase="Pending", Reason="", readiness=false. Elapsed: 58.081923ms Jan 29 08:29:54.931: INFO: Pod "azuredisk-volume-tester-mgdtt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120155482s Jan 29 08:29:56.928: INFO: Pod "azuredisk-volume-tester-mgdtt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117025588s Jan 29 08:29:58.927: INFO: Pod "azuredisk-volume-tester-mgdtt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116780481s Jan 29 08:30:00.929: INFO: Pod "azuredisk-volume-tester-mgdtt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.118192602s Jan 29 08:30:02.930: INFO: Pod "azuredisk-volume-tester-mgdtt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.119590086s ... skipping 26 lines ... Jan 29 08:30:56.928: INFO: Pod "azuredisk-volume-tester-mgdtt": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.117635793s Jan 29 08:30:58.928: INFO: Pod "azuredisk-volume-tester-mgdtt": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.117472882s Jan 29 08:31:00.928: INFO: Pod "azuredisk-volume-tester-mgdtt": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.117443637s Jan 29 08:31:02.929: INFO: Pod "azuredisk-volume-tester-mgdtt": Phase="Running", Reason="", readiness=true. Elapsed: 1m10.118556553s Jan 29 08:31:04.936: INFO: Pod "azuredisk-volume-tester-mgdtt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m12.12572357s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 08:31:04.936[0m Jan 29 08:31:04.937: INFO: Pod "azuredisk-volume-tester-mgdtt" satisfied condition "Succeeded or Failed" Jan 29 08:31:04.937: INFO: deleting Pod "azuredisk-8010"/"azuredisk-volume-tester-mgdtt" Jan 29 08:31:05.037: INFO: Pod azuredisk-volume-tester-mgdtt has the following logs: 100+0 records in 100+0 records out 104857600 bytes (100.0MB) copied, 0.081217 seconds, 1.2GB/s hello world ... skipping 53 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/29/23 08:29:52.63[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 08:29:52.63[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:29:52.692[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:29:52.692[0m [1mSTEP:[0m setting up the pod [38;5;243m01/29/23 08:29:52.75[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 08:29:52.751[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:29:52.81[0m Jan 29 08:29:52.811: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-mgdtt" in namespace "azuredisk-8010" to be "Succeeded or Failed" Jan 29 08:29:52.869: INFO: Pod "azuredisk-volume-tester-mgdtt": Phase="Pending", Reason="", readiness=false. Elapsed: 58.081923ms Jan 29 08:29:54.931: INFO: Pod "azuredisk-volume-tester-mgdtt": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.120155482s Jan 29 08:29:56.928: INFO: Pod "azuredisk-volume-tester-mgdtt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117025588s Jan 29 08:29:58.927: INFO: Pod "azuredisk-volume-tester-mgdtt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116780481s Jan 29 08:30:00.929: INFO: Pod "azuredisk-volume-tester-mgdtt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.118192602s Jan 29 08:30:02.930: INFO: Pod "azuredisk-volume-tester-mgdtt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.119590086s ... skipping 26 lines ... Jan 29 08:30:56.928: INFO: Pod "azuredisk-volume-tester-mgdtt": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.117635793s Jan 29 08:30:58.928: INFO: Pod "azuredisk-volume-tester-mgdtt": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.117472882s Jan 29 08:31:00.928: INFO: Pod "azuredisk-volume-tester-mgdtt": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.117443637s Jan 29 08:31:02.929: INFO: Pod "azuredisk-volume-tester-mgdtt": Phase="Running", Reason="", readiness=true. Elapsed: 1m10.118556553s Jan 29 08:31:04.936: INFO: Pod "azuredisk-volume-tester-mgdtt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m12.12572357s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 08:31:04.936[0m Jan 29 08:31:04.937: INFO: Pod "azuredisk-volume-tester-mgdtt" satisfied condition "Succeeded or Failed" Jan 29 08:31:04.937: INFO: deleting Pod "azuredisk-8010"/"azuredisk-volume-tester-mgdtt" Jan 29 08:31:05.037: INFO: Pod azuredisk-volume-tester-mgdtt has the following logs: 100+0 records in 100+0 records out 104857600 bytes (100.0MB) copied, 0.081217 seconds, 1.2GB/s hello world ... skipping 46 lines ... Jan 29 08:31:57.403: INFO: >>> kubeConfig: /root/tmp2938072239/kubeconfig/kubeconfig.westus2.json [1mSTEP:[0m setting up the StorageClass [38;5;243m01/29/23 08:31:57.404[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 08:31:57.405[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:31:57.468[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:31:57.468[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 08:31:57.53[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:31:57.589[0m Jan 29 08:31:57.589: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-xg8vj" in namespace "azuredisk-8591" to be "Succeeded or Failed" Jan 29 08:31:57.648: INFO: Pod "azuredisk-volume-tester-xg8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 58.685124ms Jan 29 08:31:59.707: INFO: Pod "azuredisk-volume-tester-xg8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11770745s Jan 29 08:32:01.710: INFO: Pod "azuredisk-volume-tester-xg8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120869559s Jan 29 08:32:03.707: INFO: Pod "azuredisk-volume-tester-xg8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117140008s Jan 29 08:32:05.707: INFO: Pod "azuredisk-volume-tester-xg8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117768803s Jan 29 08:32:07.708: INFO: Pod "azuredisk-volume-tester-xg8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.118254596s ... skipping 26 lines ... Jan 29 08:33:01.708: INFO: Pod "azuredisk-volume-tester-xg8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.118604726s Jan 29 08:33:03.708: INFO: Pod "azuredisk-volume-tester-xg8vj": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m6.118063086s Jan 29 08:33:05.709: INFO: Pod "azuredisk-volume-tester-xg8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.119910606s Jan 29 08:33:07.708: INFO: Pod "azuredisk-volume-tester-xg8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.118451984s Jan 29 08:33:09.707: INFO: Pod "azuredisk-volume-tester-xg8vj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m12.117605801s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 08:33:09.707[0m Jan 29 08:33:09.707: INFO: Pod "azuredisk-volume-tester-xg8vj" satisfied condition "Succeeded or Failed" [1mSTEP:[0m Checking Prow test resource group [38;5;243m01/29/23 08:33:09.707[0m 2023/01/29 08:33:09 Running in Prow, converting AZURE_CREDENTIALS to AZURE_CREDENTIAL_FILE 2023/01/29 08:33:09 Reading credentials file /etc/azure-cred/credentials [1mSTEP:[0m Prow test resource group: kubetest-biyqdrb7 [38;5;243m01/29/23 08:33:09.708[0m [1mSTEP:[0m Creating external resource group: azuredisk-csi-driver-test-8faf842b-9faf-11ed-843a-6e0650d04a6b [38;5;243m01/29/23 08:33:09.709[0m [1mSTEP:[0m creating volume snapshot class with external rg azuredisk-csi-driver-test-8faf842b-9faf-11ed-843a-6e0650d04a6b [38;5;243m01/29/23 08:33:10.576[0m ... skipping 5 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/29/23 08:33:25.764[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 08:33:25.764[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:33:25.824[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:33:25.824[0m [1mSTEP:[0m setting up the pod [38;5;243m01/29/23 08:33:25.894[0m [1mSTEP:[0m deploying a pod with a volume restored from the snapshot [38;5;243m01/29/23 08:33:25.894[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:33:25.956[0m Jan 29 08:33:25.956: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-zp566" in namespace "azuredisk-8591" to be "Succeeded or Failed" Jan 29 08:33:26.014: INFO: Pod "azuredisk-volume-tester-zp566": Phase="Pending", Reason="", readiness=false. Elapsed: 57.395101ms Jan 29 08:33:28.073: INFO: Pod "azuredisk-volume-tester-zp566": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116954439s Jan 29 08:33:30.073: INFO: Pod "azuredisk-volume-tester-zp566": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116573048s Jan 29 08:33:32.073: INFO: Pod "azuredisk-volume-tester-zp566": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116559069s Jan 29 08:33:34.072: INFO: Pod "azuredisk-volume-tester-zp566": Phase="Pending", Reason="", readiness=false. Elapsed: 8.115833569s Jan 29 08:33:36.073: INFO: Pod "azuredisk-volume-tester-zp566": Phase="Pending", Reason="", readiness=false. Elapsed: 10.116913036s Jan 29 08:33:38.073: INFO: Pod "azuredisk-volume-tester-zp566": Phase="Pending", Reason="", readiness=false. Elapsed: 12.116499889s Jan 29 08:33:40.074: INFO: Pod "azuredisk-volume-tester-zp566": Phase="Pending", Reason="", readiness=false. Elapsed: 14.118018108s Jan 29 08:33:42.074: INFO: Pod "azuredisk-volume-tester-zp566": Phase="Pending", Reason="", readiness=false. Elapsed: 16.118204114s Jan 29 08:33:44.073: INFO: Pod "azuredisk-volume-tester-zp566": Phase="Pending", Reason="", readiness=false. Elapsed: 18.116450593s Jan 29 08:33:46.072: INFO: Pod "azuredisk-volume-tester-zp566": Phase="Pending", Reason="", readiness=false. Elapsed: 20.116142904s Jan 29 08:33:48.073: INFO: Pod "azuredisk-volume-tester-zp566": Phase="Failed", Reason="", readiness=false. 
Elapsed: 22.116453442s Jan 29 08:33:48.073: INFO: Unexpected error: <*fmt.wrapError | 0xc000e36700>: { msg: "error while waiting for pod azuredisk-8591/azuredisk-volume-tester-zp566 to be Succeeded or Failed: pod \"azuredisk-volume-tester-zp566\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:33:28 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:33:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:33:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:33:28 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.4 PodIP:10.248.0.19 PodIPs:[{IP:10.248.0.19}] StartTime:2023-01-29 08:33:28 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-29 08:33:47 +0000 UTC,FinishedAt:2023-01-29 08:33:47 +0000 UTC,ContainerID:containerd://4fb92565c4dd25db7506b0882bd2e73c942ad858456416a4fa97e9f9270a5e14,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 ContainerID:containerd://4fb92565c4dd25db7506b0882bd2e73c942ad858456416a4fa97e9f9270a5e14 Started:0xc00050d73f}] QOSClass:BestEffort EphemeralContainerStatuses:[]}", err: <*errors.errorString | 0xc000347b40>{ s: "pod \"azuredisk-volume-tester-zp566\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:33:28 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:33:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:33:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:33:28 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.4 PodIP:10.248.0.19 PodIPs:[{IP:10.248.0.19}] StartTime:2023-01-29 08:33:28 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-29 08:33:47 +0000 UTC,FinishedAt:2023-01-29 08:33:47 +0000 UTC,ContainerID:containerd://4fb92565c4dd25db7506b0882bd2e73c942ad858456416a4fa97e9f9270a5e14,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
ContainerID:containerd://4fb92565c4dd25db7506b0882bd2e73c942ad858456416a4fa97e9f9270a5e14 Started:0xc00050d73f}] QOSClass:BestEffort EphemeralContainerStatuses:[]}", }, } Jan 29 08:33:48.073: FAIL: error while waiting for pod azuredisk-8591/azuredisk-volume-tester-zp566 to be Succeeded or Failed: pod "azuredisk-volume-tester-zp566" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:33:28 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:33:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:33:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:33:28 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.4 PodIP:10.248.0.19 PodIPs:[{IP:10.248.0.19}] StartTime:2023-01-29 08:33:28 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-29 08:33:47 +0000 UTC,FinishedAt:2023-01-29 08:33:47 +0000 UTC,ContainerID:containerd://4fb92565c4dd25db7506b0882bd2e73c942ad858456416a4fa97e9f9270a5e14,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 ContainerID:containerd://4fb92565c4dd25db7506b0882bd2e73c942ad858456416a4fa97e9f9270a5e14 Started:0xc00050d73f}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Full Stack Trace sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*TestPod).WaitForSuccess(0x2253857?) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:823 +0x5d sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*DynamicallyProvisionedVolumeSnapshotTest).Run(0xc000e3fd78, {0x270dda0, 0xc000c4fba0}, {0x26f8fa0, 0xc0006f1cc0}, 0xc000b50c60?) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/dynamically_provisioned_volume_snapshot_tester.go:142 +0x1358 ... skipping 39 lines ... Jan 29 08:35:56.055: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8591 to be removed Jan 29 08:35:56.112: INFO: Claim "azuredisk-8591" in namespace "pvc-wnrpv" doesn't exist in the system Jan 29 08:35:56.112: INFO: deleting StorageClass azuredisk-8591-disk.csi.azure.com-dynamic-sc-24788 [1mSTEP:[0m dump namespace information after failure [38;5;243m01/29/23 08:35:56.174[0m [1mSTEP:[0m Destroying namespace "azuredisk-8591" for this suite. 
[38;5;243m01/29/23 08:35:56.174[0m [38;5;243m------------------------------[0m [38;5;9m• [FAILED] [239.750 seconds][0m Dynamic Provisioning [38;5;243m/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:41[0m [multi-az] [38;5;243m/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:48[0m [38;5;9m[1m[It] should create a pod, write and read to it, take a volume snapshot, and create another pod from the snapshot [disk.csi.azure.com][0m [38;5;243m/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:707[0m ... skipping 7 lines ... Jan 29 08:31:57.403: INFO: >>> kubeConfig: /root/tmp2938072239/kubeconfig/kubeconfig.westus2.json [1mSTEP:[0m setting up the StorageClass [38;5;243m01/29/23 08:31:57.404[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 08:31:57.405[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:31:57.468[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:31:57.468[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 08:31:57.53[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:31:57.589[0m Jan 29 08:31:57.589: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-xg8vj" in namespace "azuredisk-8591" to be "Succeeded or Failed" Jan 29 08:31:57.648: INFO: Pod "azuredisk-volume-tester-xg8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 58.685124ms Jan 29 08:31:59.707: INFO: Pod "azuredisk-volume-tester-xg8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11770745s Jan 29 08:32:01.710: INFO: Pod "azuredisk-volume-tester-xg8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120869559s Jan 29 08:32:03.707: INFO: Pod "azuredisk-volume-tester-xg8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117140008s Jan 29 08:32:05.707: INFO: Pod "azuredisk-volume-tester-xg8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117768803s Jan 29 08:32:07.708: INFO: Pod "azuredisk-volume-tester-xg8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.118254596s ... skipping 26 lines ... Jan 29 08:33:01.708: INFO: Pod "azuredisk-volume-tester-xg8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.118604726s Jan 29 08:33:03.708: INFO: Pod "azuredisk-volume-tester-xg8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.118063086s Jan 29 08:33:05.709: INFO: Pod "azuredisk-volume-tester-xg8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.119910606s Jan 29 08:33:07.708: INFO: Pod "azuredisk-volume-tester-xg8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.118451984s Jan 29 08:33:09.707: INFO: Pod "azuredisk-volume-tester-xg8vj": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 1m12.117605801s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 08:33:09.707[0m Jan 29 08:33:09.707: INFO: Pod "azuredisk-volume-tester-xg8vj" satisfied condition "Succeeded or Failed" [1mSTEP:[0m Checking Prow test resource group [38;5;243m01/29/23 08:33:09.707[0m [1mSTEP:[0m Prow test resource group: kubetest-biyqdrb7 [38;5;243m01/29/23 08:33:09.708[0m [1mSTEP:[0m Creating external resource group: azuredisk-csi-driver-test-8faf842b-9faf-11ed-843a-6e0650d04a6b [38;5;243m01/29/23 08:33:09.709[0m [1mSTEP:[0m creating volume snapshot class with external rg azuredisk-csi-driver-test-8faf842b-9faf-11ed-843a-6e0650d04a6b [38;5;243m01/29/23 08:33:10.576[0m [1mSTEP:[0m setting up the VolumeSnapshotClass [38;5;243m01/29/23 08:33:10.577[0m [1mSTEP:[0m creating a VolumeSnapshotClass [38;5;243m01/29/23 08:33:10.577[0m ... skipping 3 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/29/23 08:33:25.764[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 08:33:25.764[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:33:25.824[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:33:25.824[0m [1mSTEP:[0m setting up the pod [38;5;243m01/29/23 08:33:25.894[0m [1mSTEP:[0m deploying a pod with a volume restored from the snapshot [38;5;243m01/29/23 08:33:25.894[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:33:25.956[0m Jan 29 08:33:25.956: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-zp566" in namespace "azuredisk-8591" to be "Succeeded or Failed" Jan 29 08:33:26.014: INFO: Pod "azuredisk-volume-tester-zp566": Phase="Pending", Reason="", readiness=false. Elapsed: 57.395101ms Jan 29 08:33:28.073: INFO: Pod "azuredisk-volume-tester-zp566": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116954439s Jan 29 08:33:30.073: INFO: Pod "azuredisk-volume-tester-zp566": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116573048s Jan 29 08:33:32.073: INFO: Pod "azuredisk-volume-tester-zp566": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116559069s Jan 29 08:33:34.072: INFO: Pod "azuredisk-volume-tester-zp566": Phase="Pending", Reason="", readiness=false. Elapsed: 8.115833569s Jan 29 08:33:36.073: INFO: Pod "azuredisk-volume-tester-zp566": Phase="Pending", Reason="", readiness=false. Elapsed: 10.116913036s Jan 29 08:33:38.073: INFO: Pod "azuredisk-volume-tester-zp566": Phase="Pending", Reason="", readiness=false. Elapsed: 12.116499889s Jan 29 08:33:40.074: INFO: Pod "azuredisk-volume-tester-zp566": Phase="Pending", Reason="", readiness=false. Elapsed: 14.118018108s Jan 29 08:33:42.074: INFO: Pod "azuredisk-volume-tester-zp566": Phase="Pending", Reason="", readiness=false. Elapsed: 16.118204114s Jan 29 08:33:44.073: INFO: Pod "azuredisk-volume-tester-zp566": Phase="Pending", Reason="", readiness=false. Elapsed: 18.116450593s Jan 29 08:33:46.072: INFO: Pod "azuredisk-volume-tester-zp566": Phase="Pending", Reason="", readiness=false. Elapsed: 20.116142904s Jan 29 08:33:48.073: INFO: Pod "azuredisk-volume-tester-zp566": Phase="Failed", Reason="", readiness=false. 
Elapsed: 22.116453442s Jan 29 08:33:48.073: INFO: Unexpected error: <*fmt.wrapError | 0xc000e36700>: { msg: "error while waiting for pod azuredisk-8591/azuredisk-volume-tester-zp566 to be Succeeded or Failed: pod \"azuredisk-volume-tester-zp566\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:33:28 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:33:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:33:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:33:28 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.4 PodIP:10.248.0.19 PodIPs:[{IP:10.248.0.19}] StartTime:2023-01-29 08:33:28 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-29 08:33:47 +0000 UTC,FinishedAt:2023-01-29 08:33:47 +0000 UTC,ContainerID:containerd://4fb92565c4dd25db7506b0882bd2e73c942ad858456416a4fa97e9f9270a5e14,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 ContainerID:containerd://4fb92565c4dd25db7506b0882bd2e73c942ad858456416a4fa97e9f9270a5e14 Started:0xc00050d73f}] QOSClass:BestEffort EphemeralContainerStatuses:[]}", err: <*errors.errorString | 0xc000347b40>{ s: "pod \"azuredisk-volume-tester-zp566\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:33:28 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:33:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:33:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:33:28 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.4 PodIP:10.248.0.19 PodIPs:[{IP:10.248.0.19}] StartTime:2023-01-29 08:33:28 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-29 08:33:47 +0000 UTC,FinishedAt:2023-01-29 08:33:47 +0000 UTC,ContainerID:containerd://4fb92565c4dd25db7506b0882bd2e73c942ad858456416a4fa97e9f9270a5e14,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
ContainerID:containerd://4fb92565c4dd25db7506b0882bd2e73c942ad858456416a4fa97e9f9270a5e14 Started:0xc00050d73f}] QOSClass:BestEffort EphemeralContainerStatuses:[]}", }, } Jan 29 08:33:48.073: FAIL: error while waiting for pod azuredisk-8591/azuredisk-volume-tester-zp566 to be Succeeded or Failed: pod "azuredisk-volume-tester-zp566" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:33:28 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:33:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:33:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:33:28 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.4 PodIP:10.248.0.19 PodIPs:[{IP:10.248.0.19}] StartTime:2023-01-29 08:33:28 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-29 08:33:47 +0000 UTC,FinishedAt:2023-01-29 08:33:47 +0000 UTC,ContainerID:containerd://4fb92565c4dd25db7506b0882bd2e73c942ad858456416a4fa97e9f9270a5e14,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 ContainerID:containerd://4fb92565c4dd25db7506b0882bd2e73c942ad858456416a4fa97e9f9270a5e14 Started:0xc00050d73f}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Full Stack Trace sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*TestPod).WaitForSuccess(0x2253857?) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:823 +0x5d sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*DynamicallyProvisionedVolumeSnapshotTest).Run(0xc000e3fd78, {0x270dda0, 0xc000c4fba0}, {0x26f8fa0, 0xc0006f1cc0}, 0xc000b50c60?) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/dynamically_provisioned_volume_snapshot_tester.go:142 +0x1358 ... skipping 40 lines ... Jan 29 08:35:56.112: INFO: Claim "azuredisk-8591" in namespace "pvc-wnrpv" doesn't exist in the system Jan 29 08:35:56.112: INFO: deleting StorageClass azuredisk-8591-disk.csi.azure.com-dynamic-sc-24788 [1mSTEP:[0m dump namespace information after failure [38;5;243m01/29/23 08:35:56.174[0m [1mSTEP:[0m Destroying namespace "azuredisk-8591" for this suite. 
[38;5;243m01/29/23 08:35:56.174[0m [38;5;243m<< End Captured GinkgoWriter Output[0m [38;5;9mJan 29 08:33:48.073: error while waiting for pod azuredisk-8591/azuredisk-volume-tester-zp566 to be Succeeded or Failed: pod "azuredisk-volume-tester-zp566" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:33:28 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:33:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:33:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:33:28 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.4 PodIP:10.248.0.19 PodIPs:[{IP:10.248.0.19}] StartTime:2023-01-29 08:33:28 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-29 08:33:47 +0000 UTC,FinishedAt:2023-01-29 08:33:47 +0000 UTC,ContainerID:containerd://4fb92565c4dd25db7506b0882bd2e73c942ad858456416a4fa97e9f9270a5e14,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 ContainerID:containerd://4fb92565c4dd25db7506b0882bd2e73c942ad858456416a4fa97e9f9270a5e14 Started:0xc00050d73f}] QOSClass:BestEffort EphemeralContainerStatuses:[]}[0m [38;5;9mIn [1m[It][0m[38;5;9m at: [1m/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:823[0m [1mThere were additional failures detected after the initial failure:[0m [38;5;13m[PANICKED][0m [38;5;13mTest Panicked[0m [38;5;13mIn [1m[DeferCleanup (Each)][0m[38;5;13m at: [1m/usr/local/go/src/runtime/panic.go:260[0m [38;5;13mruntime error: invalid memory address or nil pointer dereference[0m [38;5;13mFull Stack Trace[0m k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:274 +0x5c k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0000d03c0) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:271 +0x179 ... skipping 17 lines ... 
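Editor's note: the failing spec takes a VolumeSnapshot of a written volume and restores it into a new PVC for a second pod; that restored pod terminates with exit code 2, which fails the spec, and the subsequent nil-pointer panic in the framework's dumpNamespaceInfo during DeferCleanup is a separate cleanup-path issue. For orientation, a minimal sketch of the snapshot/restore resources involved follows; names and sizes are illustrative, and the CRDs are the external-snapshotter ones installed earlier in this log.

# Illustrative snapshot + restore; names and sizes are assumptions.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: azuredisk-snapshot
spec:
  volumeSnapshotClassName: csi-azuredisk-vsc
  source:
    persistentVolumeClaimName: pvc-source    # the volume the first pod wrote to
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-restored
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: disk.csi.azure.com-dynamic-sc
  dataSource:                                 # restore: the data source is the snapshot
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: azuredisk-snapshot
  resources:
    requests:
      storage: 10Gi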
Jan 29 08:35:57.148: INFO: >>> kubeConfig: /root/tmp2938072239/kubeconfig/kubeconfig.westus2.json [1mSTEP:[0m setting up the StorageClass [38;5;243m01/29/23 08:35:57.149[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 08:35:57.149[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:35:57.21[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:35:57.21[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 08:35:57.272[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:35:57.334[0m Jan 29 08:35:57.334: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-j4sz7" in namespace "azuredisk-5894" to be "Succeeded or Failed" Jan 29 08:35:57.392: INFO: Pod "azuredisk-volume-tester-j4sz7": Phase="Pending", Reason="", readiness=false. Elapsed: 58.06263ms Jan 29 08:35:59.451: INFO: Pod "azuredisk-volume-tester-j4sz7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117112825s Jan 29 08:36:01.450: INFO: Pod "azuredisk-volume-tester-j4sz7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116571812s Jan 29 08:36:03.450: INFO: Pod "azuredisk-volume-tester-j4sz7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116744008s Jan 29 08:36:05.451: INFO: Pod "azuredisk-volume-tester-j4sz7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117559971s Jan 29 08:36:07.452: INFO: Pod "azuredisk-volume-tester-j4sz7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.118353364s ... skipping 2 lines ... Jan 29 08:36:13.452: INFO: Pod "azuredisk-volume-tester-j4sz7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.117836585s Jan 29 08:36:15.451: INFO: Pod "azuredisk-volume-tester-j4sz7": Phase="Pending", Reason="", readiness=false. Elapsed: 18.117548435s Jan 29 08:36:17.450: INFO: Pod "azuredisk-volume-tester-j4sz7": Phase="Pending", Reason="", readiness=false. Elapsed: 20.116634936s Jan 29 08:36:19.452: INFO: Pod "azuredisk-volume-tester-j4sz7": Phase="Pending", Reason="", readiness=false. Elapsed: 22.118759548s Jan 29 08:36:21.453: INFO: Pod "azuredisk-volume-tester-j4sz7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.119281577s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 08:36:21.453[0m Jan 29 08:36:21.453: INFO: Pod "azuredisk-volume-tester-j4sz7" satisfied condition "Succeeded or Failed" [1mSTEP:[0m Checking Prow test resource group [38;5;243m01/29/23 08:36:21.453[0m 2023/01/29 08:36:21 Running in Prow, converting AZURE_CREDENTIALS to AZURE_CREDENTIAL_FILE 2023/01/29 08:36:21 Reading credentials file /etc/azure-cred/credentials [1mSTEP:[0m Prow test resource group: kubetest-biyqdrb7 [38;5;243m01/29/23 08:36:21.454[0m [1mSTEP:[0m Creating external resource group: azuredisk-csi-driver-test-01f9a369-9fb0-11ed-843a-6e0650d04a6b [38;5;243m01/29/23 08:36:21.455[0m [1mSTEP:[0m creating volume snapshot class with external rg azuredisk-csi-driver-test-01f9a369-9fb0-11ed-843a-6e0650d04a6b [38;5;243m01/29/23 08:36:22.258[0m ... skipping 12 lines ... 
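The repeated `Waiting up to 15m0s for pod ... to be "Succeeded or Failed"` / `Phase="Pending" ... Elapsed: ...` lines above come from a simple phase-polling loop: check roughly every two seconds until the pod reaches a terminal phase or the timeout expires. A minimal sketch of that wait pattern, with getPhase as a hypothetical stand-in for the real clientset lookup:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForPodSuccess polls a pod's phase until it is Succeeded or Failed,
// mirroring the repeated `Phase="Pending" ... Elapsed: ...` lines above.
// getPhase is a hypothetical stand-in for the real clientset Get call.
func waitForPodSuccess(getPhase func() (string, error), timeout, interval time.Duration) error {
	start := time.Now()
	for {
		phase, err := getPhase()
		if err != nil {
			return err
		}
		elapsed := time.Since(start)
		fmt.Printf("Pod phase=%q, elapsed=%s\n", phase, elapsed)
		switch phase {
		case "Succeeded":
			return nil
		case "Failed":
			return fmt.Errorf("pod failed after %s", elapsed)
		}
		if elapsed > timeout {
			return errors.New("timed out waiting for pod to be Succeeded or Failed")
		}
		time.Sleep(interval)
	}
}

func main() {
	phases := []string{"Pending", "Pending", "Succeeded"}
	i := 0
	getPhase := func() (string, error) {
		p := phases[i]
		if i < len(phases)-1 {
			i++
		}
		return p, nil
	}
	if err := waitForPodSuccess(getPhase, 15*time.Minute, 2*time.Second); err != nil {
		fmt.Println("error:", err)
	}
}
```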
[1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 08:36:39.624[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:36:39.685[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:36:39.685[0m [1mSTEP:[0m setting up the pod [38;5;243m01/29/23 08:36:39.748[0m [1mSTEP:[0m Set pod anti-affinity to make sure two pods are scheduled on different nodes [38;5;243m01/29/23 08:36:39.748[0m [1mSTEP:[0m deploying a pod with a volume restored from the snapshot [38;5;243m01/29/23 08:36:39.748[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:36:39.808[0m Jan 29 08:36:39.808: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-7rxsc" in namespace "azuredisk-5894" to be "Succeeded or Failed" Jan 29 08:36:39.866: INFO: Pod "azuredisk-volume-tester-7rxsc": Phase="Pending", Reason="", readiness=false. Elapsed: 57.589007ms Jan 29 08:36:41.925: INFO: Pod "azuredisk-volume-tester-7rxsc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116803034s Jan 29 08:36:43.926: INFO: Pod "azuredisk-volume-tester-7rxsc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117593834s Jan 29 08:36:45.927: INFO: Pod "azuredisk-volume-tester-7rxsc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118993677s Jan 29 08:36:47.925: INFO: Pod "azuredisk-volume-tester-7rxsc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117212935s Jan 29 08:36:49.926: INFO: Pod "azuredisk-volume-tester-7rxsc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.118409444s Jan 29 08:36:51.925: INFO: Pod "azuredisk-volume-tester-7rxsc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.116664724s Jan 29 08:36:53.927: INFO: Pod "azuredisk-volume-tester-7rxsc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.11896364s Jan 29 08:36:55.932: INFO: Pod "azuredisk-volume-tester-7rxsc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.124426226s Jan 29 08:36:57.927: INFO: Pod "azuredisk-volume-tester-7rxsc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.119233561s Jan 29 08:36:59.925: INFO: Pod "azuredisk-volume-tester-7rxsc": Phase="Pending", Reason="", readiness=false. Elapsed: 20.117490663s Jan 29 08:37:01.944: INFO: Pod "azuredisk-volume-tester-7rxsc": Phase="Pending", Reason="", readiness=false. Elapsed: 22.136291206s Jan 29 08:37:03.924: INFO: Pod "azuredisk-volume-tester-7rxsc": Phase="Failed", Reason="", readiness=false. 
Elapsed: 24.116463981s Jan 29 08:37:03.925: INFO: Unexpected error: <*fmt.wrapError | 0xc000a00a40>: { msg: "error while waiting for pod azuredisk-5894/azuredisk-volume-tester-7rxsc to be Succeeded or Failed: pod \"azuredisk-volume-tester-7rxsc\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:36:42 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:36:42 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:36:42 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:36:42 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.32 PodIP:10.248.0.48 PodIPs:[{IP:10.248.0.48}] StartTime:2023-01-29 08:36:42 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-29 08:37:01 +0000 UTC,FinishedAt:2023-01-29 08:37:01 +0000 UTC,ContainerID:containerd://a491140346c3f5f46ae2cb2571d3dcc35519b2df3a3a6368d7ed692fd1606787,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 ContainerID:containerd://a491140346c3f5f46ae2cb2571d3dcc35519b2df3a3a6368d7ed692fd1606787 Started:0xc000cef7a0}] QOSClass:BestEffort EphemeralContainerStatuses:[]}", err: <*errors.errorString | 0xc000c356a0>{ s: "pod \"azuredisk-volume-tester-7rxsc\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:36:42 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:36:42 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:36:42 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:36:42 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.32 PodIP:10.248.0.48 PodIPs:[{IP:10.248.0.48}] StartTime:2023-01-29 08:36:42 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-29 08:37:01 +0000 UTC,FinishedAt:2023-01-29 08:37:01 +0000 UTC,ContainerID:containerd://a491140346c3f5f46ae2cb2571d3dcc35519b2df3a3a6368d7ed692fd1606787,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
ContainerID:containerd://a491140346c3f5f46ae2cb2571d3dcc35519b2df3a3a6368d7ed692fd1606787 Started:0xc000cef7a0}] QOSClass:BestEffort EphemeralContainerStatuses:[]}", }, } Jan 29 08:37:03.925: FAIL: error while waiting for pod azuredisk-5894/azuredisk-volume-tester-7rxsc to be Succeeded or Failed: pod "azuredisk-volume-tester-7rxsc" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:36:42 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:36:42 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:36:42 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:36:42 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.32 PodIP:10.248.0.48 PodIPs:[{IP:10.248.0.48}] StartTime:2023-01-29 08:36:42 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-29 08:37:01 +0000 UTC,FinishedAt:2023-01-29 08:37:01 +0000 UTC,ContainerID:containerd://a491140346c3f5f46ae2cb2571d3dcc35519b2df3a3a6368d7ed692fd1606787,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 ContainerID:containerd://a491140346c3f5f46ae2cb2571d3dcc35519b2df3a3a6368d7ed692fd1606787 Started:0xc000cef7a0}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Full Stack Trace sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*TestPod).WaitForSuccess(0x2253857?) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:823 +0x5d sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*DynamicallyProvisionedVolumeSnapshotTest).Run(0xc000d2dd78, {0x270dda0, 0xc000a1a4e0}, {0x26f8fa0, 0xc000800140}, 0xc0000d7340?) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/dynamically_provisioned_volume_snapshot_tester.go:142 +0x1358 ... skipping 42 lines ... Jan 29 08:39:12.151: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5894 to be removed Jan 29 08:39:12.209: INFO: Claim "azuredisk-5894" in namespace "pvc-6jl4s" doesn't exist in the system Jan 29 08:39:12.209: INFO: deleting StorageClass azuredisk-5894-disk.csi.azure.com-dynamic-sc-g6wqc [1mSTEP:[0m dump namespace information after failure [38;5;243m01/29/23 08:39:12.27[0m [1mSTEP:[0m Destroying namespace "azuredisk-5894" for this suite. 
[38;5;243m01/29/23 08:39:12.271[0m [38;5;243m------------------------------[0m [38;5;9m• [FAILED] [196.097 seconds][0m Dynamic Provisioning [38;5;243m/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:41[0m [multi-az] [38;5;243m/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:48[0m [38;5;9m[1m[It] should create a pod, write to its pv, take a volume snapshot, overwrite data in original pv, create another pod from the snapshot, and read unaltered original data from original pv[disk.csi.azure.com][0m [38;5;243m/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:747[0m ... skipping 7 lines ... Jan 29 08:35:57.148: INFO: >>> kubeConfig: /root/tmp2938072239/kubeconfig/kubeconfig.westus2.json [1mSTEP:[0m setting up the StorageClass [38;5;243m01/29/23 08:35:57.149[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 08:35:57.149[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:35:57.21[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:35:57.21[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 08:35:57.272[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:35:57.334[0m Jan 29 08:35:57.334: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-j4sz7" in namespace "azuredisk-5894" to be "Succeeded or Failed" Jan 29 08:35:57.392: INFO: Pod "azuredisk-volume-tester-j4sz7": Phase="Pending", Reason="", readiness=false. Elapsed: 58.06263ms Jan 29 08:35:59.451: INFO: Pod "azuredisk-volume-tester-j4sz7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117112825s Jan 29 08:36:01.450: INFO: Pod "azuredisk-volume-tester-j4sz7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116571812s Jan 29 08:36:03.450: INFO: Pod "azuredisk-volume-tester-j4sz7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116744008s Jan 29 08:36:05.451: INFO: Pod "azuredisk-volume-tester-j4sz7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117559971s Jan 29 08:36:07.452: INFO: Pod "azuredisk-volume-tester-j4sz7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.118353364s ... skipping 2 lines ... Jan 29 08:36:13.452: INFO: Pod "azuredisk-volume-tester-j4sz7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.117836585s Jan 29 08:36:15.451: INFO: Pod "azuredisk-volume-tester-j4sz7": Phase="Pending", Reason="", readiness=false. Elapsed: 18.117548435s Jan 29 08:36:17.450: INFO: Pod "azuredisk-volume-tester-j4sz7": Phase="Pending", Reason="", readiness=false. Elapsed: 20.116634936s Jan 29 08:36:19.452: INFO: Pod "azuredisk-volume-tester-j4sz7": Phase="Pending", Reason="", readiness=false. Elapsed: 22.118759548s Jan 29 08:36:21.453: INFO: Pod "azuredisk-volume-tester-j4sz7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.119281577s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 08:36:21.453[0m Jan 29 08:36:21.453: INFO: Pod "azuredisk-volume-tester-j4sz7" satisfied condition "Succeeded or Failed" [1mSTEP:[0m Checking Prow test resource group [38;5;243m01/29/23 08:36:21.453[0m [1mSTEP:[0m Prow test resource group: kubetest-biyqdrb7 [38;5;243m01/29/23 08:36:21.454[0m [1mSTEP:[0m Creating external resource group: azuredisk-csi-driver-test-01f9a369-9fb0-11ed-843a-6e0650d04a6b [38;5;243m01/29/23 08:36:21.455[0m [1mSTEP:[0m creating volume snapshot class with external rg azuredisk-csi-driver-test-01f9a369-9fb0-11ed-843a-6e0650d04a6b [38;5;243m01/29/23 08:36:22.258[0m [1mSTEP:[0m setting up the VolumeSnapshotClass [38;5;243m01/29/23 08:36:22.258[0m [1mSTEP:[0m creating a VolumeSnapshotClass [38;5;243m01/29/23 08:36:22.258[0m ... skipping 10 lines ... [1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 08:36:39.624[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:36:39.685[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:36:39.685[0m [1mSTEP:[0m setting up the pod [38;5;243m01/29/23 08:36:39.748[0m [1mSTEP:[0m Set pod anti-affinity to make sure two pods are scheduled on different nodes [38;5;243m01/29/23 08:36:39.748[0m [1mSTEP:[0m deploying a pod with a volume restored from the snapshot [38;5;243m01/29/23 08:36:39.748[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:36:39.808[0m Jan 29 08:36:39.808: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-7rxsc" in namespace "azuredisk-5894" to be "Succeeded or Failed" Jan 29 08:36:39.866: INFO: Pod "azuredisk-volume-tester-7rxsc": Phase="Pending", Reason="", readiness=false. Elapsed: 57.589007ms Jan 29 08:36:41.925: INFO: Pod "azuredisk-volume-tester-7rxsc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116803034s Jan 29 08:36:43.926: INFO: Pod "azuredisk-volume-tester-7rxsc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117593834s Jan 29 08:36:45.927: INFO: Pod "azuredisk-volume-tester-7rxsc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118993677s Jan 29 08:36:47.925: INFO: Pod "azuredisk-volume-tester-7rxsc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117212935s Jan 29 08:36:49.926: INFO: Pod "azuredisk-volume-tester-7rxsc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.118409444s Jan 29 08:36:51.925: INFO: Pod "azuredisk-volume-tester-7rxsc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.116664724s Jan 29 08:36:53.927: INFO: Pod "azuredisk-volume-tester-7rxsc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.11896364s Jan 29 08:36:55.932: INFO: Pod "azuredisk-volume-tester-7rxsc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.124426226s Jan 29 08:36:57.927: INFO: Pod "azuredisk-volume-tester-7rxsc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.119233561s Jan 29 08:36:59.925: INFO: Pod "azuredisk-volume-tester-7rxsc": Phase="Pending", Reason="", readiness=false. Elapsed: 20.117490663s Jan 29 08:37:01.944: INFO: Pod "azuredisk-volume-tester-7rxsc": Phase="Pending", Reason="", readiness=false. Elapsed: 22.136291206s Jan 29 08:37:03.924: INFO: Pod "azuredisk-volume-tester-7rxsc": Phase="Failed", Reason="", readiness=false. 
Elapsed: 24.116463981s Jan 29 08:37:03.925: INFO: Unexpected error: <*fmt.wrapError | 0xc000a00a40>: { msg: "error while waiting for pod azuredisk-5894/azuredisk-volume-tester-7rxsc to be Succeeded or Failed: pod \"azuredisk-volume-tester-7rxsc\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:36:42 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:36:42 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:36:42 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:36:42 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.32 PodIP:10.248.0.48 PodIPs:[{IP:10.248.0.48}] StartTime:2023-01-29 08:36:42 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-29 08:37:01 +0000 UTC,FinishedAt:2023-01-29 08:37:01 +0000 UTC,ContainerID:containerd://a491140346c3f5f46ae2cb2571d3dcc35519b2df3a3a6368d7ed692fd1606787,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 ContainerID:containerd://a491140346c3f5f46ae2cb2571d3dcc35519b2df3a3a6368d7ed692fd1606787 Started:0xc000cef7a0}] QOSClass:BestEffort EphemeralContainerStatuses:[]}", err: <*errors.errorString | 0xc000c356a0>{ s: "pod \"azuredisk-volume-tester-7rxsc\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:36:42 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:36:42 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:36:42 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:36:42 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.32 PodIP:10.248.0.48 PodIPs:[{IP:10.248.0.48}] StartTime:2023-01-29 08:36:42 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-29 08:37:01 +0000 UTC,FinishedAt:2023-01-29 08:37:01 +0000 UTC,ContainerID:containerd://a491140346c3f5f46ae2cb2571d3dcc35519b2df3a3a6368d7ed692fd1606787,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
ContainerID:containerd://a491140346c3f5f46ae2cb2571d3dcc35519b2df3a3a6368d7ed692fd1606787 Started:0xc000cef7a0}] QOSClass:BestEffort EphemeralContainerStatuses:[]}", }, } Jan 29 08:37:03.925: FAIL: error while waiting for pod azuredisk-5894/azuredisk-volume-tester-7rxsc to be Succeeded or Failed: pod "azuredisk-volume-tester-7rxsc" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:36:42 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:36:42 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:36:42 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:36:42 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.32 PodIP:10.248.0.48 PodIPs:[{IP:10.248.0.48}] StartTime:2023-01-29 08:36:42 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-29 08:37:01 +0000 UTC,FinishedAt:2023-01-29 08:37:01 +0000 UTC,ContainerID:containerd://a491140346c3f5f46ae2cb2571d3dcc35519b2df3a3a6368d7ed692fd1606787,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 ContainerID:containerd://a491140346c3f5f46ae2cb2571d3dcc35519b2df3a3a6368d7ed692fd1606787 Started:0xc000cef7a0}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Full Stack Trace sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*TestPod).WaitForSuccess(0x2253857?) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:823 +0x5d sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*DynamicallyProvisionedVolumeSnapshotTest).Run(0xc000d2dd78, {0x270dda0, 0xc000a1a4e0}, {0x26f8fa0, 0xc000800140}, 0xc0000d7340?) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/dynamically_provisioned_volume_snapshot_tester.go:142 +0x1358 ... skipping 43 lines ... Jan 29 08:39:12.209: INFO: Claim "azuredisk-5894" in namespace "pvc-6jl4s" doesn't exist in the system Jan 29 08:39:12.209: INFO: deleting StorageClass azuredisk-5894-disk.csi.azure.com-dynamic-sc-g6wqc [1mSTEP:[0m dump namespace information after failure [38;5;243m01/29/23 08:39:12.27[0m [1mSTEP:[0m Destroying namespace "azuredisk-5894" for this suite. 
[38;5;243m01/29/23 08:39:12.271[0m [38;5;243m<< End Captured GinkgoWriter Output[0m [38;5;9mJan 29 08:37:03.925: error while waiting for pod azuredisk-5894/azuredisk-volume-tester-7rxsc to be Succeeded or Failed: pod "azuredisk-volume-tester-7rxsc" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:36:42 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:36:42 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:36:42 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 08:36:42 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.32 PodIP:10.248.0.48 PodIPs:[{IP:10.248.0.48}] StartTime:2023-01-29 08:36:42 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-29 08:37:01 +0000 UTC,FinishedAt:2023-01-29 08:37:01 +0000 UTC,ContainerID:containerd://a491140346c3f5f46ae2cb2571d3dcc35519b2df3a3a6368d7ed692fd1606787,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 ContainerID:containerd://a491140346c3f5f46ae2cb2571d3dcc35519b2df3a3a6368d7ed692fd1606787 Started:0xc000cef7a0}] QOSClass:BestEffort EphemeralContainerStatuses:[]}[0m [38;5;9mIn [1m[It][0m[38;5;9m at: [1m/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:823[0m [1mThere were additional failures detected after the initial failure:[0m [38;5;13m[PANICKED][0m [38;5;13mTest Panicked[0m [38;5;13mIn [1m[DeferCleanup (Each)][0m[38;5;13m at: [1m/usr/local/go/src/runtime/panic.go:260[0m [38;5;13mruntime error: invalid memory address or nil pointer dereference[0m [38;5;13mFull Stack Trace[0m k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:274 +0x5c k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0000d03c0) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:271 +0x179 ... skipping 25 lines ... [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:39:13.421[0m [1mSTEP:[0m setting up the StorageClass [38;5;243m01/29/23 08:39:13.48[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 08:39:13.48[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:39:13.539[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:39:13.54[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 08:39:13.601[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:39:13.662[0m Jan 29 08:39:13.662: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-5qm2s" in namespace "azuredisk-493" to be "Succeeded or Failed" Jan 29 08:39:13.720: INFO: Pod "azuredisk-volume-tester-5qm2s": Phase="Pending", Reason="", readiness=false. 
Elapsed: 58.246706ms Jan 29 08:39:15.779: INFO: Pod "azuredisk-volume-tester-5qm2s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11763667s Jan 29 08:39:17.779: INFO: Pod "azuredisk-volume-tester-5qm2s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116837806s Jan 29 08:39:19.781: INFO: Pod "azuredisk-volume-tester-5qm2s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118973394s Jan 29 08:39:21.780: INFO: Pod "azuredisk-volume-tester-5qm2s": Phase="Pending", Reason="", readiness=false. Elapsed: 8.118140836s Jan 29 08:39:23.782: INFO: Pod "azuredisk-volume-tester-5qm2s": Phase="Pending", Reason="", readiness=false. Elapsed: 10.120229067s ... skipping 10 lines ... Jan 29 08:39:45.784: INFO: Pod "azuredisk-volume-tester-5qm2s": Phase="Pending", Reason="", readiness=false. Elapsed: 32.12211978s Jan 29 08:39:47.781: INFO: Pod "azuredisk-volume-tester-5qm2s": Phase="Pending", Reason="", readiness=false. Elapsed: 34.119237059s Jan 29 08:39:49.781: INFO: Pod "azuredisk-volume-tester-5qm2s": Phase="Pending", Reason="", readiness=false. Elapsed: 36.119425643s Jan 29 08:39:51.778: INFO: Pod "azuredisk-volume-tester-5qm2s": Phase="Pending", Reason="", readiness=false. Elapsed: 38.116674839s Jan 29 08:39:53.777: INFO: Pod "azuredisk-volume-tester-5qm2s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.115471442s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 08:39:53.777[0m Jan 29 08:39:53.777: INFO: Pod "azuredisk-volume-tester-5qm2s" satisfied condition "Succeeded or Failed" Jan 29 08:39:53.777: INFO: deleting Pod "azuredisk-493"/"azuredisk-volume-tester-5qm2s" Jan 29 08:39:53.837: INFO: Pod azuredisk-volume-tester-5qm2s has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-5qm2s in namespace azuredisk-493 [38;5;243m01/29/23 08:39:53.837[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/29/23 08:39:53.956[0m [1mSTEP:[0m checking the PV [38;5;243m01/29/23 08:39:54.014[0m ... skipping 70 lines ... [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:39:13.421[0m [1mSTEP:[0m setting up the StorageClass [38;5;243m01/29/23 08:39:13.48[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 08:39:13.48[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:39:13.539[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:39:13.54[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 08:39:13.601[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:39:13.662[0m Jan 29 08:39:13.662: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-5qm2s" in namespace "azuredisk-493" to be "Succeeded or Failed" Jan 29 08:39:13.720: INFO: Pod "azuredisk-volume-tester-5qm2s": Phase="Pending", Reason="", readiness=false. Elapsed: 58.246706ms Jan 29 08:39:15.779: INFO: Pod "azuredisk-volume-tester-5qm2s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11763667s Jan 29 08:39:17.779: INFO: Pod "azuredisk-volume-tester-5qm2s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116837806s Jan 29 08:39:19.781: INFO: Pod "azuredisk-volume-tester-5qm2s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118973394s Jan 29 08:39:21.780: INFO: Pod "azuredisk-volume-tester-5qm2s": Phase="Pending", Reason="", readiness=false. Elapsed: 8.118140836s Jan 29 08:39:23.782: INFO: Pod "azuredisk-volume-tester-5qm2s": Phase="Pending", Reason="", readiness=false. Elapsed: 10.120229067s ... skipping 10 lines ... 
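The tester pods in these runs are busybox containers that write "hello world" to the mounted PVC and exit, which is why a Succeeded phase plus the "hello world" log line counts as success. A sketch of such a pod built with the core/v1 types; the PVC name, mount path, and command are illustrative assumptions:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// volumeTesterPod builds a pod shaped like the azuredisk-volume-tester pods in
// the log: a busybox container that writes "hello world" to the mounted PVC
// and exits. The PVC name, mount path, and command are illustrative assumptions.
func volumeTesterPod(name, pvcName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "volume-tester",
				Image:   "registry.k8s.io/e2e-test-images/busybox:1.29-4",
				Command: []string{"/bin/sh", "-c", "echo 'hello world' > /mnt/test-1/data"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume-1",
					MountPath: "/mnt/test-1",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume-1",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
						ClaimName: pvcName,
					},
				},
			}},
		},
	}
}

func main() {
	pod := volumeTesterPod("azuredisk-volume-tester-example", "pvc-example")
	fmt.Println(pod.Name, pod.Spec.Containers[0].Command)
}
```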
Jan 29 08:39:45.784: INFO: Pod "azuredisk-volume-tester-5qm2s": Phase="Pending", Reason="", readiness=false. Elapsed: 32.12211978s Jan 29 08:39:47.781: INFO: Pod "azuredisk-volume-tester-5qm2s": Phase="Pending", Reason="", readiness=false. Elapsed: 34.119237059s Jan 29 08:39:49.781: INFO: Pod "azuredisk-volume-tester-5qm2s": Phase="Pending", Reason="", readiness=false. Elapsed: 36.119425643s Jan 29 08:39:51.778: INFO: Pod "azuredisk-volume-tester-5qm2s": Phase="Pending", Reason="", readiness=false. Elapsed: 38.116674839s Jan 29 08:39:53.777: INFO: Pod "azuredisk-volume-tester-5qm2s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.115471442s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 08:39:53.777[0m Jan 29 08:39:53.777: INFO: Pod "azuredisk-volume-tester-5qm2s" satisfied condition "Succeeded or Failed" Jan 29 08:39:53.777: INFO: deleting Pod "azuredisk-493"/"azuredisk-volume-tester-5qm2s" Jan 29 08:39:53.837: INFO: Pod azuredisk-volume-tester-5qm2s has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-5qm2s in namespace azuredisk-493 [38;5;243m01/29/23 08:39:53.837[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/29/23 08:39:53.956[0m [1mSTEP:[0m checking the PV [38;5;243m01/29/23 08:39:54.014[0m ... skipping 938 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/29/23 08:53:33.136[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 08:53:33.136[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:53:33.195[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:53:33.195[0m [1mSTEP:[0m setting up the pod [38;5;243m01/29/23 08:53:33.255[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 08:53:33.255[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:53:33.319[0m Jan 29 08:53:33.319: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-krg9d" in namespace "azuredisk-6629" to be "Succeeded or Failed" Jan 29 08:53:33.376: INFO: Pod "azuredisk-volume-tester-krg9d": Phase="Pending", Reason="", readiness=false. Elapsed: 56.32938ms Jan 29 08:53:35.433: INFO: Pod "azuredisk-volume-tester-krg9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113891843s Jan 29 08:53:37.436: INFO: Pod "azuredisk-volume-tester-krg9d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116878003s Jan 29 08:53:39.433: INFO: Pod "azuredisk-volume-tester-krg9d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113881873s Jan 29 08:53:41.433: INFO: Pod "azuredisk-volume-tester-krg9d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.114256999s Jan 29 08:53:43.433: INFO: Pod "azuredisk-volume-tester-krg9d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.114098428s ... skipping 2 lines ... Jan 29 08:53:49.433: INFO: Pod "azuredisk-volume-tester-krg9d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.113857351s Jan 29 08:53:51.434: INFO: Pod "azuredisk-volume-tester-krg9d": Phase="Pending", Reason="", readiness=false. Elapsed: 18.114537929s Jan 29 08:53:53.434: INFO: Pod "azuredisk-volume-tester-krg9d": Phase="Pending", Reason="", readiness=false. Elapsed: 20.114742452s Jan 29 08:53:55.434: INFO: Pod "azuredisk-volume-tester-krg9d": Phase="Pending", Reason="", readiness=false. Elapsed: 22.114957674s Jan 29 08:53:57.433: INFO: Pod "azuredisk-volume-tester-krg9d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.113865775s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 08:53:57.433[0m Jan 29 08:53:57.434: INFO: Pod "azuredisk-volume-tester-krg9d" satisfied condition "Succeeded or Failed" Jan 29 08:53:57.434: INFO: deleting Pod "azuredisk-6629"/"azuredisk-volume-tester-krg9d" Jan 29 08:53:57.520: INFO: Pod azuredisk-volume-tester-krg9d has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-krg9d in namespace azuredisk-6629 [38;5;243m01/29/23 08:53:57.52[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/29/23 08:53:57.642[0m [1mSTEP:[0m checking the PV [38;5;243m01/29/23 08:53:57.704[0m ... skipping 33 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/29/23 08:53:33.136[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 08:53:33.136[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 08:53:33.195[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 08:53:33.195[0m [1mSTEP:[0m setting up the pod [38;5;243m01/29/23 08:53:33.255[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 08:53:33.255[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 08:53:33.319[0m Jan 29 08:53:33.319: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-krg9d" in namespace "azuredisk-6629" to be "Succeeded or Failed" Jan 29 08:53:33.376: INFO: Pod "azuredisk-volume-tester-krg9d": Phase="Pending", Reason="", readiness=false. Elapsed: 56.32938ms Jan 29 08:53:35.433: INFO: Pod "azuredisk-volume-tester-krg9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113891843s Jan 29 08:53:37.436: INFO: Pod "azuredisk-volume-tester-krg9d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116878003s Jan 29 08:53:39.433: INFO: Pod "azuredisk-volume-tester-krg9d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113881873s Jan 29 08:53:41.433: INFO: Pod "azuredisk-volume-tester-krg9d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.114256999s Jan 29 08:53:43.433: INFO: Pod "azuredisk-volume-tester-krg9d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.114098428s ... skipping 2 lines ... Jan 29 08:53:49.433: INFO: Pod "azuredisk-volume-tester-krg9d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.113857351s Jan 29 08:53:51.434: INFO: Pod "azuredisk-volume-tester-krg9d": Phase="Pending", Reason="", readiness=false. Elapsed: 18.114537929s Jan 29 08:53:53.434: INFO: Pod "azuredisk-volume-tester-krg9d": Phase="Pending", Reason="", readiness=false. Elapsed: 20.114742452s Jan 29 08:53:55.434: INFO: Pod "azuredisk-volume-tester-krg9d": Phase="Pending", Reason="", readiness=false. Elapsed: 22.114957674s Jan 29 08:53:57.433: INFO: Pod "azuredisk-volume-tester-krg9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.113865775s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 08:53:57.433[0m Jan 29 08:53:57.434: INFO: Pod "azuredisk-volume-tester-krg9d" satisfied condition "Succeeded or Failed" Jan 29 08:53:57.434: INFO: deleting Pod "azuredisk-6629"/"azuredisk-volume-tester-krg9d" Jan 29 08:53:57.520: INFO: Pod azuredisk-volume-tester-krg9d has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-krg9d in namespace azuredisk-6629 [38;5;243m01/29/23 08:53:57.52[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/29/23 08:53:57.642[0m [1mSTEP:[0m checking the PV [38;5;243m01/29/23 08:53:57.704[0m ... skipping 95 lines ... 
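The "Unexpected error" dumps in the failed runs above show a *fmt.wrapError whose err field is a *errors.errorString: the wait helper wraps the underlying pod-failure error with %w while prepending the "error while waiting for pod ..." context. A minimal reproduction of that error shape:

```go
package main

import (
	"errors"
	"fmt"
)

func main() {
	// The inner error corresponds to the *errors.errorString in the dump above...
	inner := errors.New(`pod "azuredisk-volume-tester-7rxsc" failed with status: {Phase:Failed ...}`)

	// ...and the outer *fmt.wrapError comes from wrapping it with %w while adding
	// the "error while waiting for pod ..." context.
	outer := fmt.Errorf("error while waiting for pod azuredisk-5894/azuredisk-volume-tester-7rxsc to be Succeeded or Failed: %w", inner)

	fmt.Println(outer)                         // the full wrapped message
	fmt.Println(errors.Unwrap(outer) == inner) // true: the original error is recoverable
	fmt.Println(errors.Is(outer, inner))       // true
}
```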
Platform: linux/amd64 Topology Key: topology.disk.csi.azure.com/zone Streaming logs below: I0129 08:03:42.257297 1 azuredisk.go:175] driver userAgent: disk.csi.azure.com/v1.27.0-93a210d06a3c2f7f14a5b7d030e85f0e0d566e72 e2e-test I0129 08:03:42.257935 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider I0129 08:03:42.294478 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0129 08:03:42.294504 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider I0129 08:03:42.294513 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0129 08:03:42.294541 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully I0129 08:03:42.295570 1 azure_auth.go:253] Using AzurePublicCloud environment I0129 08:03:42.295626 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token I0129 08:03:42.295658 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 ... skipping 25 lines ... I0129 08:03:42.296022 1 azure_blobclient.go:67] Azure BlobClient using API version: 2021-09-01 I0129 08:03:42.296041 1 azure_vmasclient.go:70] Azure AvailabilitySetsClient (read ops) using rate limit config: QPS=6, bucket=20 I0129 08:03:42.296049 1 azure_vmasclient.go:73] Azure AvailabilitySetsClient (write ops) using rate limit config: QPS=100, bucket=1000 I0129 08:03:42.296133 1 azure.go:1007] attach/detach disk operation rate limit QPS: 6.000000, Bucket: 10 I0129 08:03:42.296156 1 azuredisk.go:193] disable UseInstanceMetadata for controller I0129 08:03:42.296165 1 azuredisk.go:205] cloud: AzurePublicCloud, location: westus2, rg: kubetest-biyqdrb7, VMType: vmss, PrimaryScaleSetName: k8s-agentpool-31899273-vmss, PrimaryAvailabilitySetName: , DisableAvailabilitySetNodes: false I0129 08:03:42.299976 1 mount_linux.go:287] 'umount /tmp/kubelet-detect-safe-umount788239257' failed with: exit status 32, output: umount: /tmp/kubelet-detect-safe-umount788239257: must be superuser to unmount. I0129 08:03:42.300002 1 mount_linux.go:289] Detected umount with unsafe 'not mounted' behavior I0129 08:03:42.300076 1 driver.go:81] Enabling controller service capability: CREATE_DELETE_VOLUME I0129 08:03:42.300090 1 driver.go:81] Enabling controller service capability: PUBLISH_UNPUBLISH_VOLUME I0129 08:03:42.300097 1 driver.go:81] Enabling controller service capability: CREATE_DELETE_SNAPSHOT I0129 08:03:42.300103 1 driver.go:81] Enabling controller service capability: CLONE_VOLUME I0129 08:03:42.300109 1 driver.go:81] Enabling controller service capability: EXPAND_VOLUME ... skipping 68 lines ... I0129 08:03:51.516452 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 24989 I0129 08:03:51.595523 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 32357 I0129 08:03:51.599369 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-5f817114-40a5-465a-b857-6a354958b011. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-5f817114-40a5-465a-b857-6a354958b011 to node k8s-agentpool-31899273-vmss000000 (vmState Succeeded). 
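The startup lines above show the driver first trying to read cloud config from the kube-system/azure-cloud-provider secret and, when that secret is missing, falling back to the file named by AZURE_CREDENTIAL_FILE (default /etc/kubernetes/azure.json). A sketch of that fallback order; readFromSecret is a hypothetical stand-in for the real secret lookup:

```go
package main

import (
	"fmt"
	"os"
)

// loadCloudConfig sketches the fallback order in the startup log: try the
// kube-system/azure-cloud-provider secret first, then fall back to the file
// named by AZURE_CREDENTIAL_FILE (default /etc/kubernetes/azure.json).
// readFromSecret is a hypothetical stand-in for the real secret lookup.
func loadCloudConfig(readFromSecret func() ([]byte, error)) ([]byte, error) {
	cfg, err := readFromSecret()
	if err == nil {
		return cfg, nil
	}
	fmt.Printf("could not read cloud config from secret: %v\n", err)

	credFile := os.Getenv("AZURE_CREDENTIAL_FILE")
	if credFile == "" {
		credFile = "/etc/kubernetes/azure.json" // default path used by the driver
	}
	return os.ReadFile(credFile)
}

func main() {
	secretMissing := func() ([]byte, error) {
		return nil, fmt.Errorf(`secrets "azure-cloud-provider" not found`)
	}
	if _, err := loadCloudConfig(secretMissing); err != nil {
		fmt.Println("file fallback also failed:", err)
	}
}
```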
I0129 08:03:51.599455 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-5f817114-40a5-465a-b857-6a354958b011 to node k8s-agentpool-31899273-vmss000000 I0129 08:03:51.599545 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-5f817114-40a5-465a-b857-6a354958b011 lun 0 to node k8s-agentpool-31899273-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-5f817114-40a5-465a-b857-6a354958b011:%!s(*provider.AttachDiskOptions=&{None pvc-5f817114-40a5-465a-b857-6a354958b011 false 0})] I0129 08:03:51.599636 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-5f817114-40a5-465a-b857-6a354958b011:%!s(*provider.AttachDiskOptions=&{None pvc-5f817114-40a5-465a-b857-6a354958b011 false 0})]) I0129 08:03:52.385386 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-5f817114-40a5-465a-b857-6a354958b011:%!s(*provider.AttachDiskOptions=&{None pvc-5f817114-40a5-465a-b857-6a354958b011 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:04:02.521208 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:04:02.521249 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:04:02.521269 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-5f817114-40a5-465a-b857-6a354958b011 attached to node k8s-agentpool-31899273-vmss000000. I0129 08:04:02.521286 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-5f817114-40a5-465a-b857-6a354958b011 to node k8s-agentpool-31899273-vmss000000 successfully I0129 08:04:02.521334 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=11.109590846 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-5f817114-40a5-465a-b857-6a354958b011" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:04:02.521360 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 18 lines ... 
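The ControllerPublishVolume lines follow a check-then-attach flow: GetDiskLun is consulted first, and only when no LUN is found for the disk on the target node is an attach initiated. A sketch of that decision, with getDiskLun and attachDisk as hypothetical stand-ins for the driver's real calls:

```go
package main

import (
	"errors"
	"fmt"
)

var errLunNotFound = errors.New("cannot find Lun for disk")

// ensureAttached sketches the publish flow in the log: if GetDiskLun finds an
// existing LUN the disk is already attached and that LUN is reused, otherwise
// an attach is initiated. getDiskLun and attachDisk are hypothetical stand-ins
// for the driver's real calls.
func ensureAttached(disk, node string,
	getDiskLun func(disk, node string) (int32, error),
	attachDisk func(disk, node string) (int32, error)) (int32, error) {

	lun, err := getDiskLun(disk, node)
	if err == nil {
		return lun, nil // already attached: reuse the existing LUN
	}
	if !errors.Is(err, errLunNotFound) {
		return -1, err // a real lookup failure, not just "not attached yet"
	}
	fmt.Printf("GetDiskLun: %v. Initiating attaching volume %s to node %s\n", err, disk, node)
	return attachDisk(disk, node)
}

func main() {
	getDiskLun := func(disk, node string) (int32, error) { return -1, errLunNotFound }
	attachDisk := func(disk, node string) (int32, error) { return 0, nil }

	lun, err := ensureAttached("pvc-example", "k8s-agentpool-31899273-vmss000000", getDiskLun, attachDisk)
	fmt.Println("lun:", lun, "err:", err)
}
```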
I0129 08:04:57.024361 1 controllerserver.go:319] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-5f817114-40a5-465a-b857-6a354958b011) returned with <nil> I0129 08:04:57.024409 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=5.213305722 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-5f817114-40a5-465a-b857-6a354958b011" result_code="succeeded" I0129 08:04:57.024429 1 utils.go:84] GRPC response: {} I0129 08:05:02.539325 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume I0129 08:05:02.539357 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.disk.csi.azure.com/zone":"westus2-2","topology.kubernetes.io/zone":"westus2-2"}},{"segments":{"topology.disk.csi.azure.com/zone":"westus2-1","topology.kubernetes.io/zone":"westus2-1"}}],"requisite":[{"segments":{"topology.disk.csi.azure.com/zone":"westus2-1","topology.kubernetes.io/zone":"westus2-1"}},{"segments":{"topology.disk.csi.azure.com/zone":"westus2-2","topology.kubernetes.io/zone":"westus2-2"}}]},"capacity_range":{"required_bytes":10737418240},"name":"pvc-ef946c1c-035a-41dd-b9b6-975456e2a1ba","parameters":{"csi.storage.k8s.io/pv/name":"pvc-ef946c1c-035a-41dd-b9b6-975456e2a1ba","csi.storage.k8s.io/pvc/name":"pvc-x5lv5","csi.storage.k8s.io/pvc/namespace":"azuredisk-2540","enableAsyncAttach":"false","networkAccessPolicy":"DenyAll","skuName":"Standard_LRS","userAgent":"azuredisk-e2e-test"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]} I0129 08:05:02.540116 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider I0129 08:05:02.546032 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0129 08:05:02.546062 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider I0129 08:05:02.546075 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0129 08:05:02.546103 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully I0129 08:05:02.546587 1 azure_auth.go:253] Using AzurePublicCloud environment I0129 08:05:02.546635 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token I0129 08:05:02.546654 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 ... skipping 37 lines ... 
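The CreateVolume requests carry capacity_range.required_bytes, while the later volume_context shows whole-GiB sizes (10737418240 bytes becomes "requestedsizegib":"10", 1099511627776 bytes becomes "1024"), i.e. the requested bytes are rounded up to GiB. A small sketch of that conversion; the helper name is ours, not the driver's:

```go
package main

import "fmt"

const giB = int64(1) << 30 // 1 GiB in bytes

// roundUpGiB converts a CSI capacity_range.required_bytes value into the whole
// number of GiB the disk is provisioned with, rounding up. It reproduces the
// sizes visible in these requests (10737418240 -> "requestedsizegib":"10",
// 1099511627776 -> "requestedsizegib":"1024"); the helper name is ours.
func roundUpGiB(requiredBytes int64) int64 {
	return (requiredBytes + giB - 1) / giB
}

func main() {
	fmt.Println(roundUpGiB(10737418240))   // 10
	fmt.Println(roundUpGiB(1099511627776)) // 1024
	fmt.Println(roundUpGiB(10737418241))   // 11: any partial GiB rounds up
}
```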
I0129 08:05:06.931666 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-31899273-vmss000001","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-ef946c1c-035a-41dd-b9b6-975456e2a1ba","csi.storage.k8s.io/pvc/name":"pvc-x5lv5","csi.storage.k8s.io/pvc/namespace":"azuredisk-2540","enableAsyncAttach":"false","enableasyncattach":"false","networkAccessPolicy":"DenyAll","requestedsizegib":"10","skuName":"Standard_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674979422640-8081-disk.csi.azure.com","userAgent":"azuredisk-e2e-test"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-ef946c1c-035a-41dd-b9b6-975456e2a1ba"} I0129 08:05:06.975291 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1218 I0129 08:05:06.975657 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-ef946c1c-035a-41dd-b9b6-975456e2a1ba. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-ef946c1c-035a-41dd-b9b6-975456e2a1ba to node k8s-agentpool-31899273-vmss000001 (vmState Succeeded). I0129 08:05:06.975697 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-ef946c1c-035a-41dd-b9b6-975456e2a1ba to node k8s-agentpool-31899273-vmss000001 I0129 08:05:06.975743 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-ef946c1c-035a-41dd-b9b6-975456e2a1ba lun 0 to node k8s-agentpool-31899273-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-ef946c1c-035a-41dd-b9b6-975456e2a1ba:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ef946c1c-035a-41dd-b9b6-975456e2a1ba false 0})] I0129 08:05:06.975803 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-ef946c1c-035a-41dd-b9b6-975456e2a1ba:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ef946c1c-035a-41dd-b9b6-975456e2a1ba false 0})]) I0129 08:05:07.159465 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-ef946c1c-035a-41dd-b9b6-975456e2a1ba:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ef946c1c-035a-41dd-b9b6-975456e2a1ba false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:05:17.270808 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000001) successfully I0129 08:05:17.270893 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000001) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:05:17.270938 1 controllerserver.go:413] Attach operation successful: volume 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-ef946c1c-035a-41dd-b9b6-975456e2a1ba attached to node k8s-agentpool-31899273-vmss000001. I0129 08:05:17.270954 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-ef946c1c-035a-41dd-b9b6-975456e2a1ba to node k8s-agentpool-31899273-vmss000001 successfully I0129 08:05:17.271029 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.295346731 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-ef946c1c-035a-41dd-b9b6-975456e2a1ba" node="k8s-agentpool-31899273-vmss000001" result_code="succeeded" I0129 08:05:17.271051 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 18 lines ... I0129 08:06:05.514365 1 controllerserver.go:319] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-ef946c1c-035a-41dd-b9b6-975456e2a1ba) returned with <nil> I0129 08:06:05.514409 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=5.238532083 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-ef946c1c-035a-41dd-b9b6-975456e2a1ba" result_code="succeeded" I0129 08:06:05.514429 1 utils.go:84] GRPC response: {} I0129 08:06:11.082592 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume I0129 08:06:11.082759 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.disk.csi.azure.com/zone":"westus2-1","topology.kubernetes.io/zone":"westus2-1"}}],"requisite":[{"segments":{"topology.disk.csi.azure.com/zone":"westus2-1","topology.kubernetes.io/zone":"westus2-1"}}]},"capacity_range":{"required_bytes":1099511627776},"name":"pvc-8ffd3dbb-b1c1-465a-8664-a9ea09086f54","parameters":{"csi.storage.k8s.io/pv/name":"pvc-8ffd3dbb-b1c1-465a-8664-a9ea09086f54","csi.storage.k8s.io/pvc/name":"pvc-tqt4g","csi.storage.k8s.io/pvc/namespace":"azuredisk-4728","enableAsyncAttach":"false","enableBursting":"true","perfProfile":"Basic","skuName":"Premium_LRS","userAgent":"azuredisk-e2e-test"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]} I0129 08:06:11.084158 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider I0129 08:06:11.088832 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0129 08:06:11.088860 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider I0129 08:06:11.088871 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0129 08:06:11.088947 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully I0129 08:06:11.089571 1 azure_auth.go:253] Using 
AzurePublicCloud environment I0129 08:06:11.089647 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token I0129 08:06:11.089684 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 ... skipping 37 lines ... I0129 08:06:14.118565 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-31899273-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-8ffd3dbb-b1c1-465a-8664-a9ea09086f54","csi.storage.k8s.io/pvc/name":"pvc-tqt4g","csi.storage.k8s.io/pvc/namespace":"azuredisk-4728","enableAsyncAttach":"false","enableBursting":"true","enableasyncattach":"false","perfProfile":"Basic","requestedsizegib":"1024","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674979422640-8081-disk.csi.azure.com","userAgent":"azuredisk-e2e-test"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-8ffd3dbb-b1c1-465a-8664-a9ea09086f54"} I0129 08:06:14.143346 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1338 I0129 08:06:14.143840 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-8ffd3dbb-b1c1-465a-8664-a9ea09086f54. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-8ffd3dbb-b1c1-465a-8664-a9ea09086f54 to node k8s-agentpool-31899273-vmss000000 (vmState Succeeded). I0129 08:06:14.143921 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-8ffd3dbb-b1c1-465a-8664-a9ea09086f54 to node k8s-agentpool-31899273-vmss000000 I0129 08:06:14.144069 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-8ffd3dbb-b1c1-465a-8664-a9ea09086f54 lun 0 to node k8s-agentpool-31899273-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-8ffd3dbb-b1c1-465a-8664-a9ea09086f54:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8ffd3dbb-b1c1-465a-8664-a9ea09086f54 false 0})] I0129 08:06:14.144233 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-8ffd3dbb-b1c1-465a-8664-a9ea09086f54:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8ffd3dbb-b1c1-465a-8664-a9ea09086f54 false 0})]) I0129 08:06:14.291126 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-8ffd3dbb-b1c1-465a-8664-a9ea09086f54:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8ffd3dbb-b1c1-465a-8664-a9ea09086f54 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:06:24.386820 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:06:24.386860 1 azure_vmss_cache.go:313] 
updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:06:24.386883 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-8ffd3dbb-b1c1-465a-8664-a9ea09086f54 attached to node k8s-agentpool-31899273-vmss000000. I0129 08:06:24.386900 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-8ffd3dbb-b1c1-465a-8664-a9ea09086f54 to node k8s-agentpool-31899273-vmss000000 successfully I0129 08:06:24.386943 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.243108141 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-8ffd3dbb-b1c1-465a-8664-a9ea09086f54" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:06:24.386963 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 45 lines ... I0129 08:07:42.526867 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-31899273-vmss000000","volume_capability":{"AccessType":{"Mount":{"mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc","csi.storage.k8s.io/pvc/name":"pvc-9n7bq","csi.storage.k8s.io/pvc/namespace":"azuredisk-5466","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674979422640-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc"} I0129 08:07:42.548282 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0129 08:07:42.548705 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc to node k8s-agentpool-31899273-vmss000000 (vmState Succeeded). 
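The CreateVolume handling above starts by looking for cloud credentials in the kube-system/azure-cloud-provider secret and, when that secret is absent, falls back to the file named by AZURE_CREDENTIAL_FILE (default /etc/kubernetes/azure.json) before authenticating with client_id+client_secret. The following is a minimal sketch of that fallback order only; readCloudConfig and getSecret are hypothetical helpers, not the driver's actual functions.

```go
// Sketch of the config-source fallback seen in the log above: try the
// kube-system/azure-cloud-provider secret first, then fall back to the file
// named by AZURE_CREDENTIAL_FILE (default /etc/kubernetes/azure.json).
package main

import (
	"fmt"
	"os"
)

// readCloudConfig returns the raw cloud config bytes and the source they came from.
// getSecret stands in for a Kubernetes client call and may return os.ErrNotExist.
func readCloudConfig(getSecret func(ns, name string) ([]byte, error)) ([]byte, string, error) {
	if data, err := getSecret("kube-system", "azure-cloud-provider"); err == nil {
		return data, "secret kube-system/azure-cloud-provider", nil
	}
	path := os.Getenv("AZURE_CREDENTIAL_FILE")
	if path == "" {
		path = "/etc/kubernetes/azure.json" // default path logged above
	}
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, "", fmt.Errorf("no cloud config in secret or %s: %w", path, err)
	}
	return data, "file " + path, nil
}

func main() {
	notFound := func(ns, name string) ([]byte, error) { return nil, os.ErrNotExist }
	if _, src, err := readCloudConfig(notFound); err == nil {
		fmt.Println("read cloud config from", src)
	}
}
```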
I0129 08:07:42.548738 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc to node k8s-agentpool-31899273-vmss000000 I0129 08:07:42.548774 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc lun 0 to node k8s-agentpool-31899273-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc false 0})] I0129 08:07:42.548824 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc false 0})]) I0129 08:07:42.700598 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:07:57.822615 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:07:57.822662 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:07:57.822687 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc attached to node k8s-agentpool-31899273-vmss000000. I0129 08:07:57.822787 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc to node k8s-agentpool-31899273-vmss000000 successfully I0129 08:07:57.822848 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=15.274141877 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:07:57.822885 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 32 lines ... 
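Every volume_id in these GRPC requests is an ARM managed-disk URI of the form /subscriptions/&lt;sub&gt;/resourceGroups/&lt;rg&gt;/providers/Microsoft.Compute/disks/&lt;name&gt;. Below is an assumed, stripped-down parser (not the driver's implementation) that pulls out the pieces the log keeps referring to; note the controller lower-cases the URI when it keys its diskMap, so comparisons should be case-insensitive.

```go
// Minimal parser for the managed-disk URIs appearing as volume_id above.
package main

import (
	"fmt"
	"strings"
)

type diskRef struct {
	Subscription  string
	ResourceGroup string
	DiskName      string
}

func parseDiskURI(uri string) (diskRef, error) {
	parts := strings.Split(strings.Trim(uri, "/"), "/")
	// Expected shape: subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/disks/<name>
	if len(parts) != 8 || !strings.EqualFold(parts[0], "subscriptions") ||
		!strings.EqualFold(parts[2], "resourceGroups") || !strings.EqualFold(parts[6], "disks") {
		return diskRef{}, fmt.Errorf("unexpected disk URI: %s", uri)
	}
	return diskRef{Subscription: parts[1], ResourceGroup: parts[3], DiskName: parts[7]}, nil
}

func main() {
	ref, err := parseDiskURI("/subscriptions/sub-id/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-example")
	if err == nil {
		fmt.Println(ref.ResourceGroup, ref.DiskName)
	}
}
```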
I0129 08:09:02.711923 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-31899273-vmss000000","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-7f7e7c91-2464-4683-912f-a853cfcbcde5","csi.storage.k8s.io/pvc/name":"pvc-xt8wp","csi.storage.k8s.io/pvc/namespace":"azuredisk-2790","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674979422640-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-7f7e7c91-2464-4683-912f-a853cfcbcde5"} I0129 08:09:02.733114 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0129 08:09:02.733562 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-7f7e7c91-2464-4683-912f-a853cfcbcde5. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-7f7e7c91-2464-4683-912f-a853cfcbcde5 to node k8s-agentpool-31899273-vmss000000 (vmState Succeeded). I0129 08:09:02.733596 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-7f7e7c91-2464-4683-912f-a853cfcbcde5 to node k8s-agentpool-31899273-vmss000000 I0129 08:09:02.733675 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-7f7e7c91-2464-4683-912f-a853cfcbcde5 lun 0 to node k8s-agentpool-31899273-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-7f7e7c91-2464-4683-912f-a853cfcbcde5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-7f7e7c91-2464-4683-912f-a853cfcbcde5 false 0})] I0129 08:09:02.733756 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-7f7e7c91-2464-4683-912f-a853cfcbcde5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-7f7e7c91-2464-4683-912f-a853cfcbcde5 false 0})]) I0129 08:09:02.875233 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-7f7e7c91-2464-4683-912f-a853cfcbcde5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-7f7e7c91-2464-4683-912f-a853cfcbcde5 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:09:18.002680 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:09:18.002804 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:09:18.002863 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-7f7e7c91-2464-4683-912f-a853cfcbcde5 attached to node 
k8s-agentpool-31899273-vmss000000. I0129 08:09:18.002900 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-7f7e7c91-2464-4683-912f-a853cfcbcde5 to node k8s-agentpool-31899273-vmss000000 successfully I0129 08:09:18.003022 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=15.269410508 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-7f7e7c91-2464-4683-912f-a853cfcbcde5" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:09:18.003071 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 32 lines ... I0129 08:10:12.428068 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-31899273-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-a27fd6f0-d174-4c6d-82e9-971bcbd9966a","csi.storage.k8s.io/pvc/name":"pvc-qg78b","csi.storage.k8s.io/pvc/namespace":"azuredisk-5429","requestedsizegib":"10","resourceGroup":"azuredisk-csi-driver-test-57f77ff6-9fac-11ed-843a-6e0650d04a6b","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674979422640-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-57f77ff6-9fac-11ed-843a-6e0650d04a6b/providers/Microsoft.Compute/disks/pvc-a27fd6f0-d174-4c6d-82e9-971bcbd9966a"} I0129 08:10:12.457058 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1238 I0129 08:10:12.457489 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-a27fd6f0-d174-4c6d-82e9-971bcbd9966a. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-57f77ff6-9fac-11ed-843a-6e0650d04a6b/providers/Microsoft.Compute/disks/pvc-a27fd6f0-d174-4c6d-82e9-971bcbd9966a to node k8s-agentpool-31899273-vmss000000 (vmState Succeeded). 
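Each ControllerPublishVolume above begins with "GetDiskLun returned: cannot find Lun", after which the controller assigns the lowest LUN not already used on the VM (LUN 0 for the first disk, LUN 1 once another disk is attached) and later returns it in publish_context. This is a sketch under that assumption; the real driver derives the used set from the VM's data-disk list.

```go
// Lowest-free-LUN selection matching the LUN values reported above.
package main

import "fmt"

func nextFreeLUN(used map[int32]bool, maxLUNs int32) (int32, error) {
	for lun := int32(0); lun < maxLUNs; lun++ {
		if !used[lun] {
			return lun, nil
		}
	}
	return -1, fmt.Errorf("no free LUN: all %d slots in use", maxLUNs)
}

func main() {
	used := map[int32]bool{0: true} // one data disk already attached
	lun, _ := nextFreeLUN(used, 64)
	fmt.Println("assigning LUN", lun) // assigning LUN 1
}
```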
I0129 08:10:12.457544 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-57f77ff6-9fac-11ed-843a-6e0650d04a6b/providers/Microsoft.Compute/disks/pvc-a27fd6f0-d174-4c6d-82e9-971bcbd9966a to node k8s-agentpool-31899273-vmss000000 I0129 08:10:12.457824 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-57f77ff6-9fac-11ed-843a-6e0650d04a6b/providers/Microsoft.Compute/disks/pvc-a27fd6f0-d174-4c6d-82e9-971bcbd9966a lun 0 to node k8s-agentpool-31899273-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-57f77ff6-9fac-11ed-843a-6e0650d04a6b/providers/microsoft.compute/disks/pvc-a27fd6f0-d174-4c6d-82e9-971bcbd9966a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a27fd6f0-d174-4c6d-82e9-971bcbd9966a false 0})] I0129 08:10:12.457875 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-57f77ff6-9fac-11ed-843a-6e0650d04a6b/providers/microsoft.compute/disks/pvc-a27fd6f0-d174-4c6d-82e9-971bcbd9966a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a27fd6f0-d174-4c6d-82e9-971bcbd9966a false 0})]) I0129 08:10:12.623776 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-57f77ff6-9fac-11ed-843a-6e0650d04a6b/providers/microsoft.compute/disks/pvc-a27fd6f0-d174-4c6d-82e9-971bcbd9966a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a27fd6f0-d174-4c6d-82e9-971bcbd9966a false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:10:22.755193 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:10:22.755236 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:10:22.755263 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-57f77ff6-9fac-11ed-843a-6e0650d04a6b/providers/Microsoft.Compute/disks/pvc-a27fd6f0-d174-4c6d-82e9-971bcbd9966a attached to node k8s-agentpool-31899273-vmss000000. 
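The odd "%!s(*provider.AttachDiskOptions=&{...})" and "returned with %!v(MISSING)" fragments in the attach-disk entries are not corruption of this capture: Go's fmt package prints such notices when a verb does not match its argument or when a verb has no argument at all, so they come from the driver's own logging call. A tiny reproduction (go vet flags both lines on purpose):

```go
// Demonstration of the fmt placeholder notices seen in the attach-disk log lines.
package main

import "fmt"

func main() {
	fmt.Println(fmt.Sprintf("returned with %v")) // returned with %!v(MISSING): verb without an argument
	fmt.Println(fmt.Sprintf("options: %s", 12))  // options: %!s(int=12): %s applied to a non-string value
}
```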
I0129 08:10:22.755278 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-57f77ff6-9fac-11ed-843a-6e0650d04a6b/providers/Microsoft.Compute/disks/pvc-a27fd6f0-d174-4c6d-82e9-971bcbd9966a to node k8s-agentpool-31899273-vmss000000 successfully I0129 08:10:22.755330 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.297828686999999 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-57f77ff6-9fac-11ed-843a-6e0650d04a6b/providers/Microsoft.Compute/disks/pvc-a27fd6f0-d174-4c6d-82e9-971bcbd9966a" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:10:22.755359 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 47 lines ... I0129 08:11:35.951704 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-89ec9be6-9fac-11ed-843a-6e0650d04a6b/providers/Microsoft.Compute/disks/pvc-ec8ca221-7c4d-4389-b0dd-bea7c0f08557 to node k8s-agentpool-31899273-vmss000000 I0129 08:11:35.951769 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-89ec9be6-9fac-11ed-843a-6e0650d04a6b/providers/Microsoft.Compute/disks/pvc-ec8ca221-7c4d-4389-b0dd-bea7c0f08557 lun 0 to node k8s-agentpool-31899273-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-89ec9be6-9fac-11ed-843a-6e0650d04a6b/providers/microsoft.compute/disks/pvc-ec8ca221-7c4d-4389-b0dd-bea7c0f08557:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ec8ca221-7c4d-4389-b0dd-bea7c0f08557 false 0})] I0129 08:11:35.951841 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-89ec9be6-9fac-11ed-843a-6e0650d04a6b/providers/microsoft.compute/disks/pvc-ec8ca221-7c4d-4389-b0dd-bea7c0f08557:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ec8ca221-7c4d-4389-b0dd-bea7c0f08557 false 0})]) I0129 08:11:35.968352 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1238 I0129 08:11:35.969427 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-5a5d29e0-c8ea-4a27-9904-55d94334a4dc. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-8a642555-9fac-11ed-843a-6e0650d04a6b/providers/Microsoft.Compute/disks/pvc-5a5d29e0-c8ea-4a27-9904-55d94334a4dc to node k8s-agentpool-31899273-vmss000000 (vmState Succeeded). 
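Every operation above closes with an "Observed Request Latency" entry whose latency_seconds spans the whole attach (roughly 10-35 s in this run). The pattern is simply "time the call, then record the duration with the operation's labels"; the sketch below assumes a generic recorder callback rather than the driver's Prometheus metric sink.

```go
// Timing wrapper mirroring the "Observed Request Latency" entries above.
package main

import (
	"fmt"
	"time"
)

// observeLatency runs op and reports how long it took, tagged with the request
// name and a coarse result code (the real driver uses more specific codes).
func observeLatency(request string, record func(request, result string, seconds float64), op func() error) error {
	start := time.Now()
	err := op()
	result := "succeeded"
	if err != nil {
		result = "failed"
	}
	record(request, result, time.Since(start).Seconds())
	return err
}

func main() {
	record := func(request, result string, seconds float64) {
		fmt.Printf("request=%q result_code=%q latency_seconds=%.3f\n", request, result, seconds)
	}
	_ = observeLatency("azuredisk_csi_driver_controller_publish_volume", record, func() error {
		time.Sleep(10 * time.Millisecond) // stands in for the ARM attach round-trip
		return nil
	})
}
```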
I0129 08:11:35.969506 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-8a642555-9fac-11ed-843a-6e0650d04a6b/providers/Microsoft.Compute/disks/pvc-5a5d29e0-c8ea-4a27-9904-55d94334a4dc to node k8s-agentpool-31899273-vmss000000 I0129 08:11:37.037668 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-89ec9be6-9fac-11ed-843a-6e0650d04a6b/providers/microsoft.compute/disks/pvc-ec8ca221-7c4d-4389-b0dd-bea7c0f08557:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ec8ca221-7c4d-4389-b0dd-bea7c0f08557 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:11:47.236792 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:11:47.236836 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:11:47.236865 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-89ec9be6-9fac-11ed-843a-6e0650d04a6b/providers/Microsoft.Compute/disks/pvc-ec8ca221-7c4d-4389-b0dd-bea7c0f08557 attached to node k8s-agentpool-31899273-vmss000000. I0129 08:11:47.236881 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-89ec9be6-9fac-11ed-843a-6e0650d04a6b/providers/Microsoft.Compute/disks/pvc-ec8ca221-7c4d-4389-b0dd-bea7c0f08557 to node k8s-agentpool-31899273-vmss000000 successfully I0129 08:11:47.236926 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=11.285247221 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-89ec9be6-9fac-11ed-843a-6e0650d04a6b/providers/Microsoft.Compute/disks/pvc-ec8ca221-7c4d-4389-b0dd-bea7c0f08557" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:11:47.236951 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 4 lines ... I0129 08:11:47.268213 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1466 I0129 08:11:47.268656 1 azure_controller_common.go:516] azureDisk - find disk: lun 0 name pvc-ec8ca221-7c4d-4389-b0dd-bea7c0f08557 uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-89ec9be6-9fac-11ed-843a-6e0650d04a6b/providers/Microsoft.Compute/disks/pvc-ec8ca221-7c4d-4389-b0dd-bea7c0f08557 I0129 08:11:47.268711 1 controllerserver.go:383] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-89ec9be6-9fac-11ed-843a-6e0650d04a6b/providers/Microsoft.Compute/disks/pvc-ec8ca221-7c4d-4389-b0dd-bea7c0f08557 to node k8s-agentpool-31899273-vmss000000 (vmState Succeeded). I0129 08:11:47.268727 1 controllerserver.go:398] Attach operation is successful. 
volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-89ec9be6-9fac-11ed-843a-6e0650d04a6b/providers/Microsoft.Compute/disks/pvc-ec8ca221-7c4d-4389-b0dd-bea7c0f08557 is already attached to node k8s-agentpool-31899273-vmss000000 at lun 0. I0129 08:11:47.268852 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=0.0001459 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-89ec9be6-9fac-11ed-843a-6e0650d04a6b/providers/Microsoft.Compute/disks/pvc-ec8ca221-7c4d-4389-b0dd-bea7c0f08557" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:11:47.268923 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0129 08:11:47.396575 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-8a642555-9fac-11ed-843a-6e0650d04a6b/providers/microsoft.compute/disks/pvc-5a5d29e0-c8ea-4a27-9904-55d94334a4dc:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5a5d29e0-c8ea-4a27-9904-55d94334a4dc false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:11:57.529218 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:11:57.529261 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:11:57.529286 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-8a642555-9fac-11ed-843a-6e0650d04a6b/providers/Microsoft.Compute/disks/pvc-5a5d29e0-c8ea-4a27-9904-55d94334a4dc attached to node k8s-agentpool-31899273-vmss000000. I0129 08:11:57.529371 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-8a642555-9fac-11ed-843a-6e0650d04a6b/providers/Microsoft.Compute/disks/pvc-5a5d29e0-c8ea-4a27-9904-55d94334a4dc to node k8s-agentpool-31899273-vmss000000 successfully I0129 08:11:57.529478 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=21.560036588 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-8a642555-9fac-11ed-843a-6e0650d04a6b/providers/Microsoft.Compute/disks/pvc-5a5d29e0-c8ea-4a27-9904-55d94334a4dc" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:11:57.529541 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} ... skipping 62 lines ... 
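The 0.1 ms publish that ends with "is already attached to node ... at lun 0" shows the idempotent fast path: when the disk is already present on the VM, the controller returns the existing LUN without issuing another ARM update. A minimal sketch of that check, assuming a cached view of the VM's data disks; names here are illustrative, not the driver's.

```go
// Idempotent publish: return the existing LUN if the disk is already attached.
package main

import "fmt"

type dataDisk struct {
	URI string
	LUN int32
}

// publish returns the LUN for diskURI, attaching only if it is not present yet.
func publish(attached []dataDisk, diskURI string, attach func(string) (int32, error)) (int32, error) {
	for _, d := range attached {
		if d.URI == diskURI {
			return d.LUN, nil // fast path: already attached, report the same LUN
		}
	}
	return attach(diskURI) // slow path: issue the VMSS update and wait for it
}

func main() {
	attached := []dataDisk{{URI: "pvc-ec8ca221", LUN: 0}}
	lun, _ := publish(attached, "pvc-ec8ca221", func(string) (int32, error) { return -1, nil })
	fmt.Println("publish_context LUN:", lun) // 0, no Azure call made
}
```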
I0129 08:14:09.433218 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1207 I0129 08:14:09.480284 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 24989 I0129 08:14:09.483134 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-f9925846-beff-47ec-b31a-0eb88cfa637e. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-f9925846-beff-47ec-b31a-0eb88cfa637e to node k8s-agentpool-31899273-vmss000000 (vmState Succeeded). I0129 08:14:09.483188 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-f9925846-beff-47ec-b31a-0eb88cfa637e to node k8s-agentpool-31899273-vmss000000 I0129 08:14:09.483274 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-f9925846-beff-47ec-b31a-0eb88cfa637e lun 0 to node k8s-agentpool-31899273-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-f9925846-beff-47ec-b31a-0eb88cfa637e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f9925846-beff-47ec-b31a-0eb88cfa637e false 0})] I0129 08:14:09.483341 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-f9925846-beff-47ec-b31a-0eb88cfa637e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f9925846-beff-47ec-b31a-0eb88cfa637e false 0})]) I0129 08:14:09.661804 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-f9925846-beff-47ec-b31a-0eb88cfa637e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f9925846-beff-47ec-b31a-0eb88cfa637e false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:14:44.927134 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:14:44.927171 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:14:44.927195 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-f9925846-beff-47ec-b31a-0eb88cfa637e attached to node k8s-agentpool-31899273-vmss000000. 
I0129 08:14:44.927212 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-f9925846-beff-47ec-b31a-0eb88cfa637e to node k8s-agentpool-31899273-vmss000000 successfully I0129 08:14:44.927257 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=35.493485674 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-f9925846-beff-47ec-b31a-0eb88cfa637e" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:14:44.927274 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 32 lines ... I0129 08:16:03.710826 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-31899273-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-d20c540d-22fd-4f83-af24-8cf7e1648a3f","csi.storage.k8s.io/pvc/name":"pvc-8n66h","csi.storage.k8s.io/pvc/namespace":"azuredisk-8705","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674979422640-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-d20c540d-22fd-4f83-af24-8cf7e1648a3f"} I0129 08:16:03.735779 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0129 08:16:03.736082 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-d20c540d-22fd-4f83-af24-8cf7e1648a3f. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-d20c540d-22fd-4f83-af24-8cf7e1648a3f to node k8s-agentpool-31899273-vmss000000 (vmState Succeeded). 
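The attach entries key a diskMap by the lower-cased disk URI and carry per-disk options (disk name, read-only flag, LUN), which lets a single VMSS update attach several disks in one call. A stripped-down sketch of that structure, with field names assumed from the AttachDiskOptions values printed above:

```go
// Batched attach map mirroring the diskMap printed in the log entries.
package main

import (
	"fmt"
	"strings"
)

type attachDiskOptions struct {
	DiskName string
	ReadOnly bool
	LUN      int32
}

type diskMap map[string]*attachDiskOptions // key: lower-cased managed-disk URI

func (m diskMap) add(uri, name string, readOnly bool, lun int32) {
	m[strings.ToLower(uri)] = &attachDiskOptions{DiskName: name, ReadOnly: readOnly, LUN: lun}
}

func main() {
	m := diskMap{}
	m.add("/subscriptions/sub/resourceGroups/rg/providers/Microsoft.Compute/disks/pvc-a", "pvc-a", false, 0)
	m.add("/subscriptions/sub/resourceGroups/rg/providers/Microsoft.Compute/disks/pvc-b", "pvc-b", false, 1)
	fmt.Println(len(m), "disks in one VMSS update")
}
```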
I0129 08:16:03.736132 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-d20c540d-22fd-4f83-af24-8cf7e1648a3f to node k8s-agentpool-31899273-vmss000000 I0129 08:16:03.736168 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-d20c540d-22fd-4f83-af24-8cf7e1648a3f lun 0 to node k8s-agentpool-31899273-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-d20c540d-22fd-4f83-af24-8cf7e1648a3f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-d20c540d-22fd-4f83-af24-8cf7e1648a3f false 0})] I0129 08:16:03.736208 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-d20c540d-22fd-4f83-af24-8cf7e1648a3f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-d20c540d-22fd-4f83-af24-8cf7e1648a3f false 0})]) I0129 08:16:03.941391 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-d20c540d-22fd-4f83-af24-8cf7e1648a3f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-d20c540d-22fd-4f83-af24-8cf7e1648a3f false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:16:19.070890 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:16:19.070932 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:16:19.070957 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-d20c540d-22fd-4f83-af24-8cf7e1648a3f attached to node k8s-agentpool-31899273-vmss000000. I0129 08:16:19.070997 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-d20c540d-22fd-4f83-af24-8cf7e1648a3f to node k8s-agentpool-31899273-vmss000000 successfully I0129 08:16:19.071051 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=15.334967623 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-d20c540d-22fd-4f83-af24-8cf7e1648a3f" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:16:19.071089 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 11 lines ... 
I0129 08:16:28.001309 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-31899273-vmss000001","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-75830dda-3829-4a02-9d13-70cb249ea544","csi.storage.k8s.io/pvc/name":"pvc-hpdqv","csi.storage.k8s.io/pvc/namespace":"azuredisk-8705","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674979422640-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-75830dda-3829-4a02-9d13-70cb249ea544"} I0129 08:16:28.048260 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0129 08:16:28.048631 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-75830dda-3829-4a02-9d13-70cb249ea544. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-75830dda-3829-4a02-9d13-70cb249ea544 to node k8s-agentpool-31899273-vmss000001 (vmState Succeeded). I0129 08:16:28.048672 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-75830dda-3829-4a02-9d13-70cb249ea544 to node k8s-agentpool-31899273-vmss000001 I0129 08:16:28.048713 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-75830dda-3829-4a02-9d13-70cb249ea544 lun 0 to node k8s-agentpool-31899273-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-75830dda-3829-4a02-9d13-70cb249ea544:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-75830dda-3829-4a02-9d13-70cb249ea544 false 0})] I0129 08:16:28.048762 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-75830dda-3829-4a02-9d13-70cb249ea544:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-75830dda-3829-4a02-9d13-70cb249ea544 false 0})]) I0129 08:16:28.237098 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-75830dda-3829-4a02-9d13-70cb249ea544:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-75830dda-3829-4a02-9d13-70cb249ea544 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:16:38.347523 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000001) successfully I0129 08:16:38.347566 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000001) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:16:38.347609 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-75830dda-3829-4a02-9d13-70cb249ea544 attached to node 
k8s-agentpool-31899273-vmss000001. I0129 08:16:38.347627 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-75830dda-3829-4a02-9d13-70cb249ea544 to node k8s-agentpool-31899273-vmss000001 successfully I0129 08:16:38.347701 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.299049727 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-75830dda-3829-4a02-9d13-70cb249ea544" node="k8s-agentpool-31899273-vmss000001" result_code="succeeded" I0129 08:16:38.347718 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 19 lines ... I0129 08:16:52.371815 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-31899273-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-8076309d-d65c-4468-9d37-42d4c5a5c4ea","csi.storage.k8s.io/pvc/name":"pvc-b7xj5","csi.storage.k8s.io/pvc/namespace":"azuredisk-8705","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674979422640-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-8076309d-d65c-4468-9d37-42d4c5a5c4ea"} I0129 08:16:52.396562 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0129 08:16:52.397201 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-8076309d-d65c-4468-9d37-42d4c5a5c4ea. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-8076309d-d65c-4468-9d37-42d4c5a5c4ea to node k8s-agentpool-31899273-vmss000000 (vmState Succeeded). 
I0129 08:16:52.397238 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-8076309d-d65c-4468-9d37-42d4c5a5c4ea to node k8s-agentpool-31899273-vmss000000 I0129 08:16:52.397282 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-8076309d-d65c-4468-9d37-42d4c5a5c4ea lun 1 to node k8s-agentpool-31899273-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-8076309d-d65c-4468-9d37-42d4c5a5c4ea:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8076309d-d65c-4468-9d37-42d4c5a5c4ea false 1})] I0129 08:16:52.397364 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-8076309d-d65c-4468-9d37-42d4c5a5c4ea:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8076309d-d65c-4468-9d37-42d4c5a5c4ea false 1})]) I0129 08:16:52.561948 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-8076309d-d65c-4468-9d37-42d4c5a5c4ea:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8076309d-d65c-4468-9d37-42d4c5a5c4ea false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:17:02.725490 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:17:02.725535 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:17:02.725557 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-8076309d-d65c-4468-9d37-42d4c5a5c4ea attached to node k8s-agentpool-31899273-vmss000000. I0129 08:17:02.725573 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-8076309d-d65c-4468-9d37-42d4c5a5c4ea to node k8s-agentpool-31899273-vmss000000 successfully I0129 08:17:02.725827 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.328450425 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-8076309d-d65c-4468-9d37-42d4c5a5c4ea" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:17:02.725881 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} ... skipping 83 lines ... 
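After each successful VMSS update the controller drops the cached VM entry (DeleteCacheForNode) and then writes the fresh instance view back under cacheKey "resourceGroup/vmssName" (updateCache), so subsequent LUN lookups see the new disk list. Below is only an invalidate-then-update sketch around a mutex-guarded map; the real cache is the shared cloud-provider VMSS cache, not this toy type.

```go
// Invalidate-then-update pattern matching DeleteCacheForNode/updateCache above.
package main

import (
	"fmt"
	"sync"
)

type vmCache struct {
	mu    sync.Mutex
	byKey map[string]map[string][]string // cacheKey -> instance -> attached disk names
}

func newVMCache() *vmCache { return &vmCache{byKey: map[string]map[string][]string{}} }

func (c *vmCache) deleteForNode(cacheKey, instance string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.byKey[cacheKey], instance)
}

func (c *vmCache) update(cacheKey, instance string, disks []string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.byKey[cacheKey] == nil {
		c.byKey[cacheKey] = map[string][]string{}
	}
	c.byKey[cacheKey][instance] = disks
}

func main() {
	c := newVMCache()
	key, vm := "kubetest-biyqdrb7/k8s-agentpool-31899273-vmss", "k8s-agentpool-31899273-vmss000000"
	c.update(key, vm, []string{"pvc-old"})
	c.deleteForNode(key, vm)                           // invalidate after the ARM write
	c.update(key, vm, []string{"pvc-old", "pvc-new"}) // repopulate from the fresh VM model
	fmt.Println(c.byKey[key][vm])
}
```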
I0129 08:20:29.596282 1 azure_vmss_cache.go:327] refresh the cache of NonVmssUniformNodesCache in rg map[kubetest-biyqdrb7:{}] I0129 08:20:29.623056 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 12 I0129 08:20:29.623376 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-80458a89-9a87-434e-bbab-afba1ce0d09f. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-80458a89-9a87-434e-bbab-afba1ce0d09f to node k8s-agentpool-31899273-vmss000000 (vmState Succeeded). I0129 08:20:29.623462 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-80458a89-9a87-434e-bbab-afba1ce0d09f to node k8s-agentpool-31899273-vmss000000 I0129 08:20:29.623506 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-80458a89-9a87-434e-bbab-afba1ce0d09f lun 0 to node k8s-agentpool-31899273-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-80458a89-9a87-434e-bbab-afba1ce0d09f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-80458a89-9a87-434e-bbab-afba1ce0d09f false 0})] I0129 08:20:29.623549 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-80458a89-9a87-434e-bbab-afba1ce0d09f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-80458a89-9a87-434e-bbab-afba1ce0d09f false 0})]) I0129 08:20:29.804233 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-80458a89-9a87-434e-bbab-afba1ce0d09f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-80458a89-9a87-434e-bbab-afba1ce0d09f false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:20:39.967566 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:20:39.967609 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:20:39.967774 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-80458a89-9a87-434e-bbab-afba1ce0d09f attached to node k8s-agentpool-31899273-vmss000000. 
I0129 08:20:39.968140 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-80458a89-9a87-434e-bbab-afba1ce0d09f to node k8s-agentpool-31899273-vmss000000 successfully I0129 08:20:39.968202 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.371897131 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-80458a89-9a87-434e-bbab-afba1ce0d09f" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:20:39.968228 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 70 lines ... I0129 08:23:21.520691 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-31899273-vmss000000","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-39010e21-6bef-4989-8e3e-9602cb52ee86","csi.storage.k8s.io/pvc/name":"pvc-zxcrm","csi.storage.k8s.io/pvc/namespace":"azuredisk-9241","fsType":"xfs","requestedsizegib":"10","skuName":"Standard_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674979422640-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-39010e21-6bef-4989-8e3e-9602cb52ee86"} I0129 08:23:21.541511 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1219 I0129 08:23:21.541798 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-39010e21-6bef-4989-8e3e-9602cb52ee86. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-39010e21-6bef-4989-8e3e-9602cb52ee86 to node k8s-agentpool-31899273-vmss000000 (vmState Succeeded). 
I0129 08:23:21.541828 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-39010e21-6bef-4989-8e3e-9602cb52ee86 to node k8s-agentpool-31899273-vmss000000 I0129 08:23:21.541867 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-39010e21-6bef-4989-8e3e-9602cb52ee86 lun 0 to node k8s-agentpool-31899273-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-39010e21-6bef-4989-8e3e-9602cb52ee86:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-39010e21-6bef-4989-8e3e-9602cb52ee86 false 0})] I0129 08:23:21.541917 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-39010e21-6bef-4989-8e3e-9602cb52ee86:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-39010e21-6bef-4989-8e3e-9602cb52ee86 false 0})]) I0129 08:23:21.686692 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-39010e21-6bef-4989-8e3e-9602cb52ee86:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-39010e21-6bef-4989-8e3e-9602cb52ee86 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:23:31.771363 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:23:31.771405 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:23:31.771431 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-39010e21-6bef-4989-8e3e-9602cb52ee86 attached to node k8s-agentpool-31899273-vmss000000. I0129 08:23:31.771448 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-39010e21-6bef-4989-8e3e-9602cb52ee86 to node k8s-agentpool-31899273-vmss000000 successfully I0129 08:23:31.771506 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.229698889 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-39010e21-6bef-4989-8e3e-9602cb52ee86" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:23:31.771564 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 14 lines ... 
I0129 08:23:50.865906 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-44c95826-60be-4bda-8c57-24bda4f73a6f to node k8s-agentpool-31899273-vmss000000 I0129 08:23:50.866122 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-44c95826-60be-4bda-8c57-24bda4f73a6f lun 1 to node k8s-agentpool-31899273-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-44c95826-60be-4bda-8c57-24bda4f73a6f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-44c95826-60be-4bda-8c57-24bda4f73a6f false 1})] I0129 08:23:50.866370 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-44c95826-60be-4bda-8c57-24bda4f73a6f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-44c95826-60be-4bda-8c57-24bda4f73a6f false 1})]) I0129 08:23:50.929180 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0129 08:23:50.929459 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-31899273-vmss000000","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-39010e21-6bef-4989-8e3e-9602cb52ee86"} I0129 08:23:50.929613 1 controllerserver.go:471] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-39010e21-6bef-4989-8e3e-9602cb52ee86 from node k8s-agentpool-31899273-vmss000000 I0129 08:23:51.042210 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-44c95826-60be-4bda-8c57-24bda4f73a6f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-44c95826-60be-4bda-8c57-24bda4f73a6f false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:24:01.169194 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:24:01.169246 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:24:01.169545 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-44c95826-60be-4bda-8c57-24bda4f73a6f attached to node k8s-agentpool-31899273-vmss000000. 
I0129 08:24:01.169577 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-44c95826-60be-4bda-8c57-24bda4f73a6f to node k8s-agentpool-31899273-vmss000000 successfully I0129 08:24:01.169704 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.30398476 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-44c95826-60be-4bda-8c57-24bda4f73a6f" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:24:01.169733 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} ... skipping 27 lines ... I0129 08:24:41.307790 1 azure_controller_common.go:398] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-44c95826-60be-4bda-8c57-24bda4f73a6f from node k8s-agentpool-31899273-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-44c95826-60be-4bda-8c57-24bda4f73a6f:pvc-44c95826-60be-4bda-8c57-24bda4f73a6f] E0129 08:24:41.307973 1 azure_controller_vmss.go:202] detach azure disk on node(k8s-agentpool-31899273-vmss000000): disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-44c95826-60be-4bda-8c57-24bda4f73a6f:pvc-44c95826-60be-4bda-8c57-24bda4f73a6f]) not found I0129 08:24:41.307998 1 azure_controller_vmss.go:239] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-44c95826-60be-4bda-8c57-24bda4f73a6f:pvc-44c95826-60be-4bda-8c57-24bda4f73a6f]) I0129 08:24:43.255976 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0129 08:24:43.256002 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-44c95826-60be-4bda-8c57-24bda4f73a6f"} I0129 08:24:43.256095 1 controllerserver.go:317] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-44c95826-60be-4bda-8c57-24bda4f73a6f) I0129 08:24:43.256111 1 controllerserver.go:319] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-44c95826-60be-4bda-8c57-24bda4f73a6f) returned with failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-44c95826-60be-4bda-8c57-24bda4f73a6f) since it's in attaching or detaching state I0129 08:24:43.256167 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=3.16e-05 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-44c95826-60be-4bda-8c57-24bda4f73a6f" 
result_code="failed_csi_driver_controller_delete_volume" E0129 08:24:43.256182 1 utils.go:82] GRPC error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-44c95826-60be-4bda-8c57-24bda4f73a6f) since it's in attaching or detaching state I0129 08:24:46.657869 1 azure_controller_vmss.go:252] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-44c95826-60be-4bda-8c57-24bda4f73a6f:pvc-44c95826-60be-4bda-8c57-24bda4f73a6f]) returned with <nil> I0129 08:24:46.657951 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:24:46.657970 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:24:46.657982 1 azure_controller_common.go:422] azureDisk - detach disk(pvc-44c95826-60be-4bda-8c57-24bda4f73a6f, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-44c95826-60be-4bda-8c57-24bda4f73a6f) succeeded I0129 08:24:46.657994 1 controllerserver.go:480] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-44c95826-60be-4bda-8c57-24bda4f73a6f from node k8s-agentpool-31899273-vmss000000 successfully I0129 08:24:46.658054 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=5.350411961 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-44c95826-60be-4bda-8c57-24bda4f73a6f" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" ... skipping 28 lines ... I0129 08:25:38.042019 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-31899273-vmss000000","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-ae9d1e51-2899-4ca9-891c-100077c9f88c","csi.storage.k8s.io/pvc/name":"pvc-f28rz","csi.storage.k8s.io/pvc/namespace":"azuredisk-9336","fsType":"xfs","networkAccessPolicy":"DenyAll","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674979422640-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-ae9d1e51-2899-4ca9-891c-100077c9f88c"} I0129 08:25:38.069297 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1192 I0129 08:25:38.069637 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-ae9d1e51-2899-4ca9-891c-100077c9f88c. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-ae9d1e51-2899-4ca9-891c-100077c9f88c to node k8s-agentpool-31899273-vmss000000 (vmState Succeeded). 
I0129 08:25:38.069674 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-ae9d1e51-2899-4ca9-891c-100077c9f88c to node k8s-agentpool-31899273-vmss000000 I0129 08:25:38.069714 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-ae9d1e51-2899-4ca9-891c-100077c9f88c lun 0 to node k8s-agentpool-31899273-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-ae9d1e51-2899-4ca9-891c-100077c9f88c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ae9d1e51-2899-4ca9-891c-100077c9f88c false 0})] I0129 08:25:38.069761 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-ae9d1e51-2899-4ca9-891c-100077c9f88c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ae9d1e51-2899-4ca9-891c-100077c9f88c false 0})]) I0129 08:25:38.223150 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-ae9d1e51-2899-4ca9-891c-100077c9f88c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ae9d1e51-2899-4ca9-891c-100077c9f88c false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:25:48.371523 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:25:48.371559 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:25:48.371585 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-ae9d1e51-2899-4ca9-891c-100077c9f88c attached to node k8s-agentpool-31899273-vmss000000. I0129 08:25:48.371601 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-ae9d1e51-2899-4ca9-891c-100077c9f88c to node k8s-agentpool-31899273-vmss000000 successfully I0129 08:25:48.371646 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.302013008 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-ae9d1e51-2899-4ca9-891c-100077c9f88c" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:25:48.371662 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 33 lines ... 
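The attach that just finished above is the common publish path: GetDiskLun reports that the disk has no LUN yet, the controller initiates the attach, and the LUN the disk landed on is returned in the gRPC publish_context. Below is a minimal, self-contained Go sketch of that idempotent check-then-attach shape; the fakeVM type and its helpers are stand-ins for illustration and not the driver's real code.

// Hypothetical sketch (not the driver's actual code): look up the disk's LUN
// first and only attach when it is missing, mirroring the
// "GetDiskLun returned: cannot find Lun ... Initiating attaching volume" and
// publish_context {"LUN":"0"} lines above.
package main

import (
	"errors"
	"fmt"
	"strconv"
)

// errLunNotFound stands in for the "cannot find Lun" condition in the log.
var errLunNotFound = errors.New("cannot find LUN for disk")

// fakeVM is a stand-in for a VM's current data-disk layout: disk URI -> LUN.
type fakeVM struct {
	disks   map[string]int
	nextLun int
}

// getDiskLun reports the LUN a disk is attached at, or errLunNotFound.
func (vm *fakeVM) getDiskLun(diskURI string) (int, error) {
	if lun, ok := vm.disks[diskURI]; ok {
		return lun, nil
	}
	return 0, errLunNotFound
}

// attachDisk simulates attaching the disk and returns the LUN it was given.
func (vm *fakeVM) attachDisk(diskURI string) int {
	lun := vm.nextLun
	vm.disks[diskURI] = lun
	vm.nextLun++
	return lun
}

// publishVolume reuses an existing LUN when the disk is already attached,
// otherwise attaches it and reports the new LUN as publish context.
func publishVolume(vm *fakeVM, diskURI string) (map[string]string, error) {
	lun, err := vm.getDiskLun(diskURI)
	if errors.Is(err, errLunNotFound) {
		lun = vm.attachDisk(diskURI)
	} else if err != nil {
		return nil, err
	}
	return map[string]string{"LUN": strconv.Itoa(lun)}, nil
}

func main() {
	vm := &fakeVM{disks: map[string]int{}}
	ctx, _ := publishVolume(vm, "pvc-example-disk")
	fmt.Println(ctx) // map[LUN:0]
	// A second call is a no-op and returns the same LUN.
	ctx, _ = publishVolume(vm, "pvc-example-disk")
	fmt.Println(ctx) // map[LUN:0]
}

A repeated publishVolume call for the same disk takes the already-attached branch and returns the same LUN, which is what keeps retried publish requests harmless.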
I0129 08:26:16.276178 1 azure_controller_common.go:422] azureDisk - detach disk(pvc-ae9d1e51-2899-4ca9-891c-100077c9f88c, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-ae9d1e51-2899-4ca9-891c-100077c9f88c) succeeded I0129 08:26:16.276212 1 controllerserver.go:480] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-ae9d1e51-2899-4ca9-891c-100077c9f88c from node k8s-agentpool-31899273-vmss000000 successfully I0129 08:26:16.276276 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=15.260332296 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-ae9d1e51-2899-4ca9-891c-100077c9f88c" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:26:16.276299 1 utils.go:84] GRPC response: {} I0129 08:26:16.276413 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-80108312-9875-4b31-b5f4-69aef4212e3d lun 0 to node k8s-agentpool-31899273-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-80108312-9875-4b31-b5f4-69aef4212e3d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-80108312-9875-4b31-b5f4-69aef4212e3d false 0})] I0129 08:26:16.276475 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-80108312-9875-4b31-b5f4-69aef4212e3d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-80108312-9875-4b31-b5f4-69aef4212e3d false 0})]) I0129 08:26:16.473185 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-80108312-9875-4b31-b5f4-69aef4212e3d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-80108312-9875-4b31-b5f4-69aef4212e3d false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:26:26.599539 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:26:26.599580 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:26:26.599607 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-80108312-9875-4b31-b5f4-69aef4212e3d attached to node k8s-agentpool-31899273-vmss000000. 
I0129 08:26:26.599623 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-80108312-9875-4b31-b5f4-69aef4212e3d to node k8s-agentpool-31899273-vmss000000 successfully I0129 08:26:26.599672 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=19.138380857 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-80108312-9875-4b31-b5f4-69aef4212e3d" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:26:26.599699 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 70 lines ... I0129 08:27:40.597175 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0129 08:27:40.597502 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-efa24a07-87b4-4ad5-af23-ce4b71ced14e. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-efa24a07-87b4-4ad5-af23-ce4b71ced14e to node k8s-agentpool-31899273-vmss000000 (vmState Succeeded). I0129 08:27:40.597533 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-efa24a07-87b4-4ad5-af23-ce4b71ced14e to node k8s-agentpool-31899273-vmss000000 I0129 08:27:40.598626 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0129 08:27:40.598860 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-9fd37058-65d7-4547-9792-a3f0c9e8399b. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-9fd37058-65d7-4547-9792-a3f0c9e8399b to node k8s-agentpool-31899273-vmss000000 (vmState Succeeded). 
I0129 08:27:40.598892 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-9fd37058-65d7-4547-9792-a3f0c9e8399b to node k8s-agentpool-31899273-vmss000000 I0129 08:27:41.486474 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-34f7b769-fabe-4a7d-84a4-008b1e5c5df4:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-34f7b769-fabe-4a7d-84a4-008b1e5c5df4 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:28:16.731268 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:28:16.731308 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:28:16.731342 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-34f7b769-fabe-4a7d-84a4-008b1e5c5df4 attached to node k8s-agentpool-31899273-vmss000000. I0129 08:28:16.731391 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-34f7b769-fabe-4a7d-84a4-008b1e5c5df4 to node k8s-agentpool-31899273-vmss000000 successfully I0129 08:28:16.731437 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=36.150866151 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-34f7b769-fabe-4a7d-84a4-008b1e5c5df4" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:28:16.731455 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0129 08:28:16.731685 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-efa24a07-87b4-4ad5-af23-ce4b71ced14e lun 1 to node k8s-agentpool-31899273-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-9fd37058-65d7-4547-9792-a3f0c9e8399b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-9fd37058-65d7-4547-9792-a3f0c9e8399b false 2}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-efa24a07-87b4-4ad5-af23-ce4b71ced14e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-efa24a07-87b4-4ad5-af23-ce4b71ced14e false 1})] I0129 08:28:16.731788 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-9fd37058-65d7-4547-9792-a3f0c9e8399b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-9fd37058-65d7-4547-9792-a3f0c9e8399b false 2}) 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-efa24a07-87b4-4ad5-af23-ce4b71ced14e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-efa24a07-87b4-4ad5-af23-ce4b71ced14e false 1})]) I0129 08:28:16.948629 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-9fd37058-65d7-4547-9792-a3f0c9e8399b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-9fd37058-65d7-4547-9792-a3f0c9e8399b false 2}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-efa24a07-87b4-4ad5-af23-ce4b71ced14e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-efa24a07-87b4-4ad5-af23-ce4b71ced14e false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:28:27.052940 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:28:27.052978 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:28:27.053014 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-efa24a07-87b4-4ad5-af23-ce4b71ced14e attached to node k8s-agentpool-31899273-vmss000000. I0129 08:28:27.053032 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-efa24a07-87b4-4ad5-af23-ce4b71ced14e to node k8s-agentpool-31899273-vmss000000 successfully I0129 08:28:27.053126 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-9fd37058-65d7-4547-9792-a3f0c9e8399b lun 2 to node k8s-agentpool-31899273-vmss000000, diskMap: map[] I0129 08:28:27.053149 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-9fd37058-65d7-4547-9792-a3f0c9e8399b attached to node k8s-agentpool-31899273-vmss000000. ... skipping 64 lines ... 
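The attach just above carries two disks in one diskMap (pvc-efa24a07 at lun 1 and pvc-9fd37058 at lun 2), so several pending attach requests are folded into a single VM update. The Go sketch below shows that batching idea under assumed types; attachBatcher, attachOptions and the updateVM callback are hypothetical, not the driver's API.

// Hypothetical sketch: coalesce several queued attach requests, keyed by disk
// URI, into one "attach disk list" call, as suggested by the diskMap entries
// above that carry two disks with pre-assigned LUNs in a single VM update.
package main

import (
	"fmt"
	"sort"
)

// attachOptions mirrors the per-disk options visible in the diskMap log entries.
type attachOptions struct {
	diskName string
	readOnly bool
	lun      int
}

// attachBatcher accumulates requests so one VM update can attach all of them.
type attachBatcher struct {
	pending map[string]attachOptions
}

func (b *attachBatcher) queue(diskURI string, opts attachOptions) {
	if b.pending == nil {
		b.pending = map[string]attachOptions{}
	}
	b.pending[diskURI] = opts
}

// flush issues a single update carrying every queued disk, then clears the queue.
func (b *attachBatcher) flush(updateVM func(diskMap map[string]attachOptions) error) error {
	if len(b.pending) == 0 {
		return nil
	}
	batch := b.pending
	b.pending = nil
	return updateVM(batch)
}

func main() {
	var b attachBatcher
	b.queue("/subscriptions/.../disks/pvc-a", attachOptions{diskName: "pvc-a", lun: 1})
	b.queue("/subscriptions/.../disks/pvc-b", attachOptions{diskName: "pvc-b", lun: 2})

	// Stand-in for the single VM update that carries the whole diskMap.
	_ = b.flush(func(diskMap map[string]attachOptions) error {
		uris := make([]string, 0, len(diskMap))
		for uri := range diskMap {
			uris = append(uris, uri)
		}
		sort.Strings(uris)
		for _, uri := range uris {
			fmt.Printf("attach %s at lun %d\n", uri, diskMap[uri].lun)
		}
		return nil
	})
}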
I0129 08:29:26.980209 1 azure_controller_vmss.go:239] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-efa24a07-87b4-4ad5-af23-ce4b71ced14e:pvc-efa24a07-87b4-4ad5-af23-ce4b71ced14e]) I0129 08:29:26.980106 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=5.205165755 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-9fd37058-65d7-4547-9792-a3f0c9e8399b" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:29:26.980537 1 utils.go:84] GRPC response: {} I0129 08:29:30.774327 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0129 08:29:30.774555 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-efa24a07-87b4-4ad5-af23-ce4b71ced14e"} I0129 08:29:30.774688 1 controllerserver.go:317] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-efa24a07-87b4-4ad5-af23-ce4b71ced14e) I0129 08:29:30.774817 1 controllerserver.go:319] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-efa24a07-87b4-4ad5-af23-ce4b71ced14e) returned with failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-efa24a07-87b4-4ad5-af23-ce4b71ced14e) since it's in attaching or detaching state I0129 08:29:30.774884 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=0.000154701 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-efa24a07-87b4-4ad5-af23-ce4b71ced14e" result_code="failed_csi_driver_controller_delete_volume" E0129 08:29:30.774913 1 utils.go:82] GRPC error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-efa24a07-87b4-4ad5-af23-ce4b71ced14e) since it's in attaching or detaching state I0129 08:29:31.775782 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0129 08:29:31.775815 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-efa24a07-87b4-4ad5-af23-ce4b71ced14e"} I0129 08:29:31.775915 1 controllerserver.go:317] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-efa24a07-87b4-4ad5-af23-ce4b71ced14e) I0129 08:29:31.775931 1 controllerserver.go:319] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-efa24a07-87b4-4ad5-af23-ce4b71ced14e) returned with failed to delete 
disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-efa24a07-87b4-4ad5-af23-ce4b71ced14e) since it's in attaching or detaching state I0129 08:29:31.775987 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=3.3601e-05 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-efa24a07-87b4-4ad5-af23-ce4b71ced14e" result_code="failed_csi_driver_controller_delete_volume" E0129 08:29:31.776003 1 utils.go:82] GRPC error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-efa24a07-87b4-4ad5-af23-ce4b71ced14e) since it's in attaching or detaching state I0129 08:29:32.205526 1 azure_controller_vmss.go:252] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-efa24a07-87b4-4ad5-af23-ce4b71ced14e:pvc-efa24a07-87b4-4ad5-af23-ce4b71ced14e]) returned with <nil> I0129 08:29:32.205705 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:29:32.205780 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:29:32.205818 1 azure_controller_common.go:422] azureDisk - detach disk(pvc-efa24a07-87b4-4ad5-af23-ce4b71ced14e, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-efa24a07-87b4-4ad5-af23-ce4b71ced14e) succeeded I0129 08:29:32.205865 1 controllerserver.go:480] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-efa24a07-87b4-4ad5-af23-ce4b71ced14e from node k8s-agentpool-31899273-vmss000000 successfully I0129 08:29:32.205935 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.428944896 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-efa24a07-87b4-4ad5-af23-ce4b71ced14e" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" ... skipping 43 lines ... I0129 08:29:55.866454 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-6d22f85e-bdd6-4148-ad33-9ec1adf760e9. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-6d22f85e-bdd6-4148-ad33-9ec1adf760e9 to node k8s-agentpool-31899273-vmss000000 (vmState Succeeded). 
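The two DeleteVolume failures above are transient: deletion is refused while the disk is still "in attaching or detaching state", and the caller simply retries roughly once a second until the detach settles (the eventual successful delete falls inside the skipped lines). A small Go sketch of that refuse-and-retry shape, with stand-in state and timing rather than the driver's actual logic:

// Hypothetical sketch: a delete that fails while an attach/detach is in
// flight, plus the retry loop that keeps calling it until the state settles.
package main

import (
	"errors"
	"fmt"
	"time"
)

type diskState int

const (
	stateDetaching diskState = iota
	stateUnattached
)

var errTransientState = errors.New("disk is in attaching or detaching state")

// deleteDisk only succeeds once the disk has settled into an unattached state.
func deleteDisk(state diskState) error {
	if state == stateDetaching {
		return errTransientState
	}
	return nil
}

func main() {
	// Pretend the detach needs three polls before it completes.
	pollsUntilDetached := 3
	for attempt := 1; ; attempt++ {
		state := stateDetaching
		if attempt > pollsUntilDetached {
			state = stateUnattached
		}
		if err := deleteDisk(state); err != nil {
			fmt.Printf("attempt %d: retrying: %v\n", attempt, err)
			time.Sleep(10 * time.Millisecond)
			continue
		}
		fmt.Printf("attempt %d: disk deleted\n", attempt)
		return
	}
}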
I0129 08:29:55.866493 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-6d22f85e-bdd6-4148-ad33-9ec1adf760e9 to node k8s-agentpool-31899273-vmss000000 I0129 08:29:55.866626 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-6d22f85e-bdd6-4148-ad33-9ec1adf760e9 lun 0 to node k8s-agentpool-31899273-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-6d22f85e-bdd6-4148-ad33-9ec1adf760e9:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-6d22f85e-bdd6-4148-ad33-9ec1adf760e9 false 0})] I0129 08:29:55.866820 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-6d22f85e-bdd6-4148-ad33-9ec1adf760e9:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-6d22f85e-bdd6-4148-ad33-9ec1adf760e9 false 0})]) I0129 08:29:55.866958 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-2becc751-f9ec-402a-9995-deab3f877bd0. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2becc751-f9ec-402a-9995-deab3f877bd0 to node k8s-agentpool-31899273-vmss000000 (vmState Succeeded). I0129 08:29:55.867109 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2becc751-f9ec-402a-9995-deab3f877bd0 to node k8s-agentpool-31899273-vmss000000 I0129 08:29:56.113749 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-6d22f85e-bdd6-4148-ad33-9ec1adf760e9:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-6d22f85e-bdd6-4148-ad33-9ec1adf760e9 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:30:31.338348 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:30:31.338671 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:30:31.338724 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-6d22f85e-bdd6-4148-ad33-9ec1adf760e9 attached to node k8s-agentpool-31899273-vmss000000. 
I0129 08:30:31.338744 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-6d22f85e-bdd6-4148-ad33-9ec1adf760e9 to node k8s-agentpool-31899273-vmss000000 successfully I0129 08:30:31.338884 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=35.472364297 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-6d22f85e-bdd6-4148-ad33-9ec1adf760e9" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:30:31.338853 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2becc751-f9ec-402a-9995-deab3f877bd0 lun 1 to node k8s-agentpool-31899273-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-2becc751-f9ec-402a-9995-deab3f877bd0:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2becc751-f9ec-402a-9995-deab3f877bd0 false 1})] I0129 08:30:31.339021 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0129 08:30:31.339160 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-2becc751-f9ec-402a-9995-deab3f877bd0:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2becc751-f9ec-402a-9995-deab3f877bd0 false 1})]) I0129 08:30:31.567349 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-2becc751-f9ec-402a-9995-deab3f877bd0:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2becc751-f9ec-402a-9995-deab3f877bd0 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:30:41.700419 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:30:41.700459 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:30:41.700484 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2becc751-f9ec-402a-9995-deab3f877bd0 attached to node k8s-agentpool-31899273-vmss000000. 
I0129 08:30:41.700501 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2becc751-f9ec-402a-9995-deab3f877bd0 to node k8s-agentpool-31899273-vmss000000 successfully I0129 08:30:41.700556 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=45.833695373 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2becc751-f9ec-402a-9995-deab3f877bd0" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:30:41.700577 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} ... skipping 45 lines ... I0129 08:31:42.055060 1 azure_controller_common.go:398] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2becc751-f9ec-402a-9995-deab3f877bd0 from node k8s-agentpool-31899273-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-2becc751-f9ec-402a-9995-deab3f877bd0:pvc-2becc751-f9ec-402a-9995-deab3f877bd0] E0129 08:31:42.055099 1 azure_controller_vmss.go:202] detach azure disk on node(k8s-agentpool-31899273-vmss000000): disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-2becc751-f9ec-402a-9995-deab3f877bd0:pvc-2becc751-f9ec-402a-9995-deab3f877bd0]) not found I0129 08:31:42.055115 1 azure_controller_vmss.go:239] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-2becc751-f9ec-402a-9995-deab3f877bd0:pvc-2becc751-f9ec-402a-9995-deab3f877bd0]) I0129 08:31:46.125004 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0129 08:31:46.125037 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2becc751-f9ec-402a-9995-deab3f877bd0"} I0129 08:31:46.125160 1 controllerserver.go:317] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2becc751-f9ec-402a-9995-deab3f877bd0) I0129 08:31:46.125177 1 controllerserver.go:319] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2becc751-f9ec-402a-9995-deab3f877bd0) returned with failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2becc751-f9ec-402a-9995-deab3f877bd0) since it's in attaching or detaching state I0129 08:31:46.125239 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=3.9201e-05 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2becc751-f9ec-402a-9995-deab3f877bd0" 
result_code="failed_csi_driver_controller_delete_volume" E0129 08:31:46.125259 1 utils.go:82] GRPC error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2becc751-f9ec-402a-9995-deab3f877bd0) since it's in attaching or detaching state I0129 08:31:47.126245 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0129 08:31:47.126534 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2becc751-f9ec-402a-9995-deab3f877bd0"} I0129 08:31:47.126654 1 controllerserver.go:317] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2becc751-f9ec-402a-9995-deab3f877bd0) I0129 08:31:47.126673 1 controllerserver.go:319] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2becc751-f9ec-402a-9995-deab3f877bd0) returned with failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2becc751-f9ec-402a-9995-deab3f877bd0) since it's in attaching or detaching state I0129 08:31:47.126890 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=3.73e-05 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2becc751-f9ec-402a-9995-deab3f877bd0" result_code="failed_csi_driver_controller_delete_volume" E0129 08:31:47.126916 1 utils.go:82] GRPC error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2becc751-f9ec-402a-9995-deab3f877bd0) since it's in attaching or detaching state I0129 08:31:47.244830 1 azure_controller_vmss.go:252] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-2becc751-f9ec-402a-9995-deab3f877bd0:pvc-2becc751-f9ec-402a-9995-deab3f877bd0]) returned with <nil> I0129 08:31:47.245100 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:31:47.245130 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:31:47.245144 1 azure_controller_common.go:422] azureDisk - detach disk(pvc-2becc751-f9ec-402a-9995-deab3f877bd0, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2becc751-f9ec-402a-9995-deab3f877bd0) succeeded I0129 08:31:47.245160 1 controllerserver.go:480] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2becc751-f9ec-402a-9995-deab3f877bd0 from node k8s-agentpool-31899273-vmss000000 successfully I0129 08:31:47.245205 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=5.190423135 
request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2becc751-f9ec-402a-9995-deab3f877bd0" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" ... skipping 20 lines ... I0129 08:32:00.620591 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-31899273-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-14a956af-9f94-4582-9808-b928d2fa5f26","csi.storage.k8s.io/pvc/name":"pvc-wnrpv","csi.storage.k8s.io/pvc/namespace":"azuredisk-8591","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674979422640-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-14a956af-9f94-4582-9808-b928d2fa5f26"} I0129 08:32:00.652026 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0129 08:32:00.652380 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-14a956af-9f94-4582-9808-b928d2fa5f26. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-14a956af-9f94-4582-9808-b928d2fa5f26 to node k8s-agentpool-31899273-vmss000000 (vmState Succeeded). I0129 08:32:00.652423 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-14a956af-9f94-4582-9808-b928d2fa5f26 to node k8s-agentpool-31899273-vmss000000 I0129 08:32:00.652494 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-14a956af-9f94-4582-9808-b928d2fa5f26 lun 0 to node k8s-agentpool-31899273-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-14a956af-9f94-4582-9808-b928d2fa5f26:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-14a956af-9f94-4582-9808-b928d2fa5f26 false 0})] I0129 08:32:00.652654 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-14a956af-9f94-4582-9808-b928d2fa5f26:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-14a956af-9f94-4582-9808-b928d2fa5f26 false 0})]) I0129 08:32:00.828902 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-14a956af-9f94-4582-9808-b928d2fa5f26:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-14a956af-9f94-4582-9808-b928d2fa5f26 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:32:36.150522 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:32:36.150620 1 azure_vmss_cache.go:313] 
updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:32:36.150661 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-14a956af-9f94-4582-9808-b928d2fa5f26 attached to node k8s-agentpool-31899273-vmss000000. I0129 08:32:36.150746 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-14a956af-9f94-4582-9808-b928d2fa5f26 to node k8s-agentpool-31899273-vmss000000 successfully I0129 08:32:36.151135 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=35.498411092 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-14a956af-9f94-4582-9808-b928d2fa5f26" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:32:36.151195 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 34 lines ... I0129 08:33:29.030954 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-31899273-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-813436f2-04ba-43d8-b722-5100a0755ffe","csi.storage.k8s.io/pvc/name":"pvc-ql76p","csi.storage.k8s.io/pvc/namespace":"azuredisk-8591","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674979422640-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-813436f2-04ba-43d8-b722-5100a0755ffe"} I0129 08:33:29.059110 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1501 I0129 08:33:29.059632 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-813436f2-04ba-43d8-b722-5100a0755ffe. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-813436f2-04ba-43d8-b722-5100a0755ffe to node k8s-agentpool-31899273-vmss000000 (vmState Succeeded). 
I0129 08:33:29.059672 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-813436f2-04ba-43d8-b722-5100a0755ffe to node k8s-agentpool-31899273-vmss000000 I0129 08:33:29.059741 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-813436f2-04ba-43d8-b722-5100a0755ffe lun 0 to node k8s-agentpool-31899273-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-813436f2-04ba-43d8-b722-5100a0755ffe:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-813436f2-04ba-43d8-b722-5100a0755ffe false 0})] I0129 08:33:29.059897 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-813436f2-04ba-43d8-b722-5100a0755ffe:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-813436f2-04ba-43d8-b722-5100a0755ffe false 0})]) I0129 08:33:29.198582 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-813436f2-04ba-43d8-b722-5100a0755ffe:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-813436f2-04ba-43d8-b722-5100a0755ffe false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:33:39.319292 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:33:39.319348 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:33:39.319372 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-813436f2-04ba-43d8-b722-5100a0755ffe attached to node k8s-agentpool-31899273-vmss000000. I0129 08:33:39.319389 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-813436f2-04ba-43d8-b722-5100a0755ffe to node k8s-agentpool-31899273-vmss000000 successfully I0129 08:33:39.319438 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.259812615 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-813436f2-04ba-43d8-b722-5100a0755ffe" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:33:39.319465 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 57 lines ... 
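Each controller call above ends with an "Observed Request Latency" record carrying the elapsed seconds plus labels such as request, resource_group, subscription_id and result_code. The sketch below shows the general start-timer, observe-on-completion pattern behind lines like that; the metricContext helper is hypothetical and simply prints the observation instead of feeding a real metrics backend.

// Hypothetical sketch: start a timer when a controller request begins and, on
// completion, record elapsed seconds together with a request name and a
// result_code, echoing the "Observed Request Latency" lines in this log.
package main

import (
	"fmt"
	"time"
)

// metricContext carries the labels that accompany one latency observation.
type metricContext struct {
	request       string
	resourceGroup string
	start         time.Time
}

func newMetricContext(request, resourceGroup string) *metricContext {
	return &metricContext{request: request, resourceGroup: resourceGroup, start: time.Now()}
}

// observe emits the latency with result_code "succeeded" or a failure label.
func (m *metricContext) observe(err error) {
	result := "succeeded"
	if err != nil {
		result = "failed_" + m.request
	}
	fmt.Printf("Observed Request Latency latency_seconds=%.6f request=%q resource_group=%q result_code=%q\n",
		time.Since(m.start).Seconds(), m.request, m.resourceGroup, result)
}

func main() {
	mc := newMetricContext("azuredisk_csi_driver_controller_publish_volume", "example-rg")
	time.Sleep(5 * time.Millisecond) // stand-in for the attach work
	mc.observe(nil)
}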
I0129 08:36:00.446120 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 12 I0129 08:36:00.513463 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 24989 I0129 08:36:00.515755 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-8244529f-1d85-4a20-8211-574923e6ee84. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-8244529f-1d85-4a20-8211-574923e6ee84 to node k8s-agentpool-31899273-vmss000000 (vmState Succeeded). I0129 08:36:00.515790 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-8244529f-1d85-4a20-8211-574923e6ee84 to node k8s-agentpool-31899273-vmss000000 I0129 08:36:00.515932 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-8244529f-1d85-4a20-8211-574923e6ee84 lun 0 to node k8s-agentpool-31899273-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-8244529f-1d85-4a20-8211-574923e6ee84:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8244529f-1d85-4a20-8211-574923e6ee84 false 0})] I0129 08:36:00.515975 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-8244529f-1d85-4a20-8211-574923e6ee84:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8244529f-1d85-4a20-8211-574923e6ee84 false 0})]) I0129 08:36:00.735807 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-8244529f-1d85-4a20-8211-574923e6ee84:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8244529f-1d85-4a20-8211-574923e6ee84 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:36:10.870200 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:36:10.870261 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:36:10.870287 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-8244529f-1d85-4a20-8211-574923e6ee84 attached to node k8s-agentpool-31899273-vmss000000. 
I0129 08:36:10.870304 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-8244529f-1d85-4a20-8211-574923e6ee84 to node k8s-agentpool-31899273-vmss000000 successfully I0129 08:36:10.870350 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.449377637 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-8244529f-1d85-4a20-8211-574923e6ee84" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:36:10.870389 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 29 lines ... I0129 08:36:42.897399 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-31899273-vmss000001","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-2bab053f-3b75-4d85-8a63-02e0d262efb5","csi.storage.k8s.io/pvc/name":"pvc-jw6m7","csi.storage.k8s.io/pvc/namespace":"azuredisk-5894","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674979422640-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2bab053f-3b75-4d85-8a63-02e0d262efb5"} I0129 08:36:42.921149 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1501 I0129 08:36:42.921498 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-2bab053f-3b75-4d85-8a63-02e0d262efb5. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2bab053f-3b75-4d85-8a63-02e0d262efb5 to node k8s-agentpool-31899273-vmss000001 (vmState Succeeded). 
I0129 08:36:42.921540 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2bab053f-3b75-4d85-8a63-02e0d262efb5 to node k8s-agentpool-31899273-vmss000001 I0129 08:36:42.921580 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2bab053f-3b75-4d85-8a63-02e0d262efb5 lun 0 to node k8s-agentpool-31899273-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-2bab053f-3b75-4d85-8a63-02e0d262efb5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2bab053f-3b75-4d85-8a63-02e0d262efb5 false 0})] I0129 08:36:42.921623 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-2bab053f-3b75-4d85-8a63-02e0d262efb5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2bab053f-3b75-4d85-8a63-02e0d262efb5 false 0})]) I0129 08:36:43.118539 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-2bab053f-3b75-4d85-8a63-02e0d262efb5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2bab053f-3b75-4d85-8a63-02e0d262efb5 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:36:53.229998 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000001) successfully I0129 08:36:53.230042 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000001) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:36:53.230069 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2bab053f-3b75-4d85-8a63-02e0d262efb5 attached to node k8s-agentpool-31899273-vmss000001. I0129 08:36:53.230086 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2bab053f-3b75-4d85-8a63-02e0d262efb5 to node k8s-agentpool-31899273-vmss000001 successfully I0129 08:36:53.230135 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.308638678 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2bab053f-3b75-4d85-8a63-02e0d262efb5" node="k8s-agentpool-31899273-vmss000001" result_code="succeeded" I0129 08:36:53.230153 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 86 lines ... 
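After every attach or detach the log pairs DeleteCacheForNode with updateCache for the same VMSS node: the cached view of the VM is dropped and then refreshed so that later LUN lookups see the new disk list. A toy Go version of that invalidate-then-refresh cache follows, with illustrative types only.

// Hypothetical sketch of the cache maintenance hinted at by the paired
// "DeleteCacheForNode(...) successfully" / "updateCache(...) updated successfully"
// lines: drop the stale per-node entry after a VM mutation, then store a fresh one.
package main

import (
	"fmt"
	"sync"
)

// vmView is whatever the controller caches about a node (here, attached disks).
type vmView struct {
	disks []string
}

type nodeCache struct {
	mu      sync.Mutex
	entries map[string]vmView
}

func (c *nodeCache) deleteCacheForNode(node string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.entries, node)
}

func (c *nodeCache) updateCache(node string, view vmView) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.entries[node] = view
}

func (c *nodeCache) get(node string) (vmView, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	v, ok := c.entries[node]
	return v, ok
}

func main() {
	cache := &nodeCache{entries: map[string]vmView{}}
	cache.updateCache("vmss000000", vmView{disks: []string{"pvc-a"}})

	// An attach just finished: drop the stale entry, then store the new view.
	cache.deleteCacheForNode("vmss000000")
	cache.updateCache("vmss000000", vmView{disks: []string{"pvc-a", "pvc-b"}})

	v, _ := cache.get("vmss000000")
	fmt.Println(v.disks) // [pvc-a pvc-b]
}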
I0129 08:39:16.743896 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1235 I0129 08:39:16.744280 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-bb444b1b-8ea6-4ecd-8675-2e2a51c38767. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-bb444b1b-8ea6-4ecd-8675-2e2a51c38767 to node k8s-agentpool-31899273-vmss000000 (vmState Succeeded). I0129 08:39:16.744326 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-bb444b1b-8ea6-4ecd-8675-2e2a51c38767 to node k8s-agentpool-31899273-vmss000000 I0129 08:39:16.753132 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1220 I0129 08:39:16.753518 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-8fe04aa1-083b-470f-91cb-71d4a986d088. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-8fe04aa1-083b-470f-91cb-71d4a986d088 to node k8s-agentpool-31899273-vmss000000 (vmState Succeeded). I0129 08:39:16.753548 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-8fe04aa1-083b-470f-91cb-71d4a986d088 to node k8s-agentpool-31899273-vmss000000 I0129 08:39:17.668582 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-e4a5aa9e-1e63-46cf-a576-e62a38d13fd2:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e4a5aa9e-1e63-46cf-a576-e62a38d13fd2 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:39:27.764221 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:39:27.764260 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:39:27.764290 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-e4a5aa9e-1e63-46cf-a576-e62a38d13fd2 attached to node k8s-agentpool-31899273-vmss000000. 
I0129 08:39:27.764306 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-e4a5aa9e-1e63-46cf-a576-e62a38d13fd2 to node k8s-agentpool-31899273-vmss000000 successfully I0129 08:39:27.764347 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=11.024893803 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-e4a5aa9e-1e63-46cf-a576-e62a38d13fd2" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:39:27.764367 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 4 lines ... I0129 08:39:27.838604 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1446 I0129 08:39:27.838957 1 azure_controller_common.go:516] azureDisk - find disk: lun 0 name pvc-e4a5aa9e-1e63-46cf-a576-e62a38d13fd2 uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-e4a5aa9e-1e63-46cf-a576-e62a38d13fd2 I0129 08:39:27.839011 1 controllerserver.go:383] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-e4a5aa9e-1e63-46cf-a576-e62a38d13fd2 to node k8s-agentpool-31899273-vmss000000 (vmState Succeeded). I0129 08:39:27.839030 1 controllerserver.go:398] Attach operation is successful. volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-e4a5aa9e-1e63-46cf-a576-e62a38d13fd2 is already attached to node k8s-agentpool-31899273-vmss000000 at lun 0. 
I0129 08:39:27.839080 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=0.000124401 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-e4a5aa9e-1e63-46cf-a576-e62a38d13fd2" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:39:27.839104 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0129 08:39:27.926555 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-8fe04aa1-083b-470f-91cb-71d4a986d088:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8fe04aa1-083b-470f-91cb-71d4a986d088 false 2}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-bb444b1b-8ea6-4ecd-8675-2e2a51c38767:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-bb444b1b-8ea6-4ecd-8675-2e2a51c38767 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:39:38.024054 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:39:38.024094 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:39:38.024138 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-bb444b1b-8ea6-4ecd-8675-2e2a51c38767 attached to node k8s-agentpool-31899273-vmss000000. I0129 08:39:38.024155 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-bb444b1b-8ea6-4ecd-8675-2e2a51c38767 to node k8s-agentpool-31899273-vmss000000 successfully I0129 08:39:38.024204 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=21.279907086 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-bb444b1b-8ea6-4ecd-8675-2e2a51c38767" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:39:38.024231 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} ... skipping 87 lines ... 
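Note how the VMSS update above carries two disks (LUNs 1 and 2) in a single "attach disk list": concurrent ControllerPublishVolume calls for the same node are coalesced into one VM update instead of one Azure API call per disk. A rough sketch of that batching idea, with invented names and placeholder URIs, is:

```go
package main

import "fmt"

// attachDiskOptions loosely mirrors the AttachDiskOptions fields printed in the
// log: caching mode, disk name, write-accelerator flag and the chosen LUN.
type attachDiskOptions struct {
	cachingMode      string
	diskName         string
	writeAccelerator bool
	lun              int32
}

// attachBatch stands in for one VMSS VM update that attaches every queued disk at once.
func attachBatch(node string, diskMap map[string]*attachDiskOptions) {
	fmt.Printf("update(%s): attach disk list with %d disks in a single call\n", node, len(diskMap))
	for uri, opts := range diskMap {
		fmt.Printf("  %s -> lun %d (%s)\n", uri, opts.lun, opts.cachingMode)
	}
}

func main() {
	// Two pending publish requests for the same node end up in one diskMap,
	// analogous to the pair of PVC disks attached together above.
	diskMap := map[string]*attachDiskOptions{
		"/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/disks/disk-a": {cachingMode: "ReadOnly", diskName: "disk-a", lun: 1},
		"/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/disks/disk-b": {cachingMode: "ReadOnly", diskName: "disk-b", lun: 2},
	}
	attachBatch("k8s-agentpool-vmss000000", diskMap)
}
```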
I0129 08:41:30.164032 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-31899273-vmss000000","volume_capability":{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-2f46d314-0a53-4e3c-a73f-96fbd88ec493","csi.storage.k8s.io/pvc/name":"pvc-azuredisk-volume-tester-5gv87-0","csi.storage.k8s.io/pvc/namespace":"azuredisk-5710","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674979422640-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2f46d314-0a53-4e3c-a73f-96fbd88ec493"} I0129 08:41:30.188438 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1248 I0129 08:41:30.189405 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-2f46d314-0a53-4e3c-a73f-96fbd88ec493. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2f46d314-0a53-4e3c-a73f-96fbd88ec493 to node k8s-agentpool-31899273-vmss000000 (vmState Succeeded). I0129 08:41:30.189595 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2f46d314-0a53-4e3c-a73f-96fbd88ec493 to node k8s-agentpool-31899273-vmss000000 I0129 08:41:30.190089 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2f46d314-0a53-4e3c-a73f-96fbd88ec493 lun 0 to node k8s-agentpool-31899273-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-2f46d314-0a53-4e3c-a73f-96fbd88ec493:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2f46d314-0a53-4e3c-a73f-96fbd88ec493 false 0})] I0129 08:41:30.190142 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-2f46d314-0a53-4e3c-a73f-96fbd88ec493:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2f46d314-0a53-4e3c-a73f-96fbd88ec493 false 0})]) I0129 08:41:30.356589 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-2f46d314-0a53-4e3c-a73f-96fbd88ec493:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2f46d314-0a53-4e3c-a73f-96fbd88ec493 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:41:40.447509 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:41:40.447558 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:41:40.447583 1 controllerserver.go:413] Attach operation successful: volume 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2f46d314-0a53-4e3c-a73f-96fbd88ec493 attached to node k8s-agentpool-31899273-vmss000000. I0129 08:41:40.447612 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2f46d314-0a53-4e3c-a73f-96fbd88ec493 to node k8s-agentpool-31899273-vmss000000 successfully I0129 08:41:40.447660 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.258318328 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2f46d314-0a53-4e3c-a73f-96fbd88ec493" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:41:40.447688 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 63 lines ... I0129 08:44:22.784250 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-31899273-vmss000000","volume_capability":{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-2f46d314-0a53-4e3c-a73f-96fbd88ec493","csi.storage.k8s.io/pvc/name":"pvc-azuredisk-volume-tester-5gv87-0","csi.storage.k8s.io/pvc/namespace":"azuredisk-5710","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674979422640-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2f46d314-0a53-4e3c-a73f-96fbd88ec493"} I0129 08:44:22.845969 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1248 I0129 08:44:22.846590 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-2f46d314-0a53-4e3c-a73f-96fbd88ec493. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2f46d314-0a53-4e3c-a73f-96fbd88ec493 to node k8s-agentpool-31899273-vmss000000 (vmState Succeeded). 
I0129 08:44:22.846627 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2f46d314-0a53-4e3c-a73f-96fbd88ec493 to node k8s-agentpool-31899273-vmss000000 I0129 08:44:22.846707 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2f46d314-0a53-4e3c-a73f-96fbd88ec493 lun 0 to node k8s-agentpool-31899273-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-2f46d314-0a53-4e3c-a73f-96fbd88ec493:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2f46d314-0a53-4e3c-a73f-96fbd88ec493 false 0})] I0129 08:44:22.846959 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-2f46d314-0a53-4e3c-a73f-96fbd88ec493:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2f46d314-0a53-4e3c-a73f-96fbd88ec493 false 0})]) I0129 08:44:23.041854 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-2f46d314-0a53-4e3c-a73f-96fbd88ec493:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2f46d314-0a53-4e3c-a73f-96fbd88ec493 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:44:33.131431 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:44:33.131480 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:44:33.131504 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2f46d314-0a53-4e3c-a73f-96fbd88ec493 attached to node k8s-agentpool-31899273-vmss000000. I0129 08:44:33.131522 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2f46d314-0a53-4e3c-a73f-96fbd88ec493 to node k8s-agentpool-31899273-vmss000000 successfully I0129 08:44:33.131570 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.284978388 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2f46d314-0a53-4e3c-a73f-96fbd88ec493" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:44:33.131613 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 11 lines ... 
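Each completed publish call above also emits an "Observed Request Latency" sample with a result_code label ("succeeded" or "failed_..."). A minimal, self-contained sketch of recording such a latency histogram with prometheus/client_golang follows; the metric and label names are assumptions for illustration, not the driver's actual metric definitions.

```go
package main

import (
	"fmt"
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// apiLatency is an illustrative histogram similar in spirit to the
// "Observed Request Latency" samples in the log.
var apiLatency = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name:    "example_azuredisk_op_duration_seconds",
		Help:    "Latency of controller operations in seconds (illustrative).",
		Buckets: prometheus.DefBuckets,
	},
	[]string{"request", "resource_group", "result_code"},
)

func main() {
	prometheus.MustRegister(apiLatency)

	start := time.Now()
	time.Sleep(25 * time.Millisecond) // pretend to perform an attach call

	// result_code distinguishes "succeeded" from "failed_..." outcomes,
	// the two values visible in the log lines above.
	apiLatency.WithLabelValues(
		"azuredisk_csi_driver_controller_publish_volume",
		"kubetest-biyqdrb7",
		"succeeded",
	).Observe(time.Since(start).Seconds())

	fmt.Println("observed latency:", time.Since(start))
}
```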
I0129 08:44:59.131228 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-31899273-vmss000001","volume_capability":{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-0f62937a-357a-4a3d-880e-45ae5a99085a","csi.storage.k8s.io/pvc/name":"pvc-rvtw6","csi.storage.k8s.io/pvc/namespace":"azuredisk-9183","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674979422640-8081-disk.csi.azure.com","tags":"disk=test"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-0f62937a-357a-4a3d-880e-45ae5a99085a"} I0129 08:44:59.174027 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1214 I0129 08:44:59.174370 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-0f62937a-357a-4a3d-880e-45ae5a99085a. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-0f62937a-357a-4a3d-880e-45ae5a99085a to node k8s-agentpool-31899273-vmss000001 (vmState Succeeded). I0129 08:44:59.174399 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-0f62937a-357a-4a3d-880e-45ae5a99085a to node k8s-agentpool-31899273-vmss000001 I0129 08:44:59.174454 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-0f62937a-357a-4a3d-880e-45ae5a99085a lun 0 to node k8s-agentpool-31899273-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-0f62937a-357a-4a3d-880e-45ae5a99085a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0f62937a-357a-4a3d-880e-45ae5a99085a false 0})] I0129 08:44:59.174496 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-0f62937a-357a-4a3d-880e-45ae5a99085a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0f62937a-357a-4a3d-880e-45ae5a99085a false 0})]) I0129 08:44:59.361226 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-0f62937a-357a-4a3d-880e-45ae5a99085a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0f62937a-357a-4a3d-880e-45ae5a99085a false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:45:09.511971 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000001) successfully I0129 08:45:09.512010 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000001) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:45:09.512052 1 controllerserver.go:413] Attach operation successful: volume 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-0f62937a-357a-4a3d-880e-45ae5a99085a attached to node k8s-agentpool-31899273-vmss000001. I0129 08:45:09.512070 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-0f62937a-357a-4a3d-880e-45ae5a99085a to node k8s-agentpool-31899273-vmss000001 successfully I0129 08:45:09.512526 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.337769871 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-0f62937a-357a-4a3d-880e-45ae5a99085a" node="k8s-agentpool-31899273-vmss000001" result_code="succeeded" I0129 08:45:09.512610 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 54 lines ... I0129 08:46:41.476819 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0129 08:46:41.547052 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 24989 I0129 08:46:41.550557 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-27d45333-8d49-4fe3-a4b7-8ee816b346c1. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-27d45333-8d49-4fe3-a4b7-8ee816b346c1 to node k8s-agentpool-31899273-vmss000000 (vmState Succeeded). I0129 08:46:41.550598 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-27d45333-8d49-4fe3-a4b7-8ee816b346c1 to node k8s-agentpool-31899273-vmss000000 I0129 08:46:41.550643 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-27d45333-8d49-4fe3-a4b7-8ee816b346c1 lun 0 to node k8s-agentpool-31899273-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-27d45333-8d49-4fe3-a4b7-8ee816b346c1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-27d45333-8d49-4fe3-a4b7-8ee816b346c1 false 0})] I0129 08:46:41.550747 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-27d45333-8d49-4fe3-a4b7-8ee816b346c1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-27d45333-8d49-4fe3-a4b7-8ee816b346c1 false 0})]) I0129 08:46:41.750451 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-27d45333-8d49-4fe3-a4b7-8ee816b346c1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-27d45333-8d49-4fe3-a4b7-8ee816b346c1 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:46:51.883899 1 azure_vmss_cache.go:275] 
DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:46:51.883940 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:46:51.883964 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-27d45333-8d49-4fe3-a4b7-8ee816b346c1 attached to node k8s-agentpool-31899273-vmss000000. I0129 08:46:51.883981 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-27d45333-8d49-4fe3-a4b7-8ee816b346c1 to node k8s-agentpool-31899273-vmss000000 successfully I0129 08:46:51.884027 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.406626957 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-27d45333-8d49-4fe3-a4b7-8ee816b346c1" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:46:51.884048 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 44 lines ... I0129 08:48:13.492600 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-31899273-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-eca57feb-96d7-43a4-869d-0795556feba1","csi.storage.k8s.io/pvc/name":"pvc-azuredisk-volume-tester-8xjzt-0","csi.storage.k8s.io/pvc/namespace":"azuredisk-1968","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674979422640-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-eca57feb-96d7-43a4-869d-0795556feba1"} I0129 08:48:13.516261 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1248 I0129 08:48:13.516621 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-eca57feb-96d7-43a4-869d-0795556feba1. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-eca57feb-96d7-43a4-869d-0795556feba1 to node k8s-agentpool-31899273-vmss000000 (vmState Succeeded). 
I0129 08:48:13.516755 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-eca57feb-96d7-43a4-869d-0795556feba1 to node k8s-agentpool-31899273-vmss000000 I0129 08:48:13.516884 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-eca57feb-96d7-43a4-869d-0795556feba1 lun 0 to node k8s-agentpool-31899273-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-eca57feb-96d7-43a4-869d-0795556feba1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-eca57feb-96d7-43a4-869d-0795556feba1 false 0})] I0129 08:48:13.517022 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-eca57feb-96d7-43a4-869d-0795556feba1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-eca57feb-96d7-43a4-869d-0795556feba1 false 0})]) I0129 08:48:13.684489 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-eca57feb-96d7-43a4-869d-0795556feba1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-eca57feb-96d7-43a4-869d-0795556feba1 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:48:23.791196 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:48:23.791567 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:48:23.791759 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-eca57feb-96d7-43a4-869d-0795556feba1 attached to node k8s-agentpool-31899273-vmss000000. I0129 08:48:23.791940 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-eca57feb-96d7-43a4-869d-0795556feba1 to node k8s-agentpool-31899273-vmss000000 successfully I0129 08:48:23.792273 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.275638853 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-eca57feb-96d7-43a4-869d-0795556feba1" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:48:23.792317 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 19 lines ... 
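The GRPC request bodies logged above carry the node ID, the volume ID, and a volume_capability (access mode plus optional mount_flags such as barrier=1 and acl). Purely for illustration, the sketch below parses a trimmed copy of one such logged JSON payload; the struct shapes are assumptions matching the log text, not the CSI Go types.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// publishRequest mirrors only the fields of the logged JSON that this sketch needs.
type publishRequest struct {
	NodeID           string `json:"node_id"`
	VolumeID         string `json:"volume_id"`
	VolumeCapability struct {
		AccessType struct {
			Mount *struct {
				MountFlags []string `json:"mount_flags"`
			} `json:"Mount"`
			Block *struct{} `json:"Block"`
		} `json:"AccessType"`
		AccessMode struct {
			Mode int `json:"mode"`
		} `json:"access_mode"`
	} `json:"volume_capability"`
}

func main() {
	// A trimmed, placeholder copy of one request body from the log above.
	payload := `{"node_id":"k8s-agentpool-31899273-vmss000000",
	  "volume_capability":{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":7}},
	  "volume_id":"/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/disks/pvc-example"}`

	var req publishRequest
	if err := json.Unmarshal([]byte(payload), &req); err != nil {
		log.Fatal(err)
	}
	fmt.Println("node:", req.NodeID)
	fmt.Println("volume:", req.VolumeID)
	fmt.Println("access mode:", req.VolumeCapability.AccessMode.Mode)
	if m := req.VolumeCapability.AccessType.Mount; m != nil {
		fmt.Println("mount flags:", m.MountFlags)
	}
}
```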
I0129 08:49:43.416359 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-31899273-vmss000001","volume_capability":{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-2279c771-bea4-41e1-abf3-816cfd691960","csi.storage.k8s.io/pvc/name":"pvc-vmgx6","csi.storage.k8s.io/pvc/namespace":"azuredisk-6720","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674979422640-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2279c771-bea4-41e1-abf3-816cfd691960"} I0129 08:49:43.438143 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0129 08:49:43.438490 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-2279c771-bea4-41e1-abf3-816cfd691960. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2279c771-bea4-41e1-abf3-816cfd691960 to node k8s-agentpool-31899273-vmss000001 (vmState Succeeded). I0129 08:49:43.438526 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2279c771-bea4-41e1-abf3-816cfd691960 to node k8s-agentpool-31899273-vmss000001 I0129 08:49:43.438566 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2279c771-bea4-41e1-abf3-816cfd691960 lun 0 to node k8s-agentpool-31899273-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-2279c771-bea4-41e1-abf3-816cfd691960:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2279c771-bea4-41e1-abf3-816cfd691960 false 0})] I0129 08:49:43.438612 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-2279c771-bea4-41e1-abf3-816cfd691960:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2279c771-bea4-41e1-abf3-816cfd691960 false 0})]) I0129 08:49:43.763975 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-2279c771-bea4-41e1-abf3-816cfd691960:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2279c771-bea4-41e1-abf3-816cfd691960 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:49:53.871687 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000001) successfully I0129 08:49:53.871746 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000001) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:49:53.871770 1 controllerserver.go:413] Attach operation successful: volume 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2279c771-bea4-41e1-abf3-816cfd691960 attached to node k8s-agentpool-31899273-vmss000001. I0129 08:49:53.871841 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2279c771-bea4-41e1-abf3-816cfd691960 to node k8s-agentpool-31899273-vmss000001 successfully I0129 08:49:53.871981 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.433409319 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-2279c771-bea4-41e1-abf3-816cfd691960" node="k8s-agentpool-31899273-vmss000001" result_code="succeeded" I0129 08:49:53.872001 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 55 lines ... I0129 08:51:45.311954 1 azure_vmss_cache.go:327] refresh the cache of NonVmssUniformNodesCache in rg map[kubetest-biyqdrb7:{}] I0129 08:51:45.357675 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 12 I0129 08:51:45.357804 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00 to node k8s-agentpool-31899273-vmss000000 (vmState Succeeded). I0129 08:51:45.357863 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00 to node k8s-agentpool-31899273-vmss000000 I0129 08:51:45.357995 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00 lun 0 to node k8s-agentpool-31899273-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00:%!s(*provider.AttachDiskOptions=&{None pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00 false 0})] I0129 08:51:45.358094 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00:%!s(*provider.AttachDiskOptions=&{None pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00 false 0})]) I0129 08:51:45.512016 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00:%!s(*provider.AttachDiskOptions=&{None pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:51:45.556617 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0129 
08:51:45.556646 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-31899273-vmss000001","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":5}},"volume_context":{"cachingmode":"None","csi.storage.k8s.io/pv/name":"pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00","csi.storage.k8s.io/pvc/name":"pvc-wmcjz","csi.storage.k8s.io/pvc/namespace":"azuredisk-6829","maxshares":"2","requestedsizegib":"10","skuname":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674979422640-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00"} I0129 08:51:45.599995 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1214 I0129 08:51:45.600440 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00 to node k8s-agentpool-31899273-vmss000001 (vmState Succeeded). I0129 08:51:45.601000 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00 to node k8s-agentpool-31899273-vmss000001 I0129 08:51:45.601091 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00 lun 0 to node k8s-agentpool-31899273-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00:%!s(*provider.AttachDiskOptions=&{None pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00 false 0})] I0129 08:51:45.601186 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00:%!s(*provider.AttachDiskOptions=&{None pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00 false 0})]) I0129 08:51:45.769081 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00:%!s(*provider.AttachDiskOptions=&{None pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:51:55.639210 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:51:55.639245 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:51:55.639267 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00 
attached to node k8s-agentpool-31899273-vmss000000. I0129 08:51:55.639283 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00 to node k8s-agentpool-31899273-vmss000000 successfully I0129 08:51:55.639329 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.327350681 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:51:55.639347 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0129 08:51:55.824617 1 azure_armclient.go:291] Received error in WaitForAsyncOperationCompletion: 'Code="OperationNotAllowed" Message="Resource is being used by another operation." Target="1"' I0129 08:51:55.824706 1 azure_vmssvmclient.go:313] Received error in WaitForAsyncOperationResult: 'Code="OperationNotAllowed" Message="Resource is being used by another operation." Target="1"', no response I0129 08:51:55.824767 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000001) successfully E0129 08:51:55.824803 1 controllerserver.go:429] Attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00 to instance k8s-agentpool-31899273-vmss000001 failed with Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: Code="OperationNotAllowed" Message="Resource is being used by another operation." Target="1" I0129 08:51:55.824857 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.224414837 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00" node="k8s-agentpool-31899273-vmss000001" result_code="failed_csi_driver_controller_publish_volume" E0129 08:51:55.824881 1 utils.go:82] GRPC error: rpc error: code = Internal desc = Attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00 to instance k8s-agentpool-31899273-vmss000001 failed with Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: Code="OperationNotAllowed" Message="Resource is being used by another operation." 
Target="1" I0129 08:51:55.833234 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0129 08:51:55.833309 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-31899273-vmss000001","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":5}},"volume_context":{"cachingmode":"None","csi.storage.k8s.io/pv/name":"pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00","csi.storage.k8s.io/pvc/name":"pvc-wmcjz","csi.storage.k8s.io/pvc/namespace":"azuredisk-6829","maxshares":"2","requestedsizegib":"10","skuname":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674979422640-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00"} I0129 08:51:55.885552 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1692 I0129 08:51:55.886062 1 azure_vmss_cache.go:400] Node k8s-agentpool-31899273-vmss000001 has joined the cluster since the last VM cache refresh in NonVmssUniformNodesEntry, refreshing the cache I0129 08:51:55.886103 1 azure_vmss_cache.go:327] refresh the cache of NonVmssUniformNodesCache in rg map[kubetest-biyqdrb7:{}] I0129 08:51:55.909292 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 12 I0129 08:51:55.909357 1 azure_vmss.go:231] Couldn't find VMSS VM with nodeName k8s-agentpool-31899273-vmss000001, refreshing the cache(vmss: k8s-agentpool-31899273-vmss, rg: kubetest-biyqdrb7) I0129 08:51:56.017574 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 34407 I0129 08:51:56.024010 1 azure_controller_common.go:516] azureDisk - find disk: lun 0 name pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00 uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00 I0129 08:51:56.032173 1 controllerserver.go:383] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00 to node k8s-agentpool-31899273-vmss000001 (vmState Failed). W0129 08:51:56.032207 1 controllerserver.go:392] VM(k8s-agentpool-31899273-vmss000001) is in failed state, update VM first I0129 08:51:56.032260 1 azure_controller_common.go:440] azureDisk - update: vm(k8s-agentpool-31899273-vmss000001) I0129 08:52:06.822264 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000001) successfully I0129 08:52:06.822299 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000001) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:52:06.822316 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000001) successfully I0129 08:52:06.822330 1 controllerserver.go:398] Attach operation is successful. volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00 is already attached to node k8s-agentpool-31899273-vmss000001 at lun 0. 
I0129 08:52:06.822382 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.936302065 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00" node="k8s-agentpool-31899273-vmss000001" result_code="succeeded" ... skipping 63 lines ... I0129 08:53:36.339378 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-31899273-vmss000000","volume_capability":{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-5477c32c-f483-4a84-a933-ea729e4e75d3","csi.storage.k8s.io/pvc/name":"pvc-69vhq","csi.storage.k8s.io/pvc/namespace":"azuredisk-6629","device-setting/device/queue_depth":"17","device-setting/queue/max_sectors_kb":"211","device-setting/queue/nr_requests":"44","device-setting/queue/read_ahead_kb":"256","device-setting/queue/rotational":"0","device-setting/queue/scheduler":"none","device-setting/queue/wbt_lat_usec":"0","perfProfile":"advanced","requestedsizegib":"10","skuname":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674979422640-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-5477c32c-f483-4a84-a933-ea729e4e75d3"} I0129 08:53:36.382865 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1222 I0129 08:53:36.383239 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-5477c32c-f483-4a84-a933-ea729e4e75d3. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-5477c32c-f483-4a84-a933-ea729e4e75d3 to node k8s-agentpool-31899273-vmss000000 (vmState Succeeded). 
I0129 08:53:36.383277 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-5477c32c-f483-4a84-a933-ea729e4e75d3 to node k8s-agentpool-31899273-vmss000000 I0129 08:53:36.383319 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-5477c32c-f483-4a84-a933-ea729e4e75d3 lun 0 to node k8s-agentpool-31899273-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-5477c32c-f483-4a84-a933-ea729e4e75d3:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5477c32c-f483-4a84-a933-ea729e4e75d3 false 0})] I0129 08:53:36.383387 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-5477c32c-f483-4a84-a933-ea729e4e75d3:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5477c32c-f483-4a84-a933-ea729e4e75d3 false 0})]) I0129 08:53:36.524646 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-5477c32c-f483-4a84-a933-ea729e4e75d3:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5477c32c-f483-4a84-a933-ea729e4e75d3 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:53:46.674653 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:53:46.674722 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:53:46.674762 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-5477c32c-f483-4a84-a933-ea729e4e75d3 attached to node k8s-agentpool-31899273-vmss000000. I0129 08:53:46.674776 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-5477c32c-f483-4a84-a933-ea729e4e75d3 to node k8s-agentpool-31899273-vmss000000 successfully I0129 08:53:46.674856 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.291576018 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-5477c32c-f483-4a84-a933-ea729e4e75d3" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:53:46.674875 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 31 lines ... 
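The perfProfile "advanced" request above carries device-setting/queue/... parameters (scheduler, nr_requests, read_ahead_kb, and so on) that the node plugin applies to the attached block device. A hedged sketch of applying such queue settings via sysfs writes is shown below; it uses a temporary directory as a stand-in for /sys/block so it is runnable anywhere, and the helper name is invented.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// applyQueueSettings writes block-device tunables under <sysBlockDir>/<dev>/queue/...,
// which is roughly what the "device-setting/queue/..." parameters in the request
// describe. Against a real /sys/block this needs root privileges and care.
func applyQueueSettings(sysBlockDir, device string, settings map[string]string) error {
	for name, value := range settings {
		p := filepath.Join(sysBlockDir, device, "queue", name)
		if err := os.WriteFile(p, []byte(value), 0o644); err != nil {
			return fmt.Errorf("write %s=%s: %w", p, value, err)
		}
		fmt.Printf("set %s = %s\n", p, value)
	}
	return nil
}

func main() {
	dir, err := os.MkdirTemp("", "sysblock")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)
	if err := os.MkdirAll(filepath.Join(dir, "sdc", "queue"), 0o755); err != nil {
		panic(err)
	}

	// The same keys and values that appear in the volume_context above.
	settings := map[string]string{
		"scheduler":      "none",
		"nr_requests":    "44",
		"read_ahead_kb":  "256",
		"max_sectors_kb": "211",
		"rotational":     "0",
	}
	if err := applyQueueSettings(dir, "sdc", settings); err != nil {
		panic(err)
	}
}
```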
I0129 08:54:43.814675 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-31899273-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-1e3552dd-4904-4e3f-9cd5-34796bb2befe","csi.storage.k8s.io/pvc/name":"pvc-azuredisk","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674979422640-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-1e3552dd-4904-4e3f-9cd5-34796bb2befe"} I0129 08:54:43.837979 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1219 I0129 08:54:43.838433 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-1e3552dd-4904-4e3f-9cd5-34796bb2befe. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-1e3552dd-4904-4e3f-9cd5-34796bb2befe to node k8s-agentpool-31899273-vmss000000 (vmState Succeeded). I0129 08:54:43.838468 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-1e3552dd-4904-4e3f-9cd5-34796bb2befe to node k8s-agentpool-31899273-vmss000000 I0129 08:54:43.838505 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-1e3552dd-4904-4e3f-9cd5-34796bb2befe lun 0 to node k8s-agentpool-31899273-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-1e3552dd-4904-4e3f-9cd5-34796bb2befe:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-1e3552dd-4904-4e3f-9cd5-34796bb2befe false 0})] I0129 08:54:43.838550 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-1e3552dd-4904-4e3f-9cd5-34796bb2befe:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-1e3552dd-4904-4e3f-9cd5-34796bb2befe false 0})]) I0129 08:54:43.982807 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-1e3552dd-4904-4e3f-9cd5-34796bb2befe:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-1e3552dd-4904-4e3f-9cd5-34796bb2befe false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:54:54.082662 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:54:54.082706 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:54:54.082727 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-1e3552dd-4904-4e3f-9cd5-34796bb2befe attached to node 
k8s-agentpool-31899273-vmss000000. I0129 08:54:54.082773 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-1e3552dd-4904-4e3f-9cd5-34796bb2befe to node k8s-agentpool-31899273-vmss000000 successfully I0129 08:54:54.082899 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.244407029 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-1e3552dd-4904-4e3f-9cd5-34796bb2befe" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:54:54.082917 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 10 lines ... I0129 08:55:08.903750 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-31899273-vmss000001","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-cc42e71c-cb97-4bc1-8a6a-39670d08b4ac","csi.storage.k8s.io/pvc/name":"persistent-storage-statefulset-azuredisk-0","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674979422640-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-cc42e71c-cb97-4bc1-8a6a-39670d08b4ac"} I0129 08:55:08.948977 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1248 I0129 08:55:08.949471 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-cc42e71c-cb97-4bc1-8a6a-39670d08b4ac. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-cc42e71c-cb97-4bc1-8a6a-39670d08b4ac to node k8s-agentpool-31899273-vmss000001 (vmState Succeeded). 
I0129 08:55:08.949504 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-cc42e71c-cb97-4bc1-8a6a-39670d08b4ac to node k8s-agentpool-31899273-vmss000001 I0129 08:55:08.949541 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-cc42e71c-cb97-4bc1-8a6a-39670d08b4ac lun 0 to node k8s-agentpool-31899273-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-cc42e71c-cb97-4bc1-8a6a-39670d08b4ac:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-cc42e71c-cb97-4bc1-8a6a-39670d08b4ac false 0})] I0129 08:55:08.949586 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-cc42e71c-cb97-4bc1-8a6a-39670d08b4ac:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-cc42e71c-cb97-4bc1-8a6a-39670d08b4ac false 0})]) I0129 08:55:09.303069 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-cc42e71c-cb97-4bc1-8a6a-39670d08b4ac:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-cc42e71c-cb97-4bc1-8a6a-39670d08b4ac false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:55:19.480190 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000001) successfully I0129 08:55:19.480255 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000001) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:55:19.480280 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-cc42e71c-cb97-4bc1-8a6a-39670d08b4ac attached to node k8s-agentpool-31899273-vmss000001. I0129 08:55:19.480335 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-cc42e71c-cb97-4bc1-8a6a-39670d08b4ac to node k8s-agentpool-31899273-vmss000001 successfully I0129 08:55:19.480469 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.530979412 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-cc42e71c-cb97-4bc1-8a6a-39670d08b4ac" node="k8s-agentpool-31899273-vmss000001" result_code="succeeded" I0129 08:55:19.480497 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 18 lines ... 
I0129 08:55:37.019442 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-31899273-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-ced9f329-685f-4e99-bd08-af4a841a7dbf","csi.storage.k8s.io/pvc/name":"persistent-storage-statefulset-azuredisk-nonroot-0","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674979422640-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-ced9f329-685f-4e99-bd08-af4a841a7dbf"} I0129 08:55:37.050562 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1256 I0129 08:55:37.051004 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-ced9f329-685f-4e99-bd08-af4a841a7dbf. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-ced9f329-685f-4e99-bd08-af4a841a7dbf to node k8s-agentpool-31899273-vmss000000 (vmState Succeeded). I0129 08:55:37.051104 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-ced9f329-685f-4e99-bd08-af4a841a7dbf to node k8s-agentpool-31899273-vmss000000 I0129 08:55:37.051184 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-ced9f329-685f-4e99-bd08-af4a841a7dbf lun 1 to node k8s-agentpool-31899273-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-ced9f329-685f-4e99-bd08-af4a841a7dbf:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ced9f329-685f-4e99-bd08-af4a841a7dbf false 1})] I0129 08:55:37.051349 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-ced9f329-685f-4e99-bd08-af4a841a7dbf:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ced9f329-685f-4e99-bd08-af4a841a7dbf false 1})]) I0129 08:55:37.277036 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-biyqdrb7): vm(k8s-agentpool-31899273-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-biyqdrb7/providers/microsoft.compute/disks/pvc-ced9f329-685f-4e99-bd08-af4a841a7dbf:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ced9f329-685f-4e99-bd08-af4a841a7dbf false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 08:55:47.377036 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-biyqdrb7, k8s-agentpool-31899273-vmss, k8s-agentpool-31899273-vmss000000) successfully I0129 08:55:47.377077 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-31899273-vmss, kubetest-biyqdrb7, k8s-agentpool-31899273-vmss000000) for cacheKey(kubetest-biyqdrb7/k8s-agentpool-31899273-vmss) updated successfully I0129 08:55:47.377100 1 controllerserver.go:413] Attach operation successful: volume 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-ced9f329-685f-4e99-bd08-af4a841a7dbf attached to node k8s-agentpool-31899273-vmss000000. I0129 08:55:47.377116 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-ced9f329-685f-4e99-bd08-af4a841a7dbf to node k8s-agentpool-31899273-vmss000000 successfully I0129 08:55:47.377162 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.326188101 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-biyqdrb7" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-ced9f329-685f-4e99-bd08-af4a841a7dbf" node="k8s-agentpool-31899273-vmss000000" result_code="succeeded" I0129 08:55:47.377181 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} ... skipping 20 lines ... Platform: linux/amd64 Topology Key: topology.disk.csi.azure.com/zone Streaming logs below: I0129 08:03:43.315434 1 azuredisk.go:175] driver userAgent: disk.csi.azure.com/v1.27.0-93a210d06a3c2f7f14a5b7d030e85f0e0d566e72 e2e-test I0129 08:03:43.316012 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider I0129 08:03:43.359374 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0129 08:03:43.359403 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider I0129 08:03:43.359413 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0129 08:03:43.359445 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully I0129 08:03:43.360266 1 azure_auth.go:253] Using AzurePublicCloud environment I0129 08:03:43.360326 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token I0129 08:03:43.360368 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 ... skipping 25 lines ... I0129 08:03:43.360778 1 azure_blobclient.go:67] Azure BlobClient using API version: 2021-09-01 I0129 08:03:43.360809 1 azure_vmasclient.go:70] Azure AvailabilitySetsClient (read ops) using rate limit config: QPS=6, bucket=20 I0129 08:03:43.360818 1 azure_vmasclient.go:73] Azure AvailabilitySetsClient (write ops) using rate limit config: QPS=100, bucket=1000 I0129 08:03:43.360900 1 azure.go:1007] attach/detach disk operation rate limit QPS: 6.000000, Bucket: 10 I0129 08:03:43.360925 1 azuredisk.go:193] disable UseInstanceMetadata for controller I0129 08:03:43.360935 1 azuredisk.go:205] cloud: AzurePublicCloud, location: westus2, rg: kubetest-biyqdrb7, VMType: vmss, PrimaryScaleSetName: k8s-agentpool-31899273-vmss, PrimaryAvailabilitySetName: , DisableAvailabilitySetNodes: false I0129 08:03:43.364760 1 mount_linux.go:287] 'umount /tmp/kubelet-detect-safe-umount1549113955' failed with: exit status 32, output: umount: /tmp/kubelet-detect-safe-umount1549113955: must be superuser to unmount. 
I0129 08:03:43.364788 1 mount_linux.go:289] Detected umount with unsafe 'not mounted' behavior I0129 08:03:43.364842 1 driver.go:81] Enabling controller service capability: CREATE_DELETE_VOLUME I0129 08:03:43.364850 1 driver.go:81] Enabling controller service capability: PUBLISH_UNPUBLISH_VOLUME I0129 08:03:43.364855 1 driver.go:81] Enabling controller service capability: CREATE_DELETE_SNAPSHOT I0129 08:03:43.364859 1 driver.go:81] Enabling controller service capability: CLONE_VOLUME I0129 08:03:43.364863 1 driver.go:81] Enabling controller service capability: EXPAND_VOLUME ... skipping 62 lines ... Platform: linux/amd64 Topology Key: topology.disk.csi.azure.com/zone Streaming logs below: I0129 08:03:39.494716 1 azuredisk.go:175] driver userAgent: disk.csi.azure.com/v1.27.0-93a210d06a3c2f7f14a5b7d030e85f0e0d566e72 e2e-test I0129 08:03:39.495483 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider I0129 08:03:39.526290 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0129 08:03:39.526335 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider I0129 08:03:39.526346 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0129 08:03:39.526377 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully I0129 08:03:39.527331 1 azure_auth.go:253] Using AzurePublicCloud environment I0129 08:03:39.527391 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token I0129 08:03:39.527428 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 ... skipping 147 lines ... I0129 08:08:16.023693 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0129 08:08:16.039219 1 mount_linux.go:570] Output: "" I0129 08:08:16.039244 1 mount_linux.go:529] Disk "/dev/disk/azure/scsi1/lun0" appears to be unformatted, attempting to format as type: "ext4" with options: [-F -m0 /dev/disk/azure/scsi1/lun0] I0129 08:08:16.549977 1 mount_linux.go:539] Disk successfully formatted (mkfs): ext4 - /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc/globalmount I0129 08:08:16.550135 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc/globalmount I0129 08:08:16.550329 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc/globalmount) E0129 08:08:16.574370 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. 
E0129 08:08:16.574554 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. I0129 08:08:17.194829 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0129 08:08:17.194857 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc","csi.storage.k8s.io/pvc/name":"pvc-9n7bq","csi.storage.k8s.io/pvc/namespace":"azuredisk-5466","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674979422640-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc"} I0129 08:08:19.209435 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0129 08:08:19.209483 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. 
perfProfile none accountType StandardSSD_ZRS I0129 08:08:19.209965 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc/globalmount with mount options([invalid mount options]) I0129 08:08:19.210004 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0129 08:08:19.224990 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=ext4\n" I0129 08:08:19.225034 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0129 08:08:19.255118 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc/globalmount I0129 08:08:19.255160 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc/globalmount) E0129 08:08:19.275664 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. E0129 08:08:19.275714 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. 
I0129 08:08:20.311440 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0129 08:08:20.311467 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc","csi.storage.k8s.io/pvc/name":"pvc-9n7bq","csi.storage.k8s.io/pvc/namespace":"azuredisk-5466","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674979422640-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc"} I0129 08:08:22.145460 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0129 08:08:22.145512 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType StandardSSD_ZRS I0129 08:08:22.145900 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc/globalmount with mount options([invalid mount options]) I0129 08:08:22.145929 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0129 08:08:22.156571 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=ext4\n" I0129 08:08:22.156601 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0129 08:08:22.171718 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc/globalmount I0129 08:08:22.171756 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc/globalmount) E0129 08:08:22.192286 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. 
E0129 08:08:22.192337 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-264edc7b-cc71-4330-82c3-f09b55fbbfbc/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. I0129 08:09:18.457526 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0129 08:09:18.457560 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-7f7e7c91-2464-4683-912f-a853cfcbcde5","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-7f7e7c91-2464-4683-912f-a853cfcbcde5","csi.storage.k8s.io/pvc/name":"pvc-xt8wp","csi.storage.k8s.io/pvc/namespace":"azuredisk-2790","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674979422640-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-7f7e7c91-2464-4683-912f-a853cfcbcde5"} I0129 08:09:20.285328 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0129 08:09:20.285377 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType StandardSSD_ZRS I0129 08:09:20.285395 1 utils.go:84] GRPC response: {} I0129 08:09:20.293001 1 utils.go:77] GRPC call: /csi.v1.Node/NodePublishVolume ... skipping 16 lines ... 
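Editor's note: the repeated failures above appear to come from an e2e case that deliberately requests the mount options [invalid mount options]; mount then exits with status 32, which the node server surfaces as a GRPC Internal error. The staging sequence visible in the log (probe with blkid, run mkfs.ext4 -F -m0 only when the disk looks blank, then mount -t ext4 with the requested flags plus defaults) can be sketched roughly as below. This is an illustrative approximation under those assumptions, not the driver's actual mount_linux.go code, and the helper name is made up:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// formatAndMountSketch is a hypothetical, simplified version of the staging
// flow shown in the log: probe with blkid, format as ext4 only if the device
// appears unformatted, then mount with the requested options.
func formatAndMountSketch(device, target string, options []string) error {
	// Mirrors: blkid -p -s TYPE -s PTTYPE -o export <device>; empty output
	// in the log meant "appears to be unformatted".
	out, _ := exec.Command("blkid", "-p", "-s", "TYPE", "-s", "PTTYPE", "-o", "export", device).CombinedOutput()
	if strings.TrimSpace(string(out)) == "" {
		if err := exec.Command("mkfs.ext4", "-F", "-m0", device).Run(); err != nil {
			return fmt.Errorf("format %s failed: %w", device, err)
		}
	}
	// Mirrors: mount -t ext4 -o <options>,defaults <device> <target>.
	// Invalid options make mount exit 32, the failure surfaced above.
	opts := strings.Join(append(options, "defaults"), ",")
	if err := exec.Command("mount", "-t", "ext4", "-o", opts, device, target).Run(); err != nil {
		return fmt.Errorf("mount %s at %s failed: %w", device, target, err)
	}
	return nil
}

func main() {
	// Example only; running this for real requires root and a block device.
	_ = formatAndMountSketch("/dev/disk/azure/scsi1/lun0", "/mnt/test",
		[]string{"invalid", "mount", "options"})
}
```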
I0129 08:09:26.295122 1 utils.go:84] GRPC response: {} I0129 08:09:26.331661 1 utils.go:77] GRPC call: /csi.v1.Node/NodeUnstageVolume I0129 08:09:26.331683 1 utils.go:78] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-7f7e7c91-2464-4683-912f-a853cfcbcde5","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-7f7e7c91-2464-4683-912f-a853cfcbcde5"} I0129 08:09:26.331756 1 nodeserver.go:201] NodeUnstageVolume: unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-7f7e7c91-2464-4683-912f-a853cfcbcde5 I0129 08:09:26.331779 1 mount_helper_common.go:93] unmounting "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-7f7e7c91-2464-4683-912f-a853cfcbcde5" (corruptedMount: false, mounterCanSkipMountPointChecks: true) I0129 08:09:26.331797 1 mount_linux.go:362] Unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-7f7e7c91-2464-4683-912f-a853cfcbcde5 I0129 08:09:26.334025 1 mount_linux.go:375] ignoring 'not mounted' error for /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-7f7e7c91-2464-4683-912f-a853cfcbcde5 I0129 08:09:26.334048 1 mount_helper_common.go:150] Warning: deleting path "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-7f7e7c91-2464-4683-912f-a853cfcbcde5" I0129 08:09:26.334155 1 nodeserver.go:206] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-7f7e7c91-2464-4683-912f-a853cfcbcde5 successfully I0129 08:09:26.334169 1 utils.go:84] GRPC response: {} I0129 08:10:28.140259 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0129 08:10:28.140290 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a27fd6f0-d174-4c6d-82e9-971bcbd9966a/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-a27fd6f0-d174-4c6d-82e9-971bcbd9966a","csi.storage.k8s.io/pvc/name":"pvc-qg78b","csi.storage.k8s.io/pvc/namespace":"azuredisk-5429","requestedsizegib":"10","resourceGroup":"azuredisk-csi-driver-test-57f77ff6-9fac-11ed-843a-6e0650d04a6b","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674979422640-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-57f77ff6-9fac-11ed-843a-6e0650d04a6b/providers/Microsoft.Compute/disks/pvc-a27fd6f0-d174-4c6d-82e9-971bcbd9966a"} I0129 08:10:29.965518 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ ... skipping 648 lines ... 
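Editor's note: two staging shapes appear in these requests. Volumes with "AccessType":{"Mount":{}} are formatted and mounted under .../globalmount, while "AccessType":{"Block":{}} volumes are staged under volumeDevices/staging with no formatting at all, which is why the block case returns an empty GRPC response almost immediately. A hedged sketch of that branch using the CSI Go bindings follows; the function and its fallthrough behavior are illustrative, not the driver's actual nodeserver.go:

```go
package main

import (
	"fmt"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
)

// stageSketch illustrates the branch visible in the log: Block volumes are
// staged without formatting or mounting, Mount volumes go through the
// format-and-mount path with the request's mount_flags.
func stageSketch(req *csi.NodeStageVolumeRequest) error {
	volCap := req.GetVolumeCapability()
	if volCap.GetBlock() != nil {
		// Raw block volume: nothing to format or mount at stage time,
		// matching the immediate "GRPC response: {}" in the log.
		return nil
	}
	flags := volCap.GetMount().GetMountFlags()
	fmt.Printf("would format and mount %s at %s with options %v\n",
		req.GetVolumeId(), req.GetStagingTargetPath(), flags)
	return nil
}

func main() {
	_ = stageSketch(&csi.NodeStageVolumeRequest{
		VolumeId:          "example-volume-id",
		StagingTargetPath: "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/example",
		VolumeCapability: &csi.VolumeCapability{
			AccessType: &csi.VolumeCapability_Block{Block: &csi.VolumeCapability_BlockVolume{}},
		},
	})
}
```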
I0129 08:31:06.505941 1 utils.go:84] GRPC response: {} I0129 08:31:06.590999 1 utils.go:77] GRPC call: /csi.v1.Node/NodeUnstageVolume I0129 08:31:06.591024 1 utils.go:78] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-6d22f85e-bdd6-4148-ad33-9ec1adf760e9","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-6d22f85e-bdd6-4148-ad33-9ec1adf760e9"} I0129 08:31:06.591100 1 nodeserver.go:201] NodeUnstageVolume: unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-6d22f85e-bdd6-4148-ad33-9ec1adf760e9 I0129 08:31:06.591124 1 mount_helper_common.go:93] unmounting "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-6d22f85e-bdd6-4148-ad33-9ec1adf760e9" (corruptedMount: false, mounterCanSkipMountPointChecks: true) I0129 08:31:06.591138 1 mount_linux.go:362] Unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-6d22f85e-bdd6-4148-ad33-9ec1adf760e9 I0129 08:31:06.593411 1 mount_linux.go:375] ignoring 'not mounted' error for /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-6d22f85e-bdd6-4148-ad33-9ec1adf760e9 I0129 08:31:06.593435 1 mount_helper_common.go:150] Warning: deleting path "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-6d22f85e-bdd6-4148-ad33-9ec1adf760e9" I0129 08:31:06.593544 1 nodeserver.go:206] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-6d22f85e-bdd6-4148-ad33-9ec1adf760e9 successfully I0129 08:31:06.593572 1 utils.go:84] GRPC response: {} I0129 08:33:04.566068 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0129 08:33:04.566097 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-14a956af-9f94-4582-9808-b928d2fa5f26/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-14a956af-9f94-4582-9808-b928d2fa5f26","csi.storage.k8s.io/pvc/name":"pvc-wnrpv","csi.storage.k8s.io/pvc/namespace":"azuredisk-8591","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674979422640-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-14a956af-9f94-4582-9808-b928d2fa5f26"} I0129 08:33:06.360600 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ ... skipping 497 lines ... 
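Editor's note: the NodeUnstageVolume entries above follow a consistent pattern: unmount the staging path, treat a "not mounted" result as success, then delete the path. A rough, hypothetical equivalent is sketched below; the real driver goes through the k8s.io/mount-utils helpers (mount_linux.go, mount_helper_common.go) quoted in the log rather than shelling out directly:

```go
package main

import (
	"os"
	"os/exec"
	"strings"
)

// unstageSketch mirrors the unstage sequence in the log: attempt the
// unmount, ignore a "not mounted" error, then remove the staging path.
func unstageSketch(stagingPath string) error {
	out, err := exec.Command("umount", stagingPath).CombinedOutput()
	if err != nil && !strings.Contains(string(out), "not mounted") {
		return err
	}
	// Corresponds to the 'Warning: deleting path "..."' lines above.
	return os.Remove(stagingPath)
}

func main() {
	_ = unstageSketch("/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/example")
}
```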
I0129 08:52:56.893953 1 utils.go:84] GRPC response: {} I0129 08:52:56.932765 1 utils.go:77] GRPC call: /csi.v1.Node/NodeUnstageVolume I0129 08:52:56.932790 1 utils.go:78] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00"} I0129 08:52:56.932851 1 nodeserver.go:201] NodeUnstageVolume: unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00 I0129 08:52:56.932874 1 mount_helper_common.go:93] unmounting "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00" (corruptedMount: false, mounterCanSkipMountPointChecks: true) I0129 08:52:56.932887 1 mount_linux.go:362] Unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00 I0129 08:52:56.937537 1 mount_linux.go:375] ignoring 'not mounted' error for /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00 I0129 08:52:56.937565 1 mount_helper_common.go:150] Warning: deleting path "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00" I0129 08:52:56.937657 1 nodeserver.go:206] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00 successfully I0129 08:52:56.937672 1 utils.go:84] GRPC response: {} I0129 08:53:52.198670 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0129 08:53:52.198697 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-5477c32c-f483-4a84-a933-ea729e4e75d3/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-5477c32c-f483-4a84-a933-ea729e4e75d3","csi.storage.k8s.io/pvc/name":"pvc-69vhq","csi.storage.k8s.io/pvc/namespace":"azuredisk-6629","device-setting/device/queue_depth":"17","device-setting/queue/max_sectors_kb":"211","device-setting/queue/nr_requests":"44","device-setting/queue/read_ahead_kb":"256","device-setting/queue/rotational":"0","device-setting/queue/scheduler":"none","device-setting/queue/wbt_lat_usec":"0","perfProfile":"advanced","requestedsizegib":"10","skuname":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674979422640-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-5477c32c-f483-4a84-a933-ea729e4e75d3"} I0129 08:53:54.072534 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ ... skipping 100 lines ... 
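Editor's note: the last request above carries perfProfile "advanced" together with device-setting/... keys (queue_depth, max_sectors_kb, nr_requests, read_ahead_kb, rotational, scheduler, wbt_lat_usec). A plausible reading, and it is only an assumption inferred from the key names rather than anything confirmed by this log, is that each key names a file relative to the block device's sysfs directory and the value is written into it. A sketch of that interpretation:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// applyDeviceSettingsSketch guesses at how the "device-setting/..." keys in
// the volume_context could map onto sysfs: each key names a file under
// /sys/block/<dev>/ and the value would be written into it. This mapping is
// an assumption, not taken from the driver's source.
func applyDeviceSettingsSketch(dev string, volumeContext map[string]string) {
	sysfsRoot := filepath.Join("/sys/block", dev)
	for key, value := range volumeContext {
		if !strings.HasPrefix(key, "device-setting/") {
			continue
		}
		rel := strings.TrimPrefix(key, "device-setting/")
		// e.g. device-setting/queue/max_sectors_kb -> /sys/block/sdc/queue/max_sectors_kb
		fmt.Printf("would write %q to %s\n", value, filepath.Join(sysfsRoot, rel))
	}
}

func main() {
	applyDeviceSettingsSketch("sdc", map[string]string{
		"device-setting/queue/max_sectors_kb": "211",
		"device-setting/device/queue_depth":   "17",
		"device-setting/queue/scheduler":      "none",
	})
}
```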
Platform: linux/amd64 Topology Key: topology.disk.csi.azure.com/zone Streaming logs below: I0129 08:03:38.160997 1 azuredisk.go:175] driver userAgent: disk.csi.azure.com/v1.27.0-93a210d06a3c2f7f14a5b7d030e85f0e0d566e72 e2e-test I0129 08:03:38.161717 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider I0129 08:03:38.191646 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0129 08:03:38.191672 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider I0129 08:03:38.191682 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0129 08:03:38.191786 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully I0129 08:03:38.193495 1 azure_auth.go:253] Using AzurePublicCloud environment I0129 08:03:38.193921 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token I0129 08:03:38.193986 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 ... skipping 299 lines ... I0129 08:52:56.995047 1 utils.go:84] GRPC response: {} I0129 08:52:57.039771 1 utils.go:77] GRPC call: /csi.v1.Node/NodeUnstageVolume I0129 08:52:57.039814 1 utils.go:78] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00"} I0129 08:52:57.039892 1 nodeserver.go:201] NodeUnstageVolume: unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00 I0129 08:52:57.039930 1 mount_helper_common.go:93] unmounting "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00" (corruptedMount: false, mounterCanSkipMountPointChecks: true) I0129 08:52:57.039944 1 mount_linux.go:362] Unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00 I0129 08:52:57.042453 1 mount_linux.go:375] ignoring 'not mounted' error for /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00 I0129 08:52:57.042490 1 mount_helper_common.go:150] Warning: deleting path "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00" I0129 08:52:57.042587 1 nodeserver.go:206] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-6eff72cf-7970-4144-ba8b-6ea29994cc00 successfully I0129 08:52:57.042601 1 utils.go:84] GRPC response: {} I0129 08:55:24.643953 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0129 08:55:24.643980 1 utils.go:78] GRPC request: 
{"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-cc42e71c-cb97-4bc1-8a6a-39670d08b4ac/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-cc42e71c-cb97-4bc1-8a6a-39670d08b4ac","csi.storage.k8s.io/pvc/name":"persistent-storage-statefulset-azuredisk-0","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674979422640-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-biyqdrb7/providers/Microsoft.Compute/disks/pvc-cc42e71c-cb97-4bc1-8a6a-39670d08b4ac"} I0129 08:55:26.456644 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ ... skipping 33 lines ... Platform: linux/amd64 Topology Key: topology.disk.csi.azure.com/zone Streaming logs below: I0129 08:03:36.464386 1 azuredisk.go:175] driver userAgent: disk.csi.azure.com/v1.27.0-93a210d06a3c2f7f14a5b7d030e85f0e0d566e72 e2e-test I0129 08:03:36.465166 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider I0129 08:03:36.514691 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0129 08:03:36.514921 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider I0129 08:03:36.515124 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0129 08:03:36.515335 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully I0129 08:03:36.516634 1 azure_auth.go:253] Using AzurePublicCloud environment I0129 08:03:36.516842 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token I0129 08:03:36.517060 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 ... skipping 666 lines ... 
cloudprovider_azure_op_duration_seconds_bucket{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="kubetest-biyqdrb7",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e",le="300"} 47 cloudprovider_azure_op_duration_seconds_bucket{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="kubetest-biyqdrb7",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e",le="600"} 47 cloudprovider_azure_op_duration_seconds_bucket{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="kubetest-biyqdrb7",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e",le="1200"} 47 cloudprovider_azure_op_duration_seconds_bucket{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="kubetest-biyqdrb7",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e",le="+Inf"} 47 cloudprovider_azure_op_duration_seconds_sum{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="kubetest-biyqdrb7",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e"} 751.3017218270003 cloudprovider_azure_op_duration_seconds_count{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="kubetest-biyqdrb7",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e"} 47 # HELP cloudprovider_azure_op_failure_count [ALPHA] Number of failed Azure service operations # TYPE cloudprovider_azure_op_failure_count counter cloudprovider_azure_op_failure_count{request="azuredisk_csi_driver_controller_delete_volume",resource_group="kubetest-biyqdrb7",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e"} 5 cloudprovider_azure_op_failure_count{request="azuredisk_csi_driver_controller_publish_volume",resource_group="kubetest-biyqdrb7",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e"} 1 # HELP disabled_metric_total [ALPHA] The count of disabled metrics. # TYPE disabled_metric_total counter disabled_metric_total 0 ... skipping 68 lines ... # HELP go_gc_heap_objects_objects Number of objects, live or unswept, occupying heap memory. # TYPE go_gc_heap_objects_objects gauge go_gc_heap_objects_objects 52636 # HELP go_gc_heap_tiny_allocs_objects_total Count of small allocations that are packed together into blocks. These allocations are counted separately from other allocations because each individual allocation is not tracked by the runtime, only their block. Each block is already accounted for in allocs-by-size and frees-by-size. # TYPE go_gc_heap_tiny_allocs_objects_total counter go_gc_heap_tiny_allocs_objects_total 47195 # HELP go_gc_limiter_last_enabled_gc_cycle GC cycle the last time the GC CPU limiter was enabled. This metric is useful for diagnosing the root cause of an out-of-memory error, because the limiter trades memory for CPU time when the GC's CPU time gets too high. This is most likely to occur with use of SetMemoryLimit. The first GC cycle is cycle 1, so a value of 0 indicates that it was never enabled. # TYPE go_gc_limiter_last_enabled_gc_cycle gauge go_gc_limiter_last_enabled_gc_cycle 0 # HELP go_gc_pauses_seconds Distribution individual GC-related stop-the-world pause latencies. # TYPE go_gc_pauses_seconds histogram go_gc_pauses_seconds_bucket{le="9.999999999999999e-10"} 0 go_gc_pauses_seconds_bucket{le="9.999999999999999e-09"} 0 ... skipping 259 lines ... 
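Editor's note: from the Prometheus exposition above, the mean latency of an operation is the histogram's _sum divided by _count; for azuredisk_csi_driver_controller_unpublish_volume that is roughly 751.30 s over 47 calls, i.e. about 16 s per detach. A trivial check using the values quoted above:

```go
package main

import "fmt"

func main() {
	// Values taken from the metrics dump above for
	// azuredisk_csi_driver_controller_unpublish_volume.
	sum, count := 751.3017218270003, 47.0
	fmt.Printf("average unpublish latency: %.2fs over %.0f calls\n", sum/count, count) // ~15.99s
}
```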
# HELP go_gc_heap_objects_objects Number of objects, live or unswept, occupying heap memory. # TYPE go_gc_heap_objects_objects gauge go_gc_heap_objects_objects 35953 # HELP go_gc_heap_tiny_allocs_objects_total Count of small allocations that are packed together into blocks. These allocations are counted separately from other allocations because each individual allocation is not tracked by the runtime, only their block. Each block is already accounted for in allocs-by-size and frees-by-size. # TYPE go_gc_heap_tiny_allocs_objects_total counter go_gc_heap_tiny_allocs_objects_total 4698 # HELP go_gc_limiter_last_enabled_gc_cycle GC cycle the last time the GC CPU limiter was enabled. This metric is useful for diagnosing the root cause of an out-of-memory error, because the limiter trades memory for CPU time when the GC's CPU time gets too high. This is most likely to occur with use of SetMemoryLimit. The first GC cycle is cycle 1, so a value of 0 indicates that it was never enabled. # TYPE go_gc_limiter_last_enabled_gc_cycle gauge go_gc_limiter_last_enabled_gc_cycle 0 # HELP go_gc_pauses_seconds Distribution individual GC-related stop-the-world pause latencies. # TYPE go_gc_pauses_seconds histogram go_gc_pauses_seconds_bucket{le="9.999999999999999e-10"} 0 go_gc_pauses_seconds_bucket{le="9.999999999999999e-09"} 0 ... skipping 272 lines ... [AfterSuite] /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:165 ------------------------------ Summarizing 2 Failures: [FAIL] Dynamic Provisioning [multi-az] [It] should create a pod, write and read to it, take a volume snapshot, and create another pod from the snapshot [disk.csi.azure.com] /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:823 [FAIL] Dynamic Provisioning [multi-az] [It] should create a pod, write to its pv, take a volume snapshot, overwrite data in original pv, create another pod from the snapshot, and read unaltered original data from original pv[disk.csi.azure.com] /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:823 Ran 26 of 66 Specs in 3821.574 seconds FAIL! -- 24 Passed | 2 Failed | 0 Pending | 40 Skipped You're using deprecated Ginkgo functionality: ============================================= Support for custom reporters has been removed in V2. 
Please read the documentation linked to below for Ginkgo's new behavior and for a migration path: Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#removed-custom-reporters To silence deprecations that can be silenced set the following environment variable: ACK_GINKGO_DEPRECATIONS=2.4.0 --- FAIL: TestE2E (3821.58s) FAIL FAIL sigs.k8s.io/azuredisk-csi-driver/test/e2e 3821.669s FAIL make: *** [Makefile:261: e2e-test] Error 1 2023/01/29 08:56:50 process.go:155: Step 'make e2e-test' finished in 1h5m27.758698308s 2023/01/29 08:56:50 aksengine_helpers.go:425: downloading /root/tmp2938072239/log-dump.sh from https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump.sh 2023/01/29 08:56:50 util.go:70: curl https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump.sh 2023/01/29 08:56:50 process.go:153: Running: chmod +x /root/tmp2938072239/log-dump.sh 2023/01/29 08:56:50 process.go:155: Step 'chmod +x /root/tmp2938072239/log-dump.sh' finished in 1.756591ms 2023/01/29 08:56:50 aksengine_helpers.go:425: downloading /root/tmp2938072239/log-dump-daemonset.yaml from https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump-daemonset.yaml ... skipping 63 lines ... ssh key file /root/.ssh/id_rsa does not exist. Exiting. 2023/01/29 08:57:26 process.go:155: Step 'bash -c /root/tmp2938072239/win-ci-logs-collector.sh kubetest-biyqdrb7.westus2.cloudapp.azure.com /root/tmp2938072239 /root/.ssh/id_rsa' finished in 4.807786ms 2023/01/29 08:57:26 aksengine.go:1141: Deleting resource group: kubetest-biyqdrb7. 2023/01/29 09:03:45 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml. 2023/01/29 09:03:45 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}" 2023/01/29 09:03:45 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 270.821486ms 2023/01/29 09:03:45 main.go:328: Something went wrong: encountered 1 errors: [error during make e2e-test: exit status 2] + EXIT_VALUE=1 + set +o xtrace Cleaning up after docker in docker. ================================================================================ Cleaning up after docker 78da0861ff12 ... skipping 4 lines ...