PR | andyzhangx: doc: cut v1.26.2 release
Result | FAILURE
Tests | 1 failed / 13 succeeded
Started |
Elapsed | 1h18m
Revision | 8a6d28d337dbc82a588cb71fef2adf7185187857
Refs | 1706
job-version | v1.27.0-alpha.1.88+7b243cef1a81f4
kubetest-version | v20230117-50d6df3625
revision | v1.27.0-alpha.1.88+7b243cef1a81f4
error during make e2e-test: exit status 2
from junit_runner.xml
kubetest Check APIReachability
kubetest Deferred TearDown
kubetest DumpClusterLogs
kubetest GetDeployer
kubetest IsUp
kubetest Prepare
kubetest TearDown
kubetest TearDown Previous
kubetest Timeout
kubetest Up
kubetest kubectl version
kubetest list nodes
kubetest test setup
... skipping 107 lines ... 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 11345 100 11345 0 0 197k 0 --:--:-- --:--:-- --:--:-- 197k Downloading https://get.helm.sh/helm-v3.11.0-linux-amd64.tar.gz Verifying checksum... Done. Preparing to install helm into /usr/local/bin helm installed into /usr/local/bin/helm docker pull k8sprow.azurecr.io/azuredisk-csi:v1.26.2-3d368a1217946b8b3c3bd47a4f8fe2de87227460 || make container-all push-manifest Error response from daemon: manifest for k8sprow.azurecr.io/azuredisk-csi:v1.26.2-3d368a1217946b8b3c3bd47a4f8fe2de87227460 not found: manifest unknown: manifest tagged by "v1.26.2-3d368a1217946b8b3c3bd47a4f8fe2de87227460" is not found make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver' CGO_ENABLED=0 GOOS=windows go build -a -ldflags "-X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.driverVersion=v1.26.2-3d368a1217946b8b3c3bd47a4f8fe2de87227460 -X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.gitCommit=3d368a1217946b8b3c3bd47a4f8fe2de87227460 -X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.buildDate=2023-01-30T07:47:45Z -extldflags "-static"" -mod vendor -o _output/amd64/azurediskplugin.exe ./pkg/azurediskplugin docker buildx rm container-builder || true ERROR: no builder "container-builder" found docker buildx create --use --name=container-builder container-builder # enable qemu for arm64 build # https://github.com/docker/buildx/issues/464#issuecomment-741507760 docker run --privileged --rm tonistiigi/binfmt --uninstall qemu-aarch64 Unable to find image 'tonistiigi/binfmt:latest' locally ... skipping 1754 lines ... type: string type: object oneOf: - required: ["persistentVolumeClaimName"] - required: ["volumeSnapshotContentName"] volumeSnapshotClassName: description: 'VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default Volume SnapshotClasses: one default per CSI Driver. If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exist for a given CSI Driver and more than one have been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field.' type: string required: - source type: object status: description: status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object. ... skipping 2 lines ... description: 'boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.' type: string creationTime: description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. 
In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown. format: date-time type: string error: description: error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers(i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurrs during the snapshot creation. Upon success, this error field will be cleared. properties: message: description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.' type: string time: description: time is the timestamp when the error was encountered. format: date-time type: string type: object readyToUse: description: readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. type: boolean restoreSize: type: string description: restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ x-kubernetes-int-or-string: true type: object required: - spec type: object ... skipping 60 lines ... type: string volumeSnapshotContentName: description: volumeSnapshotContentName specifies the name of a pre-existing VolumeSnapshotContent object representing an existing volume snapshot. This field should be set if the snapshot already exists and only needs a representation in Kubernetes. This field is immutable. type: string type: object volumeSnapshotClassName: description: 'VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default Volume SnapshotClasses: one default per CSI Driver. 
If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exist for a given CSI Driver and more than one have been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field.' type: string required: - source type: object status: description: status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object. ... skipping 2 lines ... description: 'boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.' type: string creationTime: description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown. format: date-time type: string error: description: error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers(i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurrs during the snapshot creation. Upon success, this error field will be cleared. properties: message: description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.' type: string time: description: time is the timestamp when the error was encountered. format: date-time type: string type: object readyToUse: description: readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. type: boolean restoreSize: type: string description: restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. 
For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ x-kubernetes-int-or-string: true type: object required: - spec type: object ... skipping 254 lines ... description: status represents the current information of a snapshot. properties: creationTime: description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. On Unix, the command `date +%s%N` returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC. format: int64 type: integer error: description: error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared. properties: message: description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.' type: string time: description: time is the timestamp when the error was encountered. format: date-time type: string type: object readyToUse: description: readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. type: boolean restoreSize: description: restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. format: int64 minimum: 0 type: integer snapshotHandle: description: snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress. type: string type: object required: - spec type: object served: true ... skipping 108 lines ... description: status represents the current information of a snapshot. 
properties: creationTime: description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. On Unix, the command `date +%s%N` returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC. format: int64 type: integer error: description: error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared. properties: message: description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.' type: string time: description: time is the timestamp when the error was encountered. format: date-time type: string type: object readyToUse: description: readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. type: boolean restoreSize: description: restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. format: int64 minimum: 0 type: integer snapshotHandle: description: snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress. type: string type: object required: - spec type: object served: true ... skipping 865 lines ... image: "mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.6.0" args: - "-csi-address=$(ADDRESS)" - "-v=2" - "-leader-election" - "--leader-election-namespace=kube-system" - '-handle-volume-inuse-error=false' - '-feature-gates=RecoverVolumeExpansionFailure=true' - "-timeout=240s" env: - name: ADDRESS value: /csi/csi.sock volumeMounts: ... skipping 216 lines ... 
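The snapshot CRDs installed above define a VolumeSnapshot whose spec.source must name exactly one of persistentVolumeClaimName or volumeSnapshotContentName, plus an optional volumeSnapshotClassName; readyToUse, creationTime, and restoreSize are later filled into status by the snapshot controller. A minimal sketch of a dynamically provisioned snapshot against this driver (class, namespace, and PVC names are illustrative placeholders, not values from this run):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-azuredisk-vsc            # hypothetical class name
driver: disk.csi.azure.com
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: azuredisk-snapshot           # hypothetical snapshot name
  namespace: default
spec:
  volumeSnapshotClassName: csi-azuredisk-vsc
  source:
    persistentVolumeClaimName: pvc-azuredisk   # must reference an existing, bound PVC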
[1mSTEP:[0m setting up the StorageClass [38;5;243m01/30/23 07:58:11.551[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/30/23 07:58:11.551[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 07:58:11.607[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 07:58:11.608[0m [1mSTEP:[0m setting up the pod [38;5;243m01/30/23 07:58:11.67[0m [1mSTEP:[0m deploying the pod [38;5;243m01/30/23 07:58:11.67[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 07:58:11.729[0m Jan 30 07:58:11.729: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-hn6wx" in namespace "azuredisk-8081" to be "Succeeded or Failed" Jan 30 07:58:11.792: INFO: Pod "azuredisk-volume-tester-hn6wx": Phase="Pending", Reason="", readiness=false. Elapsed: 62.588499ms Jan 30 07:58:13.850: INFO: Pod "azuredisk-volume-tester-hn6wx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120723297s Jan 30 07:58:15.850: INFO: Pod "azuredisk-volume-tester-hn6wx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120764042s Jan 30 07:58:17.850: INFO: Pod "azuredisk-volume-tester-hn6wx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12088851s Jan 30 07:58:19.850: INFO: Pod "azuredisk-volume-tester-hn6wx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.121169554s Jan 30 07:58:21.850: INFO: Pod "azuredisk-volume-tester-hn6wx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.12107081s ... skipping 6 lines ... Jan 30 07:58:35.849: INFO: Pod "azuredisk-volume-tester-hn6wx": Phase="Pending", Reason="", readiness=false. Elapsed: 24.120138197s Jan 30 07:58:37.849: INFO: Pod "azuredisk-volume-tester-hn6wx": Phase="Pending", Reason="", readiness=false. Elapsed: 26.120001484s Jan 30 07:58:39.850: INFO: Pod "azuredisk-volume-tester-hn6wx": Phase="Pending", Reason="", readiness=false. Elapsed: 28.120894105s Jan 30 07:58:41.849: INFO: Pod "azuredisk-volume-tester-hn6wx": Phase="Pending", Reason="", readiness=false. Elapsed: 30.11990549s Jan 30 07:58:43.849: INFO: Pod "azuredisk-volume-tester-hn6wx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.119891306s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 07:58:43.849[0m Jan 30 07:58:43.849: INFO: Pod "azuredisk-volume-tester-hn6wx" satisfied condition "Succeeded or Failed" Jan 30 07:58:43.849: INFO: deleting Pod "azuredisk-8081"/"azuredisk-volume-tester-hn6wx" Jan 30 07:58:43.947: INFO: Pod azuredisk-volume-tester-hn6wx has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-hn6wx in namespace azuredisk-8081 [38;5;243m01/30/23 07:58:43.947[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/30/23 07:58:44.07[0m [1mSTEP:[0m checking the PV [38;5;243m01/30/23 07:58:44.126[0m ... skipping 44 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/30/23 07:58:11.551[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/30/23 07:58:11.551[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 07:58:11.607[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 07:58:11.608[0m [1mSTEP:[0m setting up the pod [38;5;243m01/30/23 07:58:11.67[0m [1mSTEP:[0m deploying the pod [38;5;243m01/30/23 07:58:11.67[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 07:58:11.729[0m Jan 30 07:58:11.729: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-hn6wx" in namespace "azuredisk-8081" to be "Succeeded or Failed" Jan 30 07:58:11.792: INFO: Pod "azuredisk-volume-tester-hn6wx": Phase="Pending", Reason="", readiness=false. 
Elapsed: 62.588499ms Jan 30 07:58:13.850: INFO: Pod "azuredisk-volume-tester-hn6wx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120723297s Jan 30 07:58:15.850: INFO: Pod "azuredisk-volume-tester-hn6wx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120764042s Jan 30 07:58:17.850: INFO: Pod "azuredisk-volume-tester-hn6wx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12088851s Jan 30 07:58:19.850: INFO: Pod "azuredisk-volume-tester-hn6wx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.121169554s Jan 30 07:58:21.850: INFO: Pod "azuredisk-volume-tester-hn6wx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.12107081s ... skipping 6 lines ... Jan 30 07:58:35.849: INFO: Pod "azuredisk-volume-tester-hn6wx": Phase="Pending", Reason="", readiness=false. Elapsed: 24.120138197s Jan 30 07:58:37.849: INFO: Pod "azuredisk-volume-tester-hn6wx": Phase="Pending", Reason="", readiness=false. Elapsed: 26.120001484s Jan 30 07:58:39.850: INFO: Pod "azuredisk-volume-tester-hn6wx": Phase="Pending", Reason="", readiness=false. Elapsed: 28.120894105s Jan 30 07:58:41.849: INFO: Pod "azuredisk-volume-tester-hn6wx": Phase="Pending", Reason="", readiness=false. Elapsed: 30.11990549s Jan 30 07:58:43.849: INFO: Pod "azuredisk-volume-tester-hn6wx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.119891306s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 07:58:43.849[0m Jan 30 07:58:43.849: INFO: Pod "azuredisk-volume-tester-hn6wx" satisfied condition "Succeeded or Failed" Jan 30 07:58:43.849: INFO: deleting Pod "azuredisk-8081"/"azuredisk-volume-tester-hn6wx" Jan 30 07:58:43.947: INFO: Pod azuredisk-volume-tester-hn6wx has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-hn6wx in namespace azuredisk-8081 [38;5;243m01/30/23 07:58:43.947[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/30/23 07:58:44.07[0m [1mSTEP:[0m checking the PV [38;5;243m01/30/23 07:58:44.126[0m ... skipping 39 lines ... Jan 30 07:59:28.088: INFO: PersistentVolumeClaim pvc-f2t2l found but phase is Pending instead of Bound. Jan 30 07:59:30.148: INFO: PersistentVolumeClaim pvc-f2t2l found and phase=Bound (4.175706101s) [1mSTEP:[0m checking the PVC [38;5;243m01/30/23 07:59:30.148[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/30/23 07:59:30.21[0m [1mSTEP:[0m checking the PV [38;5;243m01/30/23 07:59:30.271[0m [1mSTEP:[0m deploying the pod [38;5;243m01/30/23 07:59:30.272[0m [1mSTEP:[0m checking that the pods command exits with no error [38;5;243m01/30/23 07:59:30.334[0m Jan 30 07:59:30.334: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-xd7rp" in namespace "azuredisk-2540" to be "Succeeded or Failed" Jan 30 07:59:30.424: INFO: Pod "azuredisk-volume-tester-xd7rp": Phase="Pending", Reason="", readiness=false. Elapsed: 89.176492ms Jan 30 07:59:32.482: INFO: Pod "azuredisk-volume-tester-xd7rp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147164539s Jan 30 07:59:34.484: INFO: Pod "azuredisk-volume-tester-xd7rp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149148654s Jan 30 07:59:36.485: INFO: Pod "azuredisk-volume-tester-xd7rp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.150263605s Jan 30 07:59:38.483: INFO: Pod "azuredisk-volume-tester-xd7rp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.148800092s Jan 30 07:59:40.483: INFO: Pod "azuredisk-volume-tester-xd7rp": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.148951491s Jan 30 07:59:42.481: INFO: Pod "azuredisk-volume-tester-xd7rp": Phase="Pending", Reason="", readiness=false. Elapsed: 12.146733633s Jan 30 07:59:44.483: INFO: Pod "azuredisk-volume-tester-xd7rp": Phase="Pending", Reason="", readiness=false. Elapsed: 14.148886309s Jan 30 07:59:46.482: INFO: Pod "azuredisk-volume-tester-xd7rp": Phase="Pending", Reason="", readiness=false. Elapsed: 16.147933063s Jan 30 07:59:48.483: INFO: Pod "azuredisk-volume-tester-xd7rp": Phase="Pending", Reason="", readiness=false. Elapsed: 18.148358487s Jan 30 07:59:50.484: INFO: Pod "azuredisk-volume-tester-xd7rp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.149311072s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 07:59:50.484[0m Jan 30 07:59:50.484: INFO: Pod "azuredisk-volume-tester-xd7rp" satisfied condition "Succeeded or Failed" Jan 30 07:59:50.484: INFO: deleting Pod "azuredisk-2540"/"azuredisk-volume-tester-xd7rp" Jan 30 07:59:50.549: INFO: Pod azuredisk-volume-tester-xd7rp has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-xd7rp in namespace azuredisk-2540 [38;5;243m01/30/23 07:59:50.55[0m Jan 30 07:59:50.620: INFO: deleting PVC "azuredisk-2540"/"pvc-f2t2l" Jan 30 07:59:50.620: INFO: Deleting PersistentVolumeClaim "pvc-f2t2l" ... skipping 38 lines ... Jan 30 07:59:28.088: INFO: PersistentVolumeClaim pvc-f2t2l found but phase is Pending instead of Bound. Jan 30 07:59:30.148: INFO: PersistentVolumeClaim pvc-f2t2l found and phase=Bound (4.175706101s) [1mSTEP:[0m checking the PVC [38;5;243m01/30/23 07:59:30.148[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/30/23 07:59:30.21[0m [1mSTEP:[0m checking the PV [38;5;243m01/30/23 07:59:30.271[0m [1mSTEP:[0m deploying the pod [38;5;243m01/30/23 07:59:30.272[0m [1mSTEP:[0m checking that the pods command exits with no error [38;5;243m01/30/23 07:59:30.334[0m Jan 30 07:59:30.334: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-xd7rp" in namespace "azuredisk-2540" to be "Succeeded or Failed" Jan 30 07:59:30.424: INFO: Pod "azuredisk-volume-tester-xd7rp": Phase="Pending", Reason="", readiness=false. Elapsed: 89.176492ms Jan 30 07:59:32.482: INFO: Pod "azuredisk-volume-tester-xd7rp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147164539s Jan 30 07:59:34.484: INFO: Pod "azuredisk-volume-tester-xd7rp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149148654s Jan 30 07:59:36.485: INFO: Pod "azuredisk-volume-tester-xd7rp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.150263605s Jan 30 07:59:38.483: INFO: Pod "azuredisk-volume-tester-xd7rp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.148800092s Jan 30 07:59:40.483: INFO: Pod "azuredisk-volume-tester-xd7rp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.148951491s Jan 30 07:59:42.481: INFO: Pod "azuredisk-volume-tester-xd7rp": Phase="Pending", Reason="", readiness=false. Elapsed: 12.146733633s Jan 30 07:59:44.483: INFO: Pod "azuredisk-volume-tester-xd7rp": Phase="Pending", Reason="", readiness=false. Elapsed: 14.148886309s Jan 30 07:59:46.482: INFO: Pod "azuredisk-volume-tester-xd7rp": Phase="Pending", Reason="", readiness=false. Elapsed: 16.147933063s Jan 30 07:59:48.483: INFO: Pod "azuredisk-volume-tester-xd7rp": Phase="Pending", Reason="", readiness=false. Elapsed: 18.148358487s Jan 30 07:59:50.484: INFO: Pod "azuredisk-volume-tester-xd7rp": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 20.149311072s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 07:59:50.484[0m Jan 30 07:59:50.484: INFO: Pod "azuredisk-volume-tester-xd7rp" satisfied condition "Succeeded or Failed" Jan 30 07:59:50.484: INFO: deleting Pod "azuredisk-2540"/"azuredisk-volume-tester-xd7rp" Jan 30 07:59:50.549: INFO: Pod azuredisk-volume-tester-xd7rp has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-xd7rp in namespace azuredisk-2540 [38;5;243m01/30/23 07:59:50.55[0m Jan 30 07:59:50.620: INFO: deleting PVC "azuredisk-2540"/"pvc-f2t2l" Jan 30 07:59:50.620: INFO: Deleting PersistentVolumeClaim "pvc-f2t2l" ... skipping 30 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/30/23 08:00:32.288[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/30/23 08:00:32.288[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:00:32.347[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:00:32.348[0m [1mSTEP:[0m setting up the pod [38;5;243m01/30/23 08:00:32.409[0m [1mSTEP:[0m deploying the pod [38;5;243m01/30/23 08:00:32.41[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 08:00:32.474[0m Jan 30 08:00:32.474: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-k7mn2" in namespace "azuredisk-4728" to be "Succeeded or Failed" Jan 30 08:00:32.531: INFO: Pod "azuredisk-volume-tester-k7mn2": Phase="Pending", Reason="", readiness=false. Elapsed: 57.073923ms Jan 30 08:00:34.588: INFO: Pod "azuredisk-volume-tester-k7mn2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11392103s Jan 30 08:00:36.590: INFO: Pod "azuredisk-volume-tester-k7mn2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116096323s Jan 30 08:00:38.589: INFO: Pod "azuredisk-volume-tester-k7mn2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11525268s Jan 30 08:00:40.591: INFO: Pod "azuredisk-volume-tester-k7mn2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116598559s Jan 30 08:00:42.593: INFO: Pod "azuredisk-volume-tester-k7mn2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.118623846s ... skipping 15 lines ... Jan 30 08:01:14.589: INFO: Pod "azuredisk-volume-tester-k7mn2": Phase="Pending", Reason="", readiness=false. Elapsed: 42.115122201s Jan 30 08:01:16.590: INFO: Pod "azuredisk-volume-tester-k7mn2": Phase="Pending", Reason="", readiness=false. Elapsed: 44.116038937s Jan 30 08:01:18.590: INFO: Pod "azuredisk-volume-tester-k7mn2": Phase="Pending", Reason="", readiness=false. Elapsed: 46.115576661s Jan 30 08:01:20.590: INFO: Pod "azuredisk-volume-tester-k7mn2": Phase="Pending", Reason="", readiness=false. Elapsed: 48.116300677s Jan 30 08:01:22.592: INFO: Pod "azuredisk-volume-tester-k7mn2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 50.118110253s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 08:01:22.592[0m Jan 30 08:01:22.593: INFO: Pod "azuredisk-volume-tester-k7mn2" satisfied condition "Succeeded or Failed" Jan 30 08:01:22.593: INFO: deleting Pod "azuredisk-4728"/"azuredisk-volume-tester-k7mn2" Jan 30 08:01:22.681: INFO: Pod azuredisk-volume-tester-k7mn2 has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-k7mn2 in namespace azuredisk-4728 [38;5;243m01/30/23 08:01:22.681[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/30/23 08:01:22.804[0m [1mSTEP:[0m checking the PV [38;5;243m01/30/23 08:01:22.861[0m ... skipping 44 lines ... 
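The provisioning specs above each create a StorageClass, a PVC bound to it, and a short-lived pod that writes to the mounted disk and exits (the "hello world" log lines). A minimal sketch of the equivalent objects, with placeholder names, size, and SKU rather than the values the test generates:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azuredisk-sc                 # hypothetical name
provisioner: disk.csi.azure.com
parameters:
  skuName: StandardSSD_LRS           # assumed SKU; the suite varies this per case
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azuredisk                # hypothetical name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: azuredisk-sc
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: azuredisk-volume-tester      # the e2e pods use generated suffixes
spec:
  restartPolicy: Never
  containers:
  - name: volume-tester
    image: busybox                   # assumed image
    command: ["sh", "-c", "echo hello world > /mnt/test-1/data && cat /mnt/test-1/data"]
    volumeMounts:
    - name: test-volume
      mountPath: /mnt/test-1
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: pvc-azuredisk

The pod is then polled until it reaches Succeeded or Failed, which is what the repeated Phase="Pending" lines above record.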
[1mSTEP:[0m setting up the StorageClass [38;5;243m01/30/23 08:00:32.288[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/30/23 08:00:32.288[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:00:32.347[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:00:32.348[0m [1mSTEP:[0m setting up the pod [38;5;243m01/30/23 08:00:32.409[0m [1mSTEP:[0m deploying the pod [38;5;243m01/30/23 08:00:32.41[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 08:00:32.474[0m Jan 30 08:00:32.474: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-k7mn2" in namespace "azuredisk-4728" to be "Succeeded or Failed" Jan 30 08:00:32.531: INFO: Pod "azuredisk-volume-tester-k7mn2": Phase="Pending", Reason="", readiness=false. Elapsed: 57.073923ms Jan 30 08:00:34.588: INFO: Pod "azuredisk-volume-tester-k7mn2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11392103s Jan 30 08:00:36.590: INFO: Pod "azuredisk-volume-tester-k7mn2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116096323s Jan 30 08:00:38.589: INFO: Pod "azuredisk-volume-tester-k7mn2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11525268s Jan 30 08:00:40.591: INFO: Pod "azuredisk-volume-tester-k7mn2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116598559s Jan 30 08:00:42.593: INFO: Pod "azuredisk-volume-tester-k7mn2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.118623846s ... skipping 15 lines ... Jan 30 08:01:14.589: INFO: Pod "azuredisk-volume-tester-k7mn2": Phase="Pending", Reason="", readiness=false. Elapsed: 42.115122201s Jan 30 08:01:16.590: INFO: Pod "azuredisk-volume-tester-k7mn2": Phase="Pending", Reason="", readiness=false. Elapsed: 44.116038937s Jan 30 08:01:18.590: INFO: Pod "azuredisk-volume-tester-k7mn2": Phase="Pending", Reason="", readiness=false. Elapsed: 46.115576661s Jan 30 08:01:20.590: INFO: Pod "azuredisk-volume-tester-k7mn2": Phase="Pending", Reason="", readiness=false. Elapsed: 48.116300677s Jan 30 08:01:22.592: INFO: Pod "azuredisk-volume-tester-k7mn2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 50.118110253s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 08:01:22.592[0m Jan 30 08:01:22.593: INFO: Pod "azuredisk-volume-tester-k7mn2" satisfied condition "Succeeded or Failed" Jan 30 08:01:22.593: INFO: deleting Pod "azuredisk-4728"/"azuredisk-volume-tester-k7mn2" Jan 30 08:01:22.681: INFO: Pod azuredisk-volume-tester-k7mn2 has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-k7mn2 in namespace azuredisk-4728 [38;5;243m01/30/23 08:01:22.681[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/30/23 08:01:22.804[0m [1mSTEP:[0m checking the PV [38;5;243m01/30/23 08:01:22.861[0m ... skipping 45 lines ... 
[1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:03:00.435[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:03:00.435[0m [1mSTEP:[0m setting up the pod [38;5;243m01/30/23 08:03:00.496[0m [1mSTEP:[0m deploying the pod [38;5;243m01/30/23 08:03:00.496[0m [1mSTEP:[0m checking that the pod has 'FailedMount' event [38;5;243m01/30/23 08:03:00.559[0m Jan 30 08:03:22.675: INFO: deleting Pod "azuredisk-5466"/"azuredisk-volume-tester-zn86s" Jan 30 08:03:22.767: INFO: Error getting logs for pod azuredisk-volume-tester-zn86s: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-zn86s) [1mSTEP:[0m Deleting pod azuredisk-volume-tester-zn86s in namespace azuredisk-5466 [38;5;243m01/30/23 08:03:22.767[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/30/23 08:03:22.885[0m [1mSTEP:[0m checking the PV [38;5;243m01/30/23 08:03:22.942[0m Jan 30 08:03:22.942: INFO: deleting PVC "azuredisk-5466"/"pvc-69g2b" Jan 30 08:03:22.942: INFO: Deleting PersistentVolumeClaim "pvc-69g2b" [1mSTEP:[0m waiting for claim's PV "pvc-11934eb2-6d55-4056-a6ad-52e90632a93d" to be deleted [38;5;243m01/30/23 08:03:23.003[0m ... skipping 33 lines ... [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:03:00.435[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:03:00.435[0m [1mSTEP:[0m setting up the pod [38;5;243m01/30/23 08:03:00.496[0m [1mSTEP:[0m deploying the pod [38;5;243m01/30/23 08:03:00.496[0m [1mSTEP:[0m checking that the pod has 'FailedMount' event [38;5;243m01/30/23 08:03:00.559[0m Jan 30 08:03:22.675: INFO: deleting Pod "azuredisk-5466"/"azuredisk-volume-tester-zn86s" Jan 30 08:03:22.767: INFO: Error getting logs for pod azuredisk-volume-tester-zn86s: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-zn86s) [1mSTEP:[0m Deleting pod azuredisk-volume-tester-zn86s in namespace azuredisk-5466 [38;5;243m01/30/23 08:03:22.767[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/30/23 08:03:22.885[0m [1mSTEP:[0m checking the PV [38;5;243m01/30/23 08:03:22.942[0m Jan 30 08:03:22.942: INFO: deleting PVC "azuredisk-5466"/"pvc-69g2b" Jan 30 08:03:22.942: INFO: Deleting PersistentVolumeClaim "pvc-69g2b" [1mSTEP:[0m waiting for claim's PV "pvc-11934eb2-6d55-4056-a6ad-52e90632a93d" to be deleted [38;5;243m01/30/23 08:03:23.003[0m ... skipping 30 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/30/23 08:04:09.797[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/30/23 08:04:09.797[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:04:09.858[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:04:09.858[0m [1mSTEP:[0m setting up the pod [38;5;243m01/30/23 08:04:09.921[0m [1mSTEP:[0m deploying the pod [38;5;243m01/30/23 08:04:09.922[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 08:04:09.982[0m Jan 30 08:04:09.983: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-d6w2z" in namespace "azuredisk-2790" to be "Succeeded or Failed" Jan 30 08:04:10.040: INFO: Pod "azuredisk-volume-tester-d6w2z": Phase="Pending", Reason="", readiness=false. Elapsed: 57.534027ms Jan 30 08:04:12.099: INFO: Pod "azuredisk-volume-tester-d6w2z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116262399s Jan 30 08:04:14.103: INFO: Pod "azuredisk-volume-tester-d6w2z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120695578s Jan 30 08:04:16.101: INFO: Pod "azuredisk-volume-tester-d6w2z": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.118073528s Jan 30 08:04:18.101: INFO: Pod "azuredisk-volume-tester-d6w2z": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11810114s Jan 30 08:04:20.101: INFO: Pod "azuredisk-volume-tester-d6w2z": Phase="Pending", Reason="", readiness=false. Elapsed: 10.118571798s ... skipping 2 lines ... Jan 30 08:04:26.101: INFO: Pod "azuredisk-volume-tester-d6w2z": Phase="Pending", Reason="", readiness=false. Elapsed: 16.117995882s Jan 30 08:04:28.104: INFO: Pod "azuredisk-volume-tester-d6w2z": Phase="Pending", Reason="", readiness=false. Elapsed: 18.121504576s Jan 30 08:04:30.099: INFO: Pod "azuredisk-volume-tester-d6w2z": Phase="Pending", Reason="", readiness=false. Elapsed: 20.115851215s Jan 30 08:04:32.099: INFO: Pod "azuredisk-volume-tester-d6w2z": Phase="Pending", Reason="", readiness=false. Elapsed: 22.116432733s Jan 30 08:04:34.102: INFO: Pod "azuredisk-volume-tester-d6w2z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.118973697s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 08:04:34.102[0m Jan 30 08:04:34.102: INFO: Pod "azuredisk-volume-tester-d6w2z" satisfied condition "Succeeded or Failed" Jan 30 08:04:34.102: INFO: deleting Pod "azuredisk-2790"/"azuredisk-volume-tester-d6w2z" Jan 30 08:04:34.168: INFO: Pod azuredisk-volume-tester-d6w2z has the following logs: e2e-test [1mSTEP:[0m Deleting pod azuredisk-volume-tester-d6w2z in namespace azuredisk-2790 [38;5;243m01/30/23 08:04:34.169[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/30/23 08:04:34.3[0m [1mSTEP:[0m checking the PV [38;5;243m01/30/23 08:04:34.358[0m ... skipping 33 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/30/23 08:04:09.797[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/30/23 08:04:09.797[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:04:09.858[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:04:09.858[0m [1mSTEP:[0m setting up the pod [38;5;243m01/30/23 08:04:09.921[0m [1mSTEP:[0m deploying the pod [38;5;243m01/30/23 08:04:09.922[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 08:04:09.982[0m Jan 30 08:04:09.983: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-d6w2z" in namespace "azuredisk-2790" to be "Succeeded or Failed" Jan 30 08:04:10.040: INFO: Pod "azuredisk-volume-tester-d6w2z": Phase="Pending", Reason="", readiness=false. Elapsed: 57.534027ms Jan 30 08:04:12.099: INFO: Pod "azuredisk-volume-tester-d6w2z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116262399s Jan 30 08:04:14.103: INFO: Pod "azuredisk-volume-tester-d6w2z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120695578s Jan 30 08:04:16.101: INFO: Pod "azuredisk-volume-tester-d6w2z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118073528s Jan 30 08:04:18.101: INFO: Pod "azuredisk-volume-tester-d6w2z": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11810114s Jan 30 08:04:20.101: INFO: Pod "azuredisk-volume-tester-d6w2z": Phase="Pending", Reason="", readiness=false. Elapsed: 10.118571798s ... skipping 2 lines ... Jan 30 08:04:26.101: INFO: Pod "azuredisk-volume-tester-d6w2z": Phase="Pending", Reason="", readiness=false. Elapsed: 16.117995882s Jan 30 08:04:28.104: INFO: Pod "azuredisk-volume-tester-d6w2z": Phase="Pending", Reason="", readiness=false. Elapsed: 18.121504576s Jan 30 08:04:30.099: INFO: Pod "azuredisk-volume-tester-d6w2z": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.115851215s Jan 30 08:04:32.099: INFO: Pod "azuredisk-volume-tester-d6w2z": Phase="Pending", Reason="", readiness=false. Elapsed: 22.116432733s Jan 30 08:04:34.102: INFO: Pod "azuredisk-volume-tester-d6w2z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.118973697s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 08:04:34.102[0m Jan 30 08:04:34.102: INFO: Pod "azuredisk-volume-tester-d6w2z" satisfied condition "Succeeded or Failed" Jan 30 08:04:34.102: INFO: deleting Pod "azuredisk-2790"/"azuredisk-volume-tester-d6w2z" Jan 30 08:04:34.168: INFO: Pod azuredisk-volume-tester-d6w2z has the following logs: e2e-test [1mSTEP:[0m Deleting pod azuredisk-volume-tester-d6w2z in namespace azuredisk-2790 [38;5;243m01/30/23 08:04:34.169[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/30/23 08:04:34.3[0m [1mSTEP:[0m checking the PV [38;5;243m01/30/23 08:04:34.358[0m ... skipping 37 lines ... [1mSTEP:[0m creating volume in external rg azuredisk-csi-driver-test-d493fc61-a074-11ed-822b-967d0a096fd9 [38;5;243m01/30/23 08:05:17.893[0m [1mSTEP:[0m setting up the StorageClass [38;5;243m01/30/23 08:05:17.893[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/30/23 08:05:17.894[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:05:17.952[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:05:17.952[0m [1mSTEP:[0m deploying the pod [38;5;243m01/30/23 08:05:18.013[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 08:05:18.074[0m Jan 30 08:05:18.075: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-gnjwz" in namespace "azuredisk-5356" to be "Succeeded or Failed" Jan 30 08:05:18.132: INFO: Pod "azuredisk-volume-tester-gnjwz": Phase="Pending", Reason="", readiness=false. Elapsed: 57.685821ms Jan 30 08:05:20.190: INFO: Pod "azuredisk-volume-tester-gnjwz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114970874s Jan 30 08:05:22.193: INFO: Pod "azuredisk-volume-tester-gnjwz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118571781s Jan 30 08:05:24.191: INFO: Pod "azuredisk-volume-tester-gnjwz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116632045s Jan 30 08:05:26.191: INFO: Pod "azuredisk-volume-tester-gnjwz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116144918s Jan 30 08:05:28.191: INFO: Pod "azuredisk-volume-tester-gnjwz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.116693816s ... skipping 2 lines ... Jan 30 08:05:34.193: INFO: Pod "azuredisk-volume-tester-gnjwz": Phase="Pending", Reason="", readiness=false. Elapsed: 16.118721666s Jan 30 08:05:36.192: INFO: Pod "azuredisk-volume-tester-gnjwz": Phase="Pending", Reason="", readiness=false. Elapsed: 18.117438746s Jan 30 08:05:38.193: INFO: Pod "azuredisk-volume-tester-gnjwz": Phase="Pending", Reason="", readiness=false. Elapsed: 20.118434768s Jan 30 08:05:40.192: INFO: Pod "azuredisk-volume-tester-gnjwz": Phase="Pending", Reason="", readiness=false. Elapsed: 22.117409469s Jan 30 08:05:42.190: INFO: Pod "azuredisk-volume-tester-gnjwz": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.115820128s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 08:05:42.19[0m Jan 30 08:05:42.191: INFO: Pod "azuredisk-volume-tester-gnjwz" satisfied condition "Succeeded or Failed" Jan 30 08:05:42.191: INFO: deleting Pod "azuredisk-5356"/"azuredisk-volume-tester-gnjwz" Jan 30 08:05:42.252: INFO: Pod azuredisk-volume-tester-gnjwz has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-gnjwz in namespace azuredisk-5356 [38;5;243m01/30/23 08:05:42.252[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/30/23 08:05:42.374[0m [1mSTEP:[0m checking the PV [38;5;243m01/30/23 08:05:42.433[0m ... skipping 37 lines ... [1mSTEP:[0m creating volume in external rg azuredisk-csi-driver-test-d493fc61-a074-11ed-822b-967d0a096fd9 [38;5;243m01/30/23 08:05:17.893[0m [1mSTEP:[0m setting up the StorageClass [38;5;243m01/30/23 08:05:17.893[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/30/23 08:05:17.894[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:05:17.952[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:05:17.952[0m [1mSTEP:[0m deploying the pod [38;5;243m01/30/23 08:05:18.013[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 08:05:18.074[0m Jan 30 08:05:18.075: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-gnjwz" in namespace "azuredisk-5356" to be "Succeeded or Failed" Jan 30 08:05:18.132: INFO: Pod "azuredisk-volume-tester-gnjwz": Phase="Pending", Reason="", readiness=false. Elapsed: 57.685821ms Jan 30 08:05:20.190: INFO: Pod "azuredisk-volume-tester-gnjwz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114970874s Jan 30 08:05:22.193: INFO: Pod "azuredisk-volume-tester-gnjwz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118571781s Jan 30 08:05:24.191: INFO: Pod "azuredisk-volume-tester-gnjwz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116632045s Jan 30 08:05:26.191: INFO: Pod "azuredisk-volume-tester-gnjwz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116144918s Jan 30 08:05:28.191: INFO: Pod "azuredisk-volume-tester-gnjwz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.116693816s ... skipping 2 lines ... Jan 30 08:05:34.193: INFO: Pod "azuredisk-volume-tester-gnjwz": Phase="Pending", Reason="", readiness=false. Elapsed: 16.118721666s Jan 30 08:05:36.192: INFO: Pod "azuredisk-volume-tester-gnjwz": Phase="Pending", Reason="", readiness=false. Elapsed: 18.117438746s Jan 30 08:05:38.193: INFO: Pod "azuredisk-volume-tester-gnjwz": Phase="Pending", Reason="", readiness=false. Elapsed: 20.118434768s Jan 30 08:05:40.192: INFO: Pod "azuredisk-volume-tester-gnjwz": Phase="Pending", Reason="", readiness=false. Elapsed: 22.117409469s Jan 30 08:05:42.190: INFO: Pod "azuredisk-volume-tester-gnjwz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.115820128s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 08:05:42.19[0m Jan 30 08:05:42.191: INFO: Pod "azuredisk-volume-tester-gnjwz" satisfied condition "Succeeded or Failed" Jan 30 08:05:42.191: INFO: deleting Pod "azuredisk-5356"/"azuredisk-volume-tester-gnjwz" Jan 30 08:05:42.252: INFO: Pod azuredisk-volume-tester-gnjwz has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-gnjwz in namespace azuredisk-5356 [38;5;243m01/30/23 08:05:42.252[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/30/23 08:05:42.374[0m [1mSTEP:[0m checking the PV [38;5;243m01/30/23 08:05:42.433[0m ... skipping 44 lines ... 
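The "creating volume in external rg ..." cases place the managed disk in a resource group other than the cluster's node resource group. A minimal sketch, assuming the driver's resourceGroup StorageClass parameter; the group name is a placeholder rather than the generated azuredisk-csi-driver-test-* group seen in the log:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azuredisk-external-rg        # hypothetical name
provisioner: disk.csi.azure.com
parameters:
  skuName: StandardSSD_LRS           # assumed
  resourceGroup: my-external-rg      # placeholder; the resource group must already exist
reclaimPolicy: Delete
volumeBindingMode: Immediate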
[1mSTEP:[0m creating volume in external rg azuredisk-csi-driver-test-074ee5b0-a075-11ed-822b-967d0a096fd9 [38;5;243m01/30/23 08:06:41.668[0m [1mSTEP:[0m setting up the StorageClass [38;5;243m01/30/23 08:06:41.668[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/30/23 08:06:41.668[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:06:41.727[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:06:41.727[0m [1mSTEP:[0m deploying the pod [38;5;243m01/30/23 08:06:41.79[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 08:06:41.85[0m Jan 30 08:06:41.850: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-k2j9c" in namespace "azuredisk-5194" to be "Succeeded or Failed" Jan 30 08:06:41.911: INFO: Pod "azuredisk-volume-tester-k2j9c": Phase="Pending", Reason="", readiness=false. Elapsed: 61.005791ms Jan 30 08:06:43.970: INFO: Pod "azuredisk-volume-tester-k2j9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120085875s Jan 30 08:06:45.971: INFO: Pod "azuredisk-volume-tester-k2j9c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121178975s Jan 30 08:06:47.971: INFO: Pod "azuredisk-volume-tester-k2j9c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121250035s Jan 30 08:06:49.971: INFO: Pod "azuredisk-volume-tester-k2j9c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120912697s Jan 30 08:06:51.970: INFO: Pod "azuredisk-volume-tester-k2j9c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.119347077s ... skipping 10 lines ... Jan 30 08:07:13.970: INFO: Pod "azuredisk-volume-tester-k2j9c": Phase="Pending", Reason="", readiness=false. Elapsed: 32.119872427s Jan 30 08:07:15.973: INFO: Pod "azuredisk-volume-tester-k2j9c": Phase="Pending", Reason="", readiness=false. Elapsed: 34.122433458s Jan 30 08:07:17.970: INFO: Pod "azuredisk-volume-tester-k2j9c": Phase="Pending", Reason="", readiness=false. Elapsed: 36.11956496s Jan 30 08:07:19.969: INFO: Pod "azuredisk-volume-tester-k2j9c": Phase="Running", Reason="", readiness=true. Elapsed: 38.118497856s Jan 30 08:07:21.969: INFO: Pod "azuredisk-volume-tester-k2j9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.118344979s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 08:07:21.969[0m Jan 30 08:07:21.969: INFO: Pod "azuredisk-volume-tester-k2j9c" satisfied condition "Succeeded or Failed" Jan 30 08:07:21.969: INFO: deleting Pod "azuredisk-5194"/"azuredisk-volume-tester-k2j9c" Jan 30 08:07:22.064: INFO: Pod azuredisk-volume-tester-k2j9c has the following logs: hello world hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-k2j9c in namespace azuredisk-5194 [38;5;243m01/30/23 08:07:22.064[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/30/23 08:07:22.203[0m ... skipping 57 lines ... 
[1mSTEP:[0m creating volume in external rg azuredisk-csi-driver-test-074ee5b0-a075-11ed-822b-967d0a096fd9 [38;5;243m01/30/23 08:06:41.668[0m [1mSTEP:[0m setting up the StorageClass [38;5;243m01/30/23 08:06:41.668[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/30/23 08:06:41.668[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:06:41.727[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:06:41.727[0m [1mSTEP:[0m deploying the pod [38;5;243m01/30/23 08:06:41.79[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 08:06:41.85[0m Jan 30 08:06:41.850: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-k2j9c" in namespace "azuredisk-5194" to be "Succeeded or Failed" Jan 30 08:06:41.911: INFO: Pod "azuredisk-volume-tester-k2j9c": Phase="Pending", Reason="", readiness=false. Elapsed: 61.005791ms Jan 30 08:06:43.970: INFO: Pod "azuredisk-volume-tester-k2j9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120085875s Jan 30 08:06:45.971: INFO: Pod "azuredisk-volume-tester-k2j9c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121178975s Jan 30 08:06:47.971: INFO: Pod "azuredisk-volume-tester-k2j9c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121250035s Jan 30 08:06:49.971: INFO: Pod "azuredisk-volume-tester-k2j9c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120912697s Jan 30 08:06:51.970: INFO: Pod "azuredisk-volume-tester-k2j9c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.119347077s ... skipping 10 lines ... Jan 30 08:07:13.970: INFO: Pod "azuredisk-volume-tester-k2j9c": Phase="Pending", Reason="", readiness=false. Elapsed: 32.119872427s Jan 30 08:07:15.973: INFO: Pod "azuredisk-volume-tester-k2j9c": Phase="Pending", Reason="", readiness=false. Elapsed: 34.122433458s Jan 30 08:07:17.970: INFO: Pod "azuredisk-volume-tester-k2j9c": Phase="Pending", Reason="", readiness=false. Elapsed: 36.11956496s Jan 30 08:07:19.969: INFO: Pod "azuredisk-volume-tester-k2j9c": Phase="Running", Reason="", readiness=true. Elapsed: 38.118497856s Jan 30 08:07:21.969: INFO: Pod "azuredisk-volume-tester-k2j9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.118344979s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 08:07:21.969[0m Jan 30 08:07:21.969: INFO: Pod "azuredisk-volume-tester-k2j9c" satisfied condition "Succeeded or Failed" Jan 30 08:07:21.969: INFO: deleting Pod "azuredisk-5194"/"azuredisk-volume-tester-k2j9c" Jan 30 08:07:22.064: INFO: Pod azuredisk-volume-tester-k2j9c has the following logs: hello world hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-k2j9c in namespace azuredisk-5194 [38;5;243m01/30/23 08:07:22.064[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/30/23 08:07:22.203[0m ... skipping 47 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/30/23 08:08:47.379[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/30/23 08:08:47.379[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:08:47.438[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:08:47.439[0m [1mSTEP:[0m setting up the pod [38;5;243m01/30/23 08:08:47.499[0m [1mSTEP:[0m deploying the pod [38;5;243m01/30/23 08:08:47.499[0m [1mSTEP:[0m checking that the pod's command exits with an error [38;5;243m01/30/23 08:08:47.559[0m Jan 30 08:08:47.559: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-lb874" in namespace "azuredisk-1353" to be "Error status code" Jan 30 08:08:47.617: INFO: Pod "azuredisk-volume-tester-lb874": Phase="Pending", Reason="", readiness=false. 
Elapsed: 57.897038ms Jan 30 08:08:49.676: INFO: Pod "azuredisk-volume-tester-lb874": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117059529s Jan 30 08:08:51.675: INFO: Pod "azuredisk-volume-tester-lb874": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116132068s Jan 30 08:08:53.675: INFO: Pod "azuredisk-volume-tester-lb874": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11567153s Jan 30 08:08:55.676: INFO: Pod "azuredisk-volume-tester-lb874": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11657261s Jan 30 08:08:57.676: INFO: Pod "azuredisk-volume-tester-lb874": Phase="Pending", Reason="", readiness=false. Elapsed: 10.116421084s ... skipping 24 lines ... Jan 30 08:09:47.677: INFO: Pod "azuredisk-volume-tester-lb874": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.118070628s Jan 30 08:09:49.677: INFO: Pod "azuredisk-volume-tester-lb874": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.117891164s Jan 30 08:09:51.677: INFO: Pod "azuredisk-volume-tester-lb874": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.11731736s Jan 30 08:09:53.676: INFO: Pod "azuredisk-volume-tester-lb874": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.117269228s Jan 30 08:09:55.676: INFO: Pod "azuredisk-volume-tester-lb874": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.116470557s Jan 30 08:09:57.675: INFO: Pod "azuredisk-volume-tester-lb874": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.116119526s Jan 30 08:09:59.676: INFO: Pod "azuredisk-volume-tester-lb874": Phase="Failed", Reason="", readiness=false. Elapsed: 1m12.116639348s [1mSTEP:[0m Saw pod failure [38;5;243m01/30/23 08:09:59.676[0m Jan 30 08:09:59.676: INFO: Pod "azuredisk-volume-tester-lb874" satisfied condition "Error status code" [1mSTEP:[0m checking that pod logs contain expected message [38;5;243m01/30/23 08:09:59.676[0m Jan 30 08:09:59.771: INFO: deleting Pod "azuredisk-1353"/"azuredisk-volume-tester-lb874" Jan 30 08:09:59.833: INFO: Pod azuredisk-volume-tester-lb874 has the following logs: touch: /mnt/test-1/data: Read-only file system [1mSTEP:[0m Deleting pod azuredisk-volume-tester-lb874 in namespace azuredisk-1353 [38;5;243m01/30/23 08:09:59.833[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/30/23 08:09:59.97[0m ... skipping 34 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/30/23 08:08:47.379[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/30/23 08:08:47.379[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:08:47.438[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:08:47.439[0m [1mSTEP:[0m setting up the pod [38;5;243m01/30/23 08:08:47.499[0m [1mSTEP:[0m deploying the pod [38;5;243m01/30/23 08:08:47.499[0m [1mSTEP:[0m checking that the pod's command exits with an error [38;5;243m01/30/23 08:08:47.559[0m Jan 30 08:08:47.559: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-lb874" in namespace "azuredisk-1353" to be "Error status code" Jan 30 08:08:47.617: INFO: Pod "azuredisk-volume-tester-lb874": Phase="Pending", Reason="", readiness=false. Elapsed: 57.897038ms Jan 30 08:08:49.676: INFO: Pod "azuredisk-volume-tester-lb874": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117059529s Jan 30 08:08:51.675: INFO: Pod "azuredisk-volume-tester-lb874": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116132068s Jan 30 08:08:53.675: INFO: Pod "azuredisk-volume-tester-lb874": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.11567153s Jan 30 08:08:55.676: INFO: Pod "azuredisk-volume-tester-lb874": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11657261s Jan 30 08:08:57.676: INFO: Pod "azuredisk-volume-tester-lb874": Phase="Pending", Reason="", readiness=false. Elapsed: 10.116421084s ... skipping 24 lines ... Jan 30 08:09:47.677: INFO: Pod "azuredisk-volume-tester-lb874": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.118070628s Jan 30 08:09:49.677: INFO: Pod "azuredisk-volume-tester-lb874": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.117891164s Jan 30 08:09:51.677: INFO: Pod "azuredisk-volume-tester-lb874": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.11731736s Jan 30 08:09:53.676: INFO: Pod "azuredisk-volume-tester-lb874": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.117269228s Jan 30 08:09:55.676: INFO: Pod "azuredisk-volume-tester-lb874": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.116470557s Jan 30 08:09:57.675: INFO: Pod "azuredisk-volume-tester-lb874": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.116119526s Jan 30 08:09:59.676: INFO: Pod "azuredisk-volume-tester-lb874": Phase="Failed", Reason="", readiness=false. Elapsed: 1m12.116639348s [1mSTEP:[0m Saw pod failure [38;5;243m01/30/23 08:09:59.676[0m Jan 30 08:09:59.676: INFO: Pod "azuredisk-volume-tester-lb874" satisfied condition "Error status code" [1mSTEP:[0m checking that pod logs contain expected message [38;5;243m01/30/23 08:09:59.676[0m Jan 30 08:09:59.771: INFO: deleting Pod "azuredisk-1353"/"azuredisk-volume-tester-lb874" Jan 30 08:09:59.833: INFO: Pod azuredisk-volume-tester-lb874 has the following logs: touch: /mnt/test-1/data: Read-only file system [1mSTEP:[0m Deleting pod azuredisk-volume-tester-lb874 in namespace azuredisk-1353 [38;5;243m01/30/23 08:09:59.833[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/30/23 08:09:59.97[0m ... skipping 665 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/30/23 08:18:25.175[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/30/23 08:18:25.175[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:18:25.234[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:18:25.234[0m [1mSTEP:[0m setting up the pod [38;5;243m01/30/23 08:18:25.301[0m [1mSTEP:[0m deploying the pod [38;5;243m01/30/23 08:18:25.305[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 08:18:25.375[0m Jan 30 08:18:25.376: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-g5dhj" in namespace "azuredisk-59" to be "Succeeded or Failed" Jan 30 08:18:25.432: INFO: Pod "azuredisk-volume-tester-g5dhj": Phase="Pending", Reason="", readiness=false. Elapsed: 56.707413ms Jan 30 08:18:27.504: INFO: Pod "azuredisk-volume-tester-g5dhj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12872636s Jan 30 08:18:29.492: INFO: Pod "azuredisk-volume-tester-g5dhj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116795972s Jan 30 08:18:31.490: INFO: Pod "azuredisk-volume-tester-g5dhj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114812133s Jan 30 08:18:33.491: INFO: Pod "azuredisk-volume-tester-g5dhj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.115659537s Jan 30 08:18:35.491: INFO: Pod "azuredisk-volume-tester-g5dhj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.115541509s ... skipping 2 lines ... Jan 30 08:18:41.491: INFO: Pod "azuredisk-volume-tester-g5dhj": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.115220137s Jan 30 08:18:43.490: INFO: Pod "azuredisk-volume-tester-g5dhj": Phase="Pending", Reason="", readiness=false. Elapsed: 18.114302601s Jan 30 08:18:45.491: INFO: Pod "azuredisk-volume-tester-g5dhj": Phase="Pending", Reason="", readiness=false. Elapsed: 20.115840858s Jan 30 08:18:47.490: INFO: Pod "azuredisk-volume-tester-g5dhj": Phase="Pending", Reason="", readiness=false. Elapsed: 22.11487147s Jan 30 08:18:49.491: INFO: Pod "azuredisk-volume-tester-g5dhj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.115085489s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 08:18:49.491[0m Jan 30 08:18:49.491: INFO: Pod "azuredisk-volume-tester-g5dhj" satisfied condition "Succeeded or Failed" [1mSTEP:[0m sleep 5s and then clone volume [38;5;243m01/30/23 08:18:49.491[0m [1mSTEP:[0m cloning existing volume [38;5;243m01/30/23 08:18:54.491[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:18:54.607[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:18:54.607[0m [1mSTEP:[0m setting up the pod [38;5;243m01/30/23 08:18:54.669[0m [1mSTEP:[0m deploying a second pod with cloned volume [38;5;243m01/30/23 08:18:54.669[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 08:18:54.728[0m Jan 30 08:18:54.728: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-n879d" in namespace "azuredisk-59" to be "Succeeded or Failed" Jan 30 08:18:54.786: INFO: Pod "azuredisk-volume-tester-n879d": Phase="Pending", Reason="", readiness=false. Elapsed: 57.21285ms Jan 30 08:18:56.845: INFO: Pod "azuredisk-volume-tester-n879d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116105577s Jan 30 08:18:58.844: INFO: Pod "azuredisk-volume-tester-n879d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115104923s Jan 30 08:19:00.845: INFO: Pod "azuredisk-volume-tester-n879d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11630603s Jan 30 08:19:02.846: INFO: Pod "azuredisk-volume-tester-n879d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117115005s Jan 30 08:19:04.843: INFO: Pod "azuredisk-volume-tester-n879d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.114284056s ... skipping 16 lines ... Jan 30 08:19:38.844: INFO: Pod "azuredisk-volume-tester-n879d": Phase="Pending", Reason="", readiness=false. Elapsed: 44.115289371s Jan 30 08:19:40.843: INFO: Pod "azuredisk-volume-tester-n879d": Phase="Pending", Reason="", readiness=false. Elapsed: 46.114996678s Jan 30 08:19:42.845: INFO: Pod "azuredisk-volume-tester-n879d": Phase="Pending", Reason="", readiness=false. Elapsed: 48.116934952s Jan 30 08:19:44.843: INFO: Pod "azuredisk-volume-tester-n879d": Phase="Running", Reason="", readiness=true. Elapsed: 50.11499237s Jan 30 08:19:46.844: INFO: Pod "azuredisk-volume-tester-n879d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 52.116063912s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 08:19:46.845[0m Jan 30 08:19:46.845: INFO: Pod "azuredisk-volume-tester-n879d" satisfied condition "Succeeded or Failed" Jan 30 08:19:46.845: INFO: deleting Pod "azuredisk-59"/"azuredisk-volume-tester-n879d" Jan 30 08:19:46.935: INFO: Pod azuredisk-volume-tester-n879d has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-n879d in namespace azuredisk-59 [38;5;243m01/30/23 08:19:46.936[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/30/23 08:19:47.072[0m [1mSTEP:[0m checking the PV [38;5;243m01/30/23 08:19:47.128[0m ... skipping 47 lines ... 
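The repeated `Phase=..., Elapsed: ...` records above come from the suite polling the tester pod roughly every two seconds until it reaches Succeeded or Failed (or the 15m0s timeout expires); the stack trace later in this log attributes that wait to `(*TestPod).WaitForSuccess` in test/e2e/testsuites/testsuites.go. Below is a minimal client-go sketch of such a loop; the package and function names here are hypothetical and this is not the driver's actual helper.

```go
// Hypothetical sketch of the wait loop behind the "Phase=..., Elapsed: ..."
// lines above; not the driver's actual WaitForSuccess implementation.
package e2eutil

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodTerminal polls the pod about every two seconds until it is
// Succeeded (return nil) or Failed (return an error), or the timeout expires.
func waitForPodTerminal(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	start := time.Now()
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		// Mirrors the per-iteration log records seen in the output above.
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %v\n", name, pod.Status.Phase, time.Since(start))
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %q failed with status: %+v", name, pod.Status)
		}
		return false, nil
	})
}
```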
[1mSTEP:[0m setting up the StorageClass [38;5;243m01/30/23 08:18:25.175[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/30/23 08:18:25.175[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:18:25.234[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:18:25.234[0m [1mSTEP:[0m setting up the pod [38;5;243m01/30/23 08:18:25.301[0m [1mSTEP:[0m deploying the pod [38;5;243m01/30/23 08:18:25.305[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 08:18:25.375[0m Jan 30 08:18:25.376: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-g5dhj" in namespace "azuredisk-59" to be "Succeeded or Failed" Jan 30 08:18:25.432: INFO: Pod "azuredisk-volume-tester-g5dhj": Phase="Pending", Reason="", readiness=false. Elapsed: 56.707413ms Jan 30 08:18:27.504: INFO: Pod "azuredisk-volume-tester-g5dhj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12872636s Jan 30 08:18:29.492: INFO: Pod "azuredisk-volume-tester-g5dhj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116795972s Jan 30 08:18:31.490: INFO: Pod "azuredisk-volume-tester-g5dhj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114812133s Jan 30 08:18:33.491: INFO: Pod "azuredisk-volume-tester-g5dhj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.115659537s Jan 30 08:18:35.491: INFO: Pod "azuredisk-volume-tester-g5dhj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.115541509s ... skipping 2 lines ... Jan 30 08:18:41.491: INFO: Pod "azuredisk-volume-tester-g5dhj": Phase="Pending", Reason="", readiness=false. Elapsed: 16.115220137s Jan 30 08:18:43.490: INFO: Pod "azuredisk-volume-tester-g5dhj": Phase="Pending", Reason="", readiness=false. Elapsed: 18.114302601s Jan 30 08:18:45.491: INFO: Pod "azuredisk-volume-tester-g5dhj": Phase="Pending", Reason="", readiness=false. Elapsed: 20.115840858s Jan 30 08:18:47.490: INFO: Pod "azuredisk-volume-tester-g5dhj": Phase="Pending", Reason="", readiness=false. Elapsed: 22.11487147s Jan 30 08:18:49.491: INFO: Pod "azuredisk-volume-tester-g5dhj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.115085489s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 08:18:49.491[0m Jan 30 08:18:49.491: INFO: Pod "azuredisk-volume-tester-g5dhj" satisfied condition "Succeeded or Failed" [1mSTEP:[0m sleep 5s and then clone volume [38;5;243m01/30/23 08:18:49.491[0m [1mSTEP:[0m cloning existing volume [38;5;243m01/30/23 08:18:54.491[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:18:54.607[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:18:54.607[0m [1mSTEP:[0m setting up the pod [38;5;243m01/30/23 08:18:54.669[0m [1mSTEP:[0m deploying a second pod with cloned volume [38;5;243m01/30/23 08:18:54.669[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 08:18:54.728[0m Jan 30 08:18:54.728: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-n879d" in namespace "azuredisk-59" to be "Succeeded or Failed" Jan 30 08:18:54.786: INFO: Pod "azuredisk-volume-tester-n879d": Phase="Pending", Reason="", readiness=false. Elapsed: 57.21285ms Jan 30 08:18:56.845: INFO: Pod "azuredisk-volume-tester-n879d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116105577s Jan 30 08:18:58.844: INFO: Pod "azuredisk-volume-tester-n879d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115104923s Jan 30 08:19:00.845: INFO: Pod "azuredisk-volume-tester-n879d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.11630603s Jan 30 08:19:02.846: INFO: Pod "azuredisk-volume-tester-n879d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117115005s Jan 30 08:19:04.843: INFO: Pod "azuredisk-volume-tester-n879d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.114284056s ... skipping 16 lines ... Jan 30 08:19:38.844: INFO: Pod "azuredisk-volume-tester-n879d": Phase="Pending", Reason="", readiness=false. Elapsed: 44.115289371s Jan 30 08:19:40.843: INFO: Pod "azuredisk-volume-tester-n879d": Phase="Pending", Reason="", readiness=false. Elapsed: 46.114996678s Jan 30 08:19:42.845: INFO: Pod "azuredisk-volume-tester-n879d": Phase="Pending", Reason="", readiness=false. Elapsed: 48.116934952s Jan 30 08:19:44.843: INFO: Pod "azuredisk-volume-tester-n879d": Phase="Running", Reason="", readiness=true. Elapsed: 50.11499237s Jan 30 08:19:46.844: INFO: Pod "azuredisk-volume-tester-n879d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 52.116063912s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 08:19:46.845[0m Jan 30 08:19:46.845: INFO: Pod "azuredisk-volume-tester-n879d" satisfied condition "Succeeded or Failed" Jan 30 08:19:46.845: INFO: deleting Pod "azuredisk-59"/"azuredisk-volume-tester-n879d" Jan 30 08:19:46.935: INFO: Pod azuredisk-volume-tester-n879d has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-n879d in namespace azuredisk-59 [38;5;243m01/30/23 08:19:46.936[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/30/23 08:19:47.072[0m [1mSTEP:[0m checking the PV [38;5;243m01/30/23 08:19:47.128[0m ... skipping 46 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/30/23 08:20:39.391[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/30/23 08:20:39.391[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:20:39.45[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:20:39.451[0m [1mSTEP:[0m setting up the pod [38;5;243m01/30/23 08:20:39.511[0m [1mSTEP:[0m deploying the pod [38;5;243m01/30/23 08:20:39.511[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 08:20:39.571[0m Jan 30 08:20:39.571: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-f4cnn" in namespace "azuredisk-2546" to be "Succeeded or Failed" Jan 30 08:20:39.628: INFO: Pod "azuredisk-volume-tester-f4cnn": Phase="Pending", Reason="", readiness=false. Elapsed: 57.145754ms Jan 30 08:20:41.687: INFO: Pod "azuredisk-volume-tester-f4cnn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11600365s Jan 30 08:20:43.687: INFO: Pod "azuredisk-volume-tester-f4cnn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116004471s Jan 30 08:20:45.686: INFO: Pod "azuredisk-volume-tester-f4cnn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114895387s Jan 30 08:20:47.686: INFO: Pod "azuredisk-volume-tester-f4cnn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.114863313s Jan 30 08:20:49.686: INFO: Pod "azuredisk-volume-tester-f4cnn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.114622094s ... skipping 2 lines ... Jan 30 08:20:55.688: INFO: Pod "azuredisk-volume-tester-f4cnn": Phase="Pending", Reason="", readiness=false. Elapsed: 16.117232581s Jan 30 08:20:57.686: INFO: Pod "azuredisk-volume-tester-f4cnn": Phase="Pending", Reason="", readiness=false. Elapsed: 18.115126992s Jan 30 08:20:59.686: INFO: Pod "azuredisk-volume-tester-f4cnn": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.115394615s Jan 30 08:21:01.688: INFO: Pod "azuredisk-volume-tester-f4cnn": Phase="Pending", Reason="", readiness=false. Elapsed: 22.116756989s Jan 30 08:21:03.686: INFO: Pod "azuredisk-volume-tester-f4cnn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.114927828s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 08:21:03.686[0m Jan 30 08:21:03.686: INFO: Pod "azuredisk-volume-tester-f4cnn" satisfied condition "Succeeded or Failed" [1mSTEP:[0m sleep 5s and then clone volume [38;5;243m01/30/23 08:21:03.686[0m [1mSTEP:[0m cloning existing volume [38;5;243m01/30/23 08:21:08.686[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:21:08.801[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:21:08.801[0m [1mSTEP:[0m setting up the pod [38;5;243m01/30/23 08:21:08.87[0m [1mSTEP:[0m deploying a second pod with cloned volume [38;5;243m01/30/23 08:21:08.87[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 08:21:08.929[0m Jan 30 08:21:08.929: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-nlgrr" in namespace "azuredisk-2546" to be "Succeeded or Failed" Jan 30 08:21:08.986: INFO: Pod "azuredisk-volume-tester-nlgrr": Phase="Pending", Reason="", readiness=false. Elapsed: 56.871845ms Jan 30 08:21:11.045: INFO: Pod "azuredisk-volume-tester-nlgrr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115235391s Jan 30 08:21:13.046: INFO: Pod "azuredisk-volume-tester-nlgrr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116908687s Jan 30 08:21:15.046: INFO: Pod "azuredisk-volume-tester-nlgrr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116631679s Jan 30 08:21:17.051: INFO: Pod "azuredisk-volume-tester-nlgrr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.121723679s Jan 30 08:21:19.044: INFO: Pod "azuredisk-volume-tester-nlgrr": Phase="Pending", Reason="", readiness=false. Elapsed: 10.114506481s ... skipping 10 lines ... Jan 30 08:21:41.045: INFO: Pod "azuredisk-volume-tester-nlgrr": Phase="Pending", Reason="", readiness=false. Elapsed: 32.116202759s Jan 30 08:21:43.046: INFO: Pod "azuredisk-volume-tester-nlgrr": Phase="Pending", Reason="", readiness=false. Elapsed: 34.116463471s Jan 30 08:21:45.045: INFO: Pod "azuredisk-volume-tester-nlgrr": Phase="Pending", Reason="", readiness=false. Elapsed: 36.115995341s Jan 30 08:21:47.051: INFO: Pod "azuredisk-volume-tester-nlgrr": Phase="Pending", Reason="", readiness=false. Elapsed: 38.12164959s Jan 30 08:21:49.046: INFO: Pod "azuredisk-volume-tester-nlgrr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.116639344s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 08:21:49.046[0m Jan 30 08:21:49.046: INFO: Pod "azuredisk-volume-tester-nlgrr" satisfied condition "Succeeded or Failed" Jan 30 08:21:49.046: INFO: deleting Pod "azuredisk-2546"/"azuredisk-volume-tester-nlgrr" Jan 30 08:21:49.116: INFO: Pod azuredisk-volume-tester-nlgrr has the following logs: 20.0G [1mSTEP:[0m Deleting pod azuredisk-volume-tester-nlgrr in namespace azuredisk-2546 [38;5;243m01/30/23 08:21:49.116[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/30/23 08:21:49.238[0m [1mSTEP:[0m checking the PV [38;5;243m01/30/23 08:21:49.295[0m ... skipping 47 lines ... 
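The "cloning existing volume" runs above create a second PVC whose dataSource points at the original PVC, and the second run restores the clone at a larger size (the tester pod prints `20.0G`). A hedged sketch of constructing such a clone PVC with the core Kubernetes API follows; the function name, object names, and the size passed in are illustrative assumptions, not the suite's exact objects.

```go
// Illustrative construction of a clone PVC; names and sizes are assumptions.
package e2eutil

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildClonePVC returns a PVC that asks the CSI driver to clone sourcePVC,
// optionally at a larger size (e.g. "20Gi" to get the expanded clone above).
func buildClonePVC(namespace, sourcePVC, storageClass, size string) *corev1.PersistentVolumeClaim {
	return &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "pvc-cloned-",
			Namespace:    namespace,
		},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &storageClass,
			// A dataSource of kind PersistentVolumeClaim triggers CSI volume cloning.
			DataSource: &corev1.TypedLocalObjectReference{
				Kind: "PersistentVolumeClaim",
				Name: sourcePVC,
			},
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse(size),
				},
			},
		},
	}
}
```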
[1mSTEP:[0m setting up the StorageClass [38;5;243m01/30/23 08:20:39.391[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/30/23 08:20:39.391[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:20:39.45[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:20:39.451[0m [1mSTEP:[0m setting up the pod [38;5;243m01/30/23 08:20:39.511[0m [1mSTEP:[0m deploying the pod [38;5;243m01/30/23 08:20:39.511[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 08:20:39.571[0m Jan 30 08:20:39.571: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-f4cnn" in namespace "azuredisk-2546" to be "Succeeded or Failed" Jan 30 08:20:39.628: INFO: Pod "azuredisk-volume-tester-f4cnn": Phase="Pending", Reason="", readiness=false. Elapsed: 57.145754ms Jan 30 08:20:41.687: INFO: Pod "azuredisk-volume-tester-f4cnn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11600365s Jan 30 08:20:43.687: INFO: Pod "azuredisk-volume-tester-f4cnn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116004471s Jan 30 08:20:45.686: INFO: Pod "azuredisk-volume-tester-f4cnn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114895387s Jan 30 08:20:47.686: INFO: Pod "azuredisk-volume-tester-f4cnn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.114863313s Jan 30 08:20:49.686: INFO: Pod "azuredisk-volume-tester-f4cnn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.114622094s ... skipping 2 lines ... Jan 30 08:20:55.688: INFO: Pod "azuredisk-volume-tester-f4cnn": Phase="Pending", Reason="", readiness=false. Elapsed: 16.117232581s Jan 30 08:20:57.686: INFO: Pod "azuredisk-volume-tester-f4cnn": Phase="Pending", Reason="", readiness=false. Elapsed: 18.115126992s Jan 30 08:20:59.686: INFO: Pod "azuredisk-volume-tester-f4cnn": Phase="Pending", Reason="", readiness=false. Elapsed: 20.115394615s Jan 30 08:21:01.688: INFO: Pod "azuredisk-volume-tester-f4cnn": Phase="Pending", Reason="", readiness=false. Elapsed: 22.116756989s Jan 30 08:21:03.686: INFO: Pod "azuredisk-volume-tester-f4cnn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.114927828s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 08:21:03.686[0m Jan 30 08:21:03.686: INFO: Pod "azuredisk-volume-tester-f4cnn" satisfied condition "Succeeded or Failed" [1mSTEP:[0m sleep 5s and then clone volume [38;5;243m01/30/23 08:21:03.686[0m [1mSTEP:[0m cloning existing volume [38;5;243m01/30/23 08:21:08.686[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:21:08.801[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:21:08.801[0m [1mSTEP:[0m setting up the pod [38;5;243m01/30/23 08:21:08.87[0m [1mSTEP:[0m deploying a second pod with cloned volume [38;5;243m01/30/23 08:21:08.87[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 08:21:08.929[0m Jan 30 08:21:08.929: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-nlgrr" in namespace "azuredisk-2546" to be "Succeeded or Failed" Jan 30 08:21:08.986: INFO: Pod "azuredisk-volume-tester-nlgrr": Phase="Pending", Reason="", readiness=false. Elapsed: 56.871845ms Jan 30 08:21:11.045: INFO: Pod "azuredisk-volume-tester-nlgrr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115235391s Jan 30 08:21:13.046: INFO: Pod "azuredisk-volume-tester-nlgrr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116908687s Jan 30 08:21:15.046: INFO: Pod "azuredisk-volume-tester-nlgrr": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.116631679s Jan 30 08:21:17.051: INFO: Pod "azuredisk-volume-tester-nlgrr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.121723679s Jan 30 08:21:19.044: INFO: Pod "azuredisk-volume-tester-nlgrr": Phase="Pending", Reason="", readiness=false. Elapsed: 10.114506481s ... skipping 10 lines ... Jan 30 08:21:41.045: INFO: Pod "azuredisk-volume-tester-nlgrr": Phase="Pending", Reason="", readiness=false. Elapsed: 32.116202759s Jan 30 08:21:43.046: INFO: Pod "azuredisk-volume-tester-nlgrr": Phase="Pending", Reason="", readiness=false. Elapsed: 34.116463471s Jan 30 08:21:45.045: INFO: Pod "azuredisk-volume-tester-nlgrr": Phase="Pending", Reason="", readiness=false. Elapsed: 36.115995341s Jan 30 08:21:47.051: INFO: Pod "azuredisk-volume-tester-nlgrr": Phase="Pending", Reason="", readiness=false. Elapsed: 38.12164959s Jan 30 08:21:49.046: INFO: Pod "azuredisk-volume-tester-nlgrr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.116639344s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 08:21:49.046[0m Jan 30 08:21:49.046: INFO: Pod "azuredisk-volume-tester-nlgrr" satisfied condition "Succeeded or Failed" Jan 30 08:21:49.046: INFO: deleting Pod "azuredisk-2546"/"azuredisk-volume-tester-nlgrr" Jan 30 08:21:49.116: INFO: Pod azuredisk-volume-tester-nlgrr has the following logs: 20.0G [1mSTEP:[0m Deleting pod azuredisk-volume-tester-nlgrr in namespace azuredisk-2546 [38;5;243m01/30/23 08:21:49.116[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/30/23 08:21:49.238[0m [1mSTEP:[0m checking the PV [38;5;243m01/30/23 08:21:49.295[0m ... skipping 56 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/30/23 08:22:41.769[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/30/23 08:22:41.769[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:22:41.827[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:22:41.827[0m [1mSTEP:[0m setting up the pod [38;5;243m01/30/23 08:22:41.886[0m [1mSTEP:[0m deploying the pod [38;5;243m01/30/23 08:22:41.887[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 08:22:41.949[0m Jan 30 08:22:41.949: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-tjxkj" in namespace "azuredisk-1598" to be "Succeeded or Failed" Jan 30 08:22:42.012: INFO: Pod "azuredisk-volume-tester-tjxkj": Phase="Pending", Reason="", readiness=false. Elapsed: 62.579655ms Jan 30 08:22:44.069: INFO: Pod "azuredisk-volume-tester-tjxkj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119969063s Jan 30 08:22:46.070: INFO: Pod "azuredisk-volume-tester-tjxkj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120589805s Jan 30 08:22:48.069: INFO: Pod "azuredisk-volume-tester-tjxkj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120010844s Jan 30 08:22:50.070: INFO: Pod "azuredisk-volume-tester-tjxkj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.121037565s Jan 30 08:22:52.076: INFO: Pod "azuredisk-volume-tester-tjxkj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.126346833s ... skipping 9 lines ... Jan 30 08:23:12.070: INFO: Pod "azuredisk-volume-tester-tjxkj": Phase="Pending", Reason="", readiness=false. Elapsed: 30.120665535s Jan 30 08:23:14.073: INFO: Pod "azuredisk-volume-tester-tjxkj": Phase="Pending", Reason="", readiness=false. Elapsed: 32.123985778s Jan 30 08:23:16.070: INFO: Pod "azuredisk-volume-tester-tjxkj": Phase="Pending", Reason="", readiness=false. 
Elapsed: 34.12090046s Jan 30 08:23:18.071: INFO: Pod "azuredisk-volume-tester-tjxkj": Phase="Pending", Reason="", readiness=false. Elapsed: 36.121645381s Jan 30 08:23:20.071: INFO: Pod "azuredisk-volume-tester-tjxkj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.121355586s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 08:23:20.071[0m Jan 30 08:23:20.071: INFO: Pod "azuredisk-volume-tester-tjxkj" satisfied condition "Succeeded or Failed" Jan 30 08:23:20.071: INFO: deleting Pod "azuredisk-1598"/"azuredisk-volume-tester-tjxkj" Jan 30 08:23:20.131: INFO: Pod azuredisk-volume-tester-tjxkj has the following logs: hello world hello world hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-tjxkj in namespace azuredisk-1598 [38;5;243m01/30/23 08:23:20.131[0m ... skipping 75 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/30/23 08:22:41.769[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/30/23 08:22:41.769[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:22:41.827[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:22:41.827[0m [1mSTEP:[0m setting up the pod [38;5;243m01/30/23 08:22:41.886[0m [1mSTEP:[0m deploying the pod [38;5;243m01/30/23 08:22:41.887[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 08:22:41.949[0m Jan 30 08:22:41.949: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-tjxkj" in namespace "azuredisk-1598" to be "Succeeded or Failed" Jan 30 08:22:42.012: INFO: Pod "azuredisk-volume-tester-tjxkj": Phase="Pending", Reason="", readiness=false. Elapsed: 62.579655ms Jan 30 08:22:44.069: INFO: Pod "azuredisk-volume-tester-tjxkj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119969063s Jan 30 08:22:46.070: INFO: Pod "azuredisk-volume-tester-tjxkj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120589805s Jan 30 08:22:48.069: INFO: Pod "azuredisk-volume-tester-tjxkj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120010844s Jan 30 08:22:50.070: INFO: Pod "azuredisk-volume-tester-tjxkj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.121037565s Jan 30 08:22:52.076: INFO: Pod "azuredisk-volume-tester-tjxkj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.126346833s ... skipping 9 lines ... Jan 30 08:23:12.070: INFO: Pod "azuredisk-volume-tester-tjxkj": Phase="Pending", Reason="", readiness=false. Elapsed: 30.120665535s Jan 30 08:23:14.073: INFO: Pod "azuredisk-volume-tester-tjxkj": Phase="Pending", Reason="", readiness=false. Elapsed: 32.123985778s Jan 30 08:23:16.070: INFO: Pod "azuredisk-volume-tester-tjxkj": Phase="Pending", Reason="", readiness=false. Elapsed: 34.12090046s Jan 30 08:23:18.071: INFO: Pod "azuredisk-volume-tester-tjxkj": Phase="Pending", Reason="", readiness=false. Elapsed: 36.121645381s Jan 30 08:23:20.071: INFO: Pod "azuredisk-volume-tester-tjxkj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.121355586s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 08:23:20.071[0m Jan 30 08:23:20.071: INFO: Pod "azuredisk-volume-tester-tjxkj" satisfied condition "Succeeded or Failed" Jan 30 08:23:20.071: INFO: deleting Pod "azuredisk-1598"/"azuredisk-volume-tester-tjxkj" Jan 30 08:23:20.131: INFO: Pod azuredisk-volume-tester-tjxkj has the following logs: hello world hello world hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-tjxkj in namespace azuredisk-1598 [38;5;243m01/30/23 08:23:20.131[0m ... skipping 69 lines ... 
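Each of the tests above deploys a short-lived busybox tester pod that mounts the provisioned volume at /mnt/test-1 and writes and reads back "hello world" (the read-only case fails on `touch /mnt/test-1/data`). A rough sketch of such a pod spec is below, assuming the busybox image that appears in the failure dump further down; the exact command and metadata used by the suite may differ.

```go
// Rough sketch of a volume tester pod; command and metadata are assumptions.
package e2eutil

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildVolumeTesterPod mounts the given PVC at /mnt/test-1 and writes a
// "hello world" marker, the kind of output seen in the pod logs above.
func buildVolumeTesterPod(namespace, pvcName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "azuredisk-volume-tester-",
			Namespace:    namespace,
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "volume-tester",
				Image:   "registry.k8s.io/e2e-test-images/busybox:1.29-4",
				Command: []string{"/bin/sh", "-c", "echo 'hello world' >> /mnt/test-1/data && cat /mnt/test-1/data"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume-1",
					MountPath: "/mnt/test-1",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume-1",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
						ClaimName: pvcName,
					},
				},
			}},
		},
	}
}
```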
[1mSTEP:[0m setting up the StorageClass [38;5;243m01/30/23 08:24:53.431[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/30/23 08:24:53.431[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:24:53.489[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:24:53.489[0m [1mSTEP:[0m setting up the pod [38;5;243m01/30/23 08:24:53.548[0m [1mSTEP:[0m deploying the pod [38;5;243m01/30/23 08:24:53.548[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 08:24:53.607[0m Jan 30 08:24:53.607: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-c95mw" in namespace "azuredisk-3410" to be "Succeeded or Failed" Jan 30 08:24:53.665: INFO: Pod "azuredisk-volume-tester-c95mw": Phase="Pending", Reason="", readiness=false. Elapsed: 57.16365ms Jan 30 08:24:55.723: INFO: Pod "azuredisk-volume-tester-c95mw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115683124s Jan 30 08:24:57.725: INFO: Pod "azuredisk-volume-tester-c95mw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117958375s Jan 30 08:24:59.725: INFO: Pod "azuredisk-volume-tester-c95mw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117146839s Jan 30 08:25:01.727: INFO: Pod "azuredisk-volume-tester-c95mw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.119922578s Jan 30 08:25:03.723: INFO: Pod "azuredisk-volume-tester-c95mw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.115431294s ... skipping 10 lines ... Jan 30 08:25:25.723: INFO: Pod "azuredisk-volume-tester-c95mw": Phase="Pending", Reason="", readiness=false. Elapsed: 32.115684555s Jan 30 08:25:27.723: INFO: Pod "azuredisk-volume-tester-c95mw": Phase="Pending", Reason="", readiness=false. Elapsed: 34.115274128s Jan 30 08:25:29.724: INFO: Pod "azuredisk-volume-tester-c95mw": Phase="Pending", Reason="", readiness=false. Elapsed: 36.116226436s Jan 30 08:25:31.723: INFO: Pod "azuredisk-volume-tester-c95mw": Phase="Pending", Reason="", readiness=false. Elapsed: 38.115317764s Jan 30 08:25:33.726: INFO: Pod "azuredisk-volume-tester-c95mw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.118307703s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 08:25:33.726[0m Jan 30 08:25:33.726: INFO: Pod "azuredisk-volume-tester-c95mw" satisfied condition "Succeeded or Failed" Jan 30 08:25:33.726: INFO: deleting Pod "azuredisk-3410"/"azuredisk-volume-tester-c95mw" Jan 30 08:25:33.826: INFO: Pod azuredisk-volume-tester-c95mw has the following logs: 100+0 records in 100+0 records out 104857600 bytes (100.0MB) copied, 0.072433 seconds, 1.3GB/s hello world ... skipping 59 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/30/23 08:24:53.431[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/30/23 08:24:53.431[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:24:53.489[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:24:53.489[0m [1mSTEP:[0m setting up the pod [38;5;243m01/30/23 08:24:53.548[0m [1mSTEP:[0m deploying the pod [38;5;243m01/30/23 08:24:53.548[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 08:24:53.607[0m Jan 30 08:24:53.607: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-c95mw" in namespace "azuredisk-3410" to be "Succeeded or Failed" Jan 30 08:24:53.665: INFO: Pod "azuredisk-volume-tester-c95mw": Phase="Pending", Reason="", readiness=false. Elapsed: 57.16365ms Jan 30 08:24:55.723: INFO: Pod "azuredisk-volume-tester-c95mw": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.115683124s Jan 30 08:24:57.725: INFO: Pod "azuredisk-volume-tester-c95mw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117958375s Jan 30 08:24:59.725: INFO: Pod "azuredisk-volume-tester-c95mw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117146839s Jan 30 08:25:01.727: INFO: Pod "azuredisk-volume-tester-c95mw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.119922578s Jan 30 08:25:03.723: INFO: Pod "azuredisk-volume-tester-c95mw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.115431294s ... skipping 10 lines ... Jan 30 08:25:25.723: INFO: Pod "azuredisk-volume-tester-c95mw": Phase="Pending", Reason="", readiness=false. Elapsed: 32.115684555s Jan 30 08:25:27.723: INFO: Pod "azuredisk-volume-tester-c95mw": Phase="Pending", Reason="", readiness=false. Elapsed: 34.115274128s Jan 30 08:25:29.724: INFO: Pod "azuredisk-volume-tester-c95mw": Phase="Pending", Reason="", readiness=false. Elapsed: 36.116226436s Jan 30 08:25:31.723: INFO: Pod "azuredisk-volume-tester-c95mw": Phase="Pending", Reason="", readiness=false. Elapsed: 38.115317764s Jan 30 08:25:33.726: INFO: Pod "azuredisk-volume-tester-c95mw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.118307703s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 08:25:33.726[0m Jan 30 08:25:33.726: INFO: Pod "azuredisk-volume-tester-c95mw" satisfied condition "Succeeded or Failed" Jan 30 08:25:33.726: INFO: deleting Pod "azuredisk-3410"/"azuredisk-volume-tester-c95mw" Jan 30 08:25:33.826: INFO: Pod azuredisk-volume-tester-c95mw has the following logs: 100+0 records in 100+0 records out 104857600 bytes (100.0MB) copied, 0.072433 seconds, 1.3GB/s hello world ... skipping 52 lines ... Jan 30 08:26:56.544: INFO: >>> kubeConfig: /root/tmp2890212374/kubeconfig/kubeconfig.westus2.json [1mSTEP:[0m setting up the StorageClass [38;5;243m01/30/23 08:26:56.546[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/30/23 08:26:56.546[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:26:56.605[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:26:56.605[0m [1mSTEP:[0m deploying the pod [38;5;243m01/30/23 08:26:56.669[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 08:26:56.73[0m Jan 30 08:26:56.730: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-xpc5q" in namespace "azuredisk-8582" to be "Succeeded or Failed" Jan 30 08:26:56.788: INFO: Pod "azuredisk-volume-tester-xpc5q": Phase="Pending", Reason="", readiness=false. Elapsed: 57.904082ms Jan 30 08:26:58.847: INFO: Pod "azuredisk-volume-tester-xpc5q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116483849s Jan 30 08:27:00.847: INFO: Pod "azuredisk-volume-tester-xpc5q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116073964s Jan 30 08:27:02.848: INFO: Pod "azuredisk-volume-tester-xpc5q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117840348s Jan 30 08:27:04.846: INFO: Pod "azuredisk-volume-tester-xpc5q": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116008024s Jan 30 08:27:06.851: INFO: Pod "azuredisk-volume-tester-xpc5q": Phase="Pending", Reason="", readiness=false. Elapsed: 10.120452209s ... skipping 2 lines ... Jan 30 08:27:12.848: INFO: Pod "azuredisk-volume-tester-xpc5q": Phase="Pending", Reason="", readiness=false. Elapsed: 16.117417751s Jan 30 08:27:14.846: INFO: Pod "azuredisk-volume-tester-xpc5q": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.115617295s Jan 30 08:27:16.848: INFO: Pod "azuredisk-volume-tester-xpc5q": Phase="Pending", Reason="", readiness=false. Elapsed: 20.117705352s Jan 30 08:27:18.848: INFO: Pod "azuredisk-volume-tester-xpc5q": Phase="Pending", Reason="", readiness=false. Elapsed: 22.117375248s Jan 30 08:27:20.848: INFO: Pod "azuredisk-volume-tester-xpc5q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.117458269s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 08:27:20.848[0m Jan 30 08:27:20.848: INFO: Pod "azuredisk-volume-tester-xpc5q" satisfied condition "Succeeded or Failed" [1mSTEP:[0m Checking Prow test resource group [38;5;243m01/30/23 08:27:20.848[0m 2023/01/30 08:27:20 Running in Prow, converting AZURE_CREDENTIALS to AZURE_CREDENTIAL_FILE 2023/01/30 08:27:20 Reading credentials file /etc/azure-cred/credentials [1mSTEP:[0m Prow test resource group: kubetest-z5czzjqr [38;5;243m01/30/23 08:27:20.849[0m [1mSTEP:[0m Creating external resource group: azuredisk-csi-driver-test-ea299146-a077-11ed-822b-967d0a096fd9 [38;5;243m01/30/23 08:27:20.849[0m [1mSTEP:[0m creating volume snapshot class with external rg azuredisk-csi-driver-test-ea299146-a077-11ed-822b-967d0a096fd9 [38;5;243m01/30/23 08:27:21.853[0m ... skipping 5 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/30/23 08:27:37.038[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/30/23 08:27:37.038[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:27:37.098[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:27:37.099[0m [1mSTEP:[0m setting up the pod [38;5;243m01/30/23 08:27:37.161[0m [1mSTEP:[0m deploying a pod with a volume restored from the snapshot [38;5;243m01/30/23 08:27:37.161[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 08:27:37.22[0m Jan 30 08:27:37.220: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-hp6zq" in namespace "azuredisk-8582" to be "Succeeded or Failed" Jan 30 08:27:37.279: INFO: Pod "azuredisk-volume-tester-hp6zq": Phase="Pending", Reason="", readiness=false. Elapsed: 59.222797ms Jan 30 08:27:39.339: INFO: Pod "azuredisk-volume-tester-hp6zq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118746893s Jan 30 08:27:41.338: INFO: Pod "azuredisk-volume-tester-hp6zq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118174812s Jan 30 08:27:43.338: INFO: Pod "azuredisk-volume-tester-hp6zq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118512214s Jan 30 08:27:45.340: INFO: Pod "azuredisk-volume-tester-hp6zq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11983033s Jan 30 08:27:47.351: INFO: Pod "azuredisk-volume-tester-hp6zq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.13098118s Jan 30 08:27:49.339: INFO: Pod "azuredisk-volume-tester-hp6zq": Phase="Pending", Reason="", readiness=false. Elapsed: 12.119127114s Jan 30 08:27:51.338: INFO: Pod "azuredisk-volume-tester-hp6zq": Phase="Pending", Reason="", readiness=false. Elapsed: 14.11774748s Jan 30 08:27:53.339: INFO: Pod "azuredisk-volume-tester-hp6zq": Phase="Pending", Reason="", readiness=false. Elapsed: 16.118648157s Jan 30 08:27:55.339: INFO: Pod "azuredisk-volume-tester-hp6zq": Phase="Pending", Reason="", readiness=false. Elapsed: 18.118706689s Jan 30 08:27:57.338: INFO: Pod "azuredisk-volume-tester-hp6zq": Phase="Pending", Reason="", readiness=false. Elapsed: 20.117679101s Jan 30 08:27:59.338: INFO: Pod "azuredisk-volume-tester-hp6zq": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.117726243s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 08:27:59.338[0m Jan 30 08:27:59.338: INFO: Pod "azuredisk-volume-tester-hp6zq" satisfied condition "Succeeded or Failed" Jan 30 08:27:59.338: INFO: deleting Pod "azuredisk-8582"/"azuredisk-volume-tester-hp6zq" Jan 30 08:27:59.435: INFO: Pod azuredisk-volume-tester-hp6zq has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-hp6zq in namespace azuredisk-8582 [38;5;243m01/30/23 08:27:59.436[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/30/23 08:27:59.556[0m [1mSTEP:[0m checking the PV [38;5;243m01/30/23 08:27:59.613[0m ... skipping 54 lines ... Jan 30 08:26:56.544: INFO: >>> kubeConfig: /root/tmp2890212374/kubeconfig/kubeconfig.westus2.json [1mSTEP:[0m setting up the StorageClass [38;5;243m01/30/23 08:26:56.546[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/30/23 08:26:56.546[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:26:56.605[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:26:56.605[0m [1mSTEP:[0m deploying the pod [38;5;243m01/30/23 08:26:56.669[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 08:26:56.73[0m Jan 30 08:26:56.730: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-xpc5q" in namespace "azuredisk-8582" to be "Succeeded or Failed" Jan 30 08:26:56.788: INFO: Pod "azuredisk-volume-tester-xpc5q": Phase="Pending", Reason="", readiness=false. Elapsed: 57.904082ms Jan 30 08:26:58.847: INFO: Pod "azuredisk-volume-tester-xpc5q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116483849s Jan 30 08:27:00.847: INFO: Pod "azuredisk-volume-tester-xpc5q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116073964s Jan 30 08:27:02.848: INFO: Pod "azuredisk-volume-tester-xpc5q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117840348s Jan 30 08:27:04.846: INFO: Pod "azuredisk-volume-tester-xpc5q": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116008024s Jan 30 08:27:06.851: INFO: Pod "azuredisk-volume-tester-xpc5q": Phase="Pending", Reason="", readiness=false. Elapsed: 10.120452209s ... skipping 2 lines ... Jan 30 08:27:12.848: INFO: Pod "azuredisk-volume-tester-xpc5q": Phase="Pending", Reason="", readiness=false. Elapsed: 16.117417751s Jan 30 08:27:14.846: INFO: Pod "azuredisk-volume-tester-xpc5q": Phase="Pending", Reason="", readiness=false. Elapsed: 18.115617295s Jan 30 08:27:16.848: INFO: Pod "azuredisk-volume-tester-xpc5q": Phase="Pending", Reason="", readiness=false. Elapsed: 20.117705352s Jan 30 08:27:18.848: INFO: Pod "azuredisk-volume-tester-xpc5q": Phase="Pending", Reason="", readiness=false. Elapsed: 22.117375248s Jan 30 08:27:20.848: INFO: Pod "azuredisk-volume-tester-xpc5q": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.117458269s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 08:27:20.848[0m Jan 30 08:27:20.848: INFO: Pod "azuredisk-volume-tester-xpc5q" satisfied condition "Succeeded or Failed" [1mSTEP:[0m Checking Prow test resource group [38;5;243m01/30/23 08:27:20.848[0m [1mSTEP:[0m Prow test resource group: kubetest-z5czzjqr [38;5;243m01/30/23 08:27:20.849[0m [1mSTEP:[0m Creating external resource group: azuredisk-csi-driver-test-ea299146-a077-11ed-822b-967d0a096fd9 [38;5;243m01/30/23 08:27:20.849[0m [1mSTEP:[0m creating volume snapshot class with external rg azuredisk-csi-driver-test-ea299146-a077-11ed-822b-967d0a096fd9 [38;5;243m01/30/23 08:27:21.853[0m [1mSTEP:[0m setting up the VolumeSnapshotClass [38;5;243m01/30/23 08:27:21.854[0m [1mSTEP:[0m creating a VolumeSnapshotClass [38;5;243m01/30/23 08:27:21.854[0m ... skipping 3 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/30/23 08:27:37.038[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/30/23 08:27:37.038[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:27:37.098[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:27:37.099[0m [1mSTEP:[0m setting up the pod [38;5;243m01/30/23 08:27:37.161[0m [1mSTEP:[0m deploying a pod with a volume restored from the snapshot [38;5;243m01/30/23 08:27:37.161[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 08:27:37.22[0m Jan 30 08:27:37.220: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-hp6zq" in namespace "azuredisk-8582" to be "Succeeded or Failed" Jan 30 08:27:37.279: INFO: Pod "azuredisk-volume-tester-hp6zq": Phase="Pending", Reason="", readiness=false. Elapsed: 59.222797ms Jan 30 08:27:39.339: INFO: Pod "azuredisk-volume-tester-hp6zq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118746893s Jan 30 08:27:41.338: INFO: Pod "azuredisk-volume-tester-hp6zq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118174812s Jan 30 08:27:43.338: INFO: Pod "azuredisk-volume-tester-hp6zq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118512214s Jan 30 08:27:45.340: INFO: Pod "azuredisk-volume-tester-hp6zq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11983033s Jan 30 08:27:47.351: INFO: Pod "azuredisk-volume-tester-hp6zq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.13098118s Jan 30 08:27:49.339: INFO: Pod "azuredisk-volume-tester-hp6zq": Phase="Pending", Reason="", readiness=false. Elapsed: 12.119127114s Jan 30 08:27:51.338: INFO: Pod "azuredisk-volume-tester-hp6zq": Phase="Pending", Reason="", readiness=false. Elapsed: 14.11774748s Jan 30 08:27:53.339: INFO: Pod "azuredisk-volume-tester-hp6zq": Phase="Pending", Reason="", readiness=false. Elapsed: 16.118648157s Jan 30 08:27:55.339: INFO: Pod "azuredisk-volume-tester-hp6zq": Phase="Pending", Reason="", readiness=false. Elapsed: 18.118706689s Jan 30 08:27:57.338: INFO: Pod "azuredisk-volume-tester-hp6zq": Phase="Pending", Reason="", readiness=false. Elapsed: 20.117679101s Jan 30 08:27:59.338: INFO: Pod "azuredisk-volume-tester-hp6zq": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.117726243s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 08:27:59.338[0m Jan 30 08:27:59.338: INFO: Pod "azuredisk-volume-tester-hp6zq" satisfied condition "Succeeded or Failed" Jan 30 08:27:59.338: INFO: deleting Pod "azuredisk-8582"/"azuredisk-volume-tester-hp6zq" Jan 30 08:27:59.435: INFO: Pod azuredisk-volume-tester-hp6zq has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-hp6zq in namespace azuredisk-8582 [38;5;243m01/30/23 08:27:59.436[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/30/23 08:27:59.556[0m [1mSTEP:[0m checking the PV [38;5;243m01/30/23 08:27:59.613[0m ... skipping 53 lines ... Jan 30 08:30:29.376: INFO: >>> kubeConfig: /root/tmp2890212374/kubeconfig/kubeconfig.westus2.json [1mSTEP:[0m setting up the StorageClass [38;5;243m01/30/23 08:30:29.377[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/30/23 08:30:29.377[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:30:29.437[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:30:29.437[0m [1mSTEP:[0m deploying the pod [38;5;243m01/30/23 08:30:29.507[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 08:30:29.566[0m Jan 30 08:30:29.566: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-5kdft" in namespace "azuredisk-7726" to be "Succeeded or Failed" Jan 30 08:30:29.625: INFO: Pod "azuredisk-volume-tester-5kdft": Phase="Pending", Reason="", readiness=false. Elapsed: 58.348891ms Jan 30 08:30:31.684: INFO: Pod "azuredisk-volume-tester-5kdft": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117425518s Jan 30 08:30:33.683: INFO: Pod "azuredisk-volume-tester-5kdft": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11685269s Jan 30 08:30:35.683: INFO: Pod "azuredisk-volume-tester-5kdft": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117258144s Jan 30 08:30:37.686: INFO: Pod "azuredisk-volume-tester-5kdft": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11997358s Jan 30 08:30:39.684: INFO: Pod "azuredisk-volume-tester-5kdft": Phase="Pending", Reason="", readiness=false. Elapsed: 10.118000645s ... skipping 2 lines ... Jan 30 08:30:45.685: INFO: Pod "azuredisk-volume-tester-5kdft": Phase="Pending", Reason="", readiness=false. Elapsed: 16.1183204s Jan 30 08:30:47.685: INFO: Pod "azuredisk-volume-tester-5kdft": Phase="Pending", Reason="", readiness=false. Elapsed: 18.118352585s Jan 30 08:30:49.684: INFO: Pod "azuredisk-volume-tester-5kdft": Phase="Pending", Reason="", readiness=false. Elapsed: 20.118082676s Jan 30 08:30:51.684: INFO: Pod "azuredisk-volume-tester-5kdft": Phase="Pending", Reason="", readiness=false. Elapsed: 22.118144299s Jan 30 08:30:53.686: INFO: Pod "azuredisk-volume-tester-5kdft": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.120179766s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 08:30:53.686[0m Jan 30 08:30:53.687: INFO: Pod "azuredisk-volume-tester-5kdft" satisfied condition "Succeeded or Failed" [1mSTEP:[0m Checking Prow test resource group [38;5;243m01/30/23 08:30:53.687[0m 2023/01/30 08:30:53 Running in Prow, converting AZURE_CREDENTIALS to AZURE_CREDENTIAL_FILE 2023/01/30 08:30:53 Reading credentials file /etc/azure-cred/credentials [1mSTEP:[0m Prow test resource group: kubetest-z5czzjqr [38;5;243m01/30/23 08:30:53.687[0m [1mSTEP:[0m Creating external resource group: azuredisk-csi-driver-test-690618cf-a078-11ed-822b-967d0a096fd9 [38;5;243m01/30/23 08:30:53.688[0m [1mSTEP:[0m creating volume snapshot class with external rg azuredisk-csi-driver-test-690618cf-a078-11ed-822b-967d0a096fd9 [38;5;243m01/30/23 08:30:54.676[0m ... skipping 12 lines ... [1mSTEP:[0m creating a StorageClass [38;5;243m01/30/23 08:31:12.051[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:31:12.109[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:31:12.109[0m [1mSTEP:[0m setting up the pod [38;5;243m01/30/23 08:31:12.174[0m [1mSTEP:[0m Set pod anti-affinity to make sure two pods are scheduled on different nodes [38;5;243m01/30/23 08:31:12.174[0m [1mSTEP:[0m deploying a pod with a volume restored from the snapshot [38;5;243m01/30/23 08:31:12.174[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 08:31:12.233[0m Jan 30 08:31:12.234: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-7gtvr" in namespace "azuredisk-7726" to be "Succeeded or Failed" Jan 30 08:31:12.291: INFO: Pod "azuredisk-volume-tester-7gtvr": Phase="Pending", Reason="", readiness=false. Elapsed: 57.163613ms Jan 30 08:31:14.350: INFO: Pod "azuredisk-volume-tester-7gtvr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116440256s Jan 30 08:31:16.352: INFO: Pod "azuredisk-volume-tester-7gtvr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117861964s Jan 30 08:31:18.353: INFO: Pod "azuredisk-volume-tester-7gtvr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119502358s Jan 30 08:31:20.351: INFO: Pod "azuredisk-volume-tester-7gtvr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11773266s Jan 30 08:31:22.351: INFO: Pod "azuredisk-volume-tester-7gtvr": Phase="Pending", Reason="", readiness=false. Elapsed: 10.117301336s Jan 30 08:31:24.351: INFO: Pod "azuredisk-volume-tester-7gtvr": Phase="Pending", Reason="", readiness=false. Elapsed: 12.117286286s Jan 30 08:31:26.351: INFO: Pod "azuredisk-volume-tester-7gtvr": Phase="Pending", Reason="", readiness=false. Elapsed: 14.117739772s Jan 30 08:31:28.351: INFO: Pod "azuredisk-volume-tester-7gtvr": Phase="Pending", Reason="", readiness=false. Elapsed: 16.117782048s Jan 30 08:31:30.352: INFO: Pod "azuredisk-volume-tester-7gtvr": Phase="Pending", Reason="", readiness=false. Elapsed: 18.118720801s Jan 30 08:31:32.350: INFO: Pod "azuredisk-volume-tester-7gtvr": Phase="Pending", Reason="", readiness=false. Elapsed: 20.116160966s Jan 30 08:31:34.350: INFO: Pod "azuredisk-volume-tester-7gtvr": Phase="Pending", Reason="", readiness=false. Elapsed: 22.11594318s Jan 30 08:31:36.349: INFO: Pod "azuredisk-volume-tester-7gtvr": Phase="Failed", Reason="", readiness=false. 
Elapsed: 24.115098165s Jan 30 08:31:36.349: INFO: Unexpected error: <*fmt.wrapError | 0xc000ef7140>: { msg: "error while waiting for pod azuredisk-7726/azuredisk-volume-tester-7gtvr to be Succeeded or Failed: pod \"azuredisk-volume-tester-7gtvr\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 08:31:15 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 08:31:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 08:31:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 08:31:15 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.4 PodIP:10.248.0.11 PodIPs:[{IP:10.248.0.11}] StartTime:2023-01-30 08:31:15 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-30 08:31:33 +0000 UTC,FinishedAt:2023-01-30 08:31:33 +0000 UTC,ContainerID:containerd://4723402653db86f223f45666bca2eb5dc64bbece1a21783a9d6c28c54e132c44,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 ContainerID:containerd://4723402653db86f223f45666bca2eb5dc64bbece1a21783a9d6c28c54e132c44 Started:0xc0005b486f}] QOSClass:BestEffort EphemeralContainerStatuses:[]}", err: <*errors.errorString | 0xc0004fee80>{ s: "pod \"azuredisk-volume-tester-7gtvr\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 08:31:15 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 08:31:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 08:31:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 08:31:15 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.4 PodIP:10.248.0.11 PodIPs:[{IP:10.248.0.11}] StartTime:2023-01-30 08:31:15 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-30 08:31:33 +0000 UTC,FinishedAt:2023-01-30 08:31:33 +0000 UTC,ContainerID:containerd://4723402653db86f223f45666bca2eb5dc64bbece1a21783a9d6c28c54e132c44,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
ContainerID:containerd://4723402653db86f223f45666bca2eb5dc64bbece1a21783a9d6c28c54e132c44 Started:0xc0005b486f}] QOSClass:BestEffort EphemeralContainerStatuses:[]}", }, } Jan 30 08:31:36.349: FAIL: error while waiting for pod azuredisk-7726/azuredisk-volume-tester-7gtvr to be Succeeded or Failed: pod "azuredisk-volume-tester-7gtvr" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 08:31:15 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 08:31:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 08:31:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 08:31:15 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.4 PodIP:10.248.0.11 PodIPs:[{IP:10.248.0.11}] StartTime:2023-01-30 08:31:15 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-30 08:31:33 +0000 UTC,FinishedAt:2023-01-30 08:31:33 +0000 UTC,ContainerID:containerd://4723402653db86f223f45666bca2eb5dc64bbece1a21783a9d6c28c54e132c44,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 ContainerID:containerd://4723402653db86f223f45666bca2eb5dc64bbece1a21783a9d6c28c54e132c44 Started:0xc0005b486f}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Full Stack Trace sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*TestPod).WaitForSuccess(0x2253857?) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:823 +0x5d sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*DynamicallyProvisionedVolumeSnapshotTest).Run(0xc0009fbd78, {0x270dda0, 0xc000aeeb60}, {0x26f8fa0, 0xc0001b3c20}, 0xc000cfd8c0?) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/dynamically_provisioned_volume_snapshot_tester.go:142 +0x1358 ... skipping 38 lines ... Jan 30 08:33:24.435: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-7726 to be removed Jan 30 08:33:24.493: INFO: Claim "azuredisk-7726" in namespace "pvc-8s4gd" doesn't exist in the system Jan 30 08:33:24.493: INFO: deleting StorageClass azuredisk-7726-disk.csi.azure.com-dynamic-sc-fbzbl [1mSTEP:[0m dump namespace information after failure [38;5;243m01/30/23 08:33:24.554[0m [1mSTEP:[0m Destroying namespace "azuredisk-7726" for this suite. 
[38;5;243m01/30/23 08:33:24.554[0m [38;5;243m------------------------------[0m [38;5;9m• [FAILED] [176.167 seconds][0m Dynamic Provisioning [38;5;243m/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:41[0m [multi-az] [38;5;243m/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:48[0m [38;5;9m[1m[It] should create a pod, write to its pv, take a volume snapshot, overwrite data in original pv, create another pod from the snapshot, and read unaltered original data from original pv[disk.csi.azure.com][0m [38;5;243m/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:747[0m ... skipping 7 lines ... Jan 30 08:30:29.376: INFO: >>> kubeConfig: /root/tmp2890212374/kubeconfig/kubeconfig.westus2.json [1mSTEP:[0m setting up the StorageClass [38;5;243m01/30/23 08:30:29.377[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/30/23 08:30:29.377[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:30:29.437[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:30:29.437[0m [1mSTEP:[0m deploying the pod [38;5;243m01/30/23 08:30:29.507[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 08:30:29.566[0m Jan 30 08:30:29.566: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-5kdft" in namespace "azuredisk-7726" to be "Succeeded or Failed" Jan 30 08:30:29.625: INFO: Pod "azuredisk-volume-tester-5kdft": Phase="Pending", Reason="", readiness=false. Elapsed: 58.348891ms Jan 30 08:30:31.684: INFO: Pod "azuredisk-volume-tester-5kdft": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117425518s Jan 30 08:30:33.683: INFO: Pod "azuredisk-volume-tester-5kdft": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11685269s Jan 30 08:30:35.683: INFO: Pod "azuredisk-volume-tester-5kdft": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117258144s Jan 30 08:30:37.686: INFO: Pod "azuredisk-volume-tester-5kdft": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11997358s Jan 30 08:30:39.684: INFO: Pod "azuredisk-volume-tester-5kdft": Phase="Pending", Reason="", readiness=false. Elapsed: 10.118000645s ... skipping 2 lines ... Jan 30 08:30:45.685: INFO: Pod "azuredisk-volume-tester-5kdft": Phase="Pending", Reason="", readiness=false. Elapsed: 16.1183204s Jan 30 08:30:47.685: INFO: Pod "azuredisk-volume-tester-5kdft": Phase="Pending", Reason="", readiness=false. Elapsed: 18.118352585s Jan 30 08:30:49.684: INFO: Pod "azuredisk-volume-tester-5kdft": Phase="Pending", Reason="", readiness=false. Elapsed: 20.118082676s Jan 30 08:30:51.684: INFO: Pod "azuredisk-volume-tester-5kdft": Phase="Pending", Reason="", readiness=false. Elapsed: 22.118144299s Jan 30 08:30:53.686: INFO: Pod "azuredisk-volume-tester-5kdft": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.120179766s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 08:30:53.686[0m Jan 30 08:30:53.687: INFO: Pod "azuredisk-volume-tester-5kdft" satisfied condition "Succeeded or Failed" [1mSTEP:[0m Checking Prow test resource group [38;5;243m01/30/23 08:30:53.687[0m [1mSTEP:[0m Prow test resource group: kubetest-z5czzjqr [38;5;243m01/30/23 08:30:53.687[0m [1mSTEP:[0m Creating external resource group: azuredisk-csi-driver-test-690618cf-a078-11ed-822b-967d0a096fd9 [38;5;243m01/30/23 08:30:53.688[0m [1mSTEP:[0m creating volume snapshot class with external rg azuredisk-csi-driver-test-690618cf-a078-11ed-822b-967d0a096fd9 [38;5;243m01/30/23 08:30:54.676[0m [1mSTEP:[0m setting up the VolumeSnapshotClass [38;5;243m01/30/23 08:30:54.676[0m [1mSTEP:[0m creating a VolumeSnapshotClass [38;5;243m01/30/23 08:30:54.676[0m ... skipping 10 lines ... [1mSTEP:[0m creating a StorageClass [38;5;243m01/30/23 08:31:12.051[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:31:12.109[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:31:12.109[0m [1mSTEP:[0m setting up the pod [38;5;243m01/30/23 08:31:12.174[0m [1mSTEP:[0m Set pod anti-affinity to make sure two pods are scheduled on different nodes [38;5;243m01/30/23 08:31:12.174[0m [1mSTEP:[0m deploying a pod with a volume restored from the snapshot [38;5;243m01/30/23 08:31:12.174[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 08:31:12.233[0m Jan 30 08:31:12.234: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-7gtvr" in namespace "azuredisk-7726" to be "Succeeded or Failed" Jan 30 08:31:12.291: INFO: Pod "azuredisk-volume-tester-7gtvr": Phase="Pending", Reason="", readiness=false. Elapsed: 57.163613ms Jan 30 08:31:14.350: INFO: Pod "azuredisk-volume-tester-7gtvr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116440256s Jan 30 08:31:16.352: INFO: Pod "azuredisk-volume-tester-7gtvr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117861964s Jan 30 08:31:18.353: INFO: Pod "azuredisk-volume-tester-7gtvr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119502358s Jan 30 08:31:20.351: INFO: Pod "azuredisk-volume-tester-7gtvr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11773266s Jan 30 08:31:22.351: INFO: Pod "azuredisk-volume-tester-7gtvr": Phase="Pending", Reason="", readiness=false. Elapsed: 10.117301336s Jan 30 08:31:24.351: INFO: Pod "azuredisk-volume-tester-7gtvr": Phase="Pending", Reason="", readiness=false. Elapsed: 12.117286286s Jan 30 08:31:26.351: INFO: Pod "azuredisk-volume-tester-7gtvr": Phase="Pending", Reason="", readiness=false. Elapsed: 14.117739772s Jan 30 08:31:28.351: INFO: Pod "azuredisk-volume-tester-7gtvr": Phase="Pending", Reason="", readiness=false. Elapsed: 16.117782048s Jan 30 08:31:30.352: INFO: Pod "azuredisk-volume-tester-7gtvr": Phase="Pending", Reason="", readiness=false. Elapsed: 18.118720801s Jan 30 08:31:32.350: INFO: Pod "azuredisk-volume-tester-7gtvr": Phase="Pending", Reason="", readiness=false. Elapsed: 20.116160966s Jan 30 08:31:34.350: INFO: Pod "azuredisk-volume-tester-7gtvr": Phase="Pending", Reason="", readiness=false. Elapsed: 22.11594318s Jan 30 08:31:36.349: INFO: Pod "azuredisk-volume-tester-7gtvr": Phase="Failed", Reason="", readiness=false. 
Elapsed: 24.115098165s Jan 30 08:31:36.349: INFO: Unexpected error: <*fmt.wrapError | 0xc000ef7140>: { msg: "error while waiting for pod azuredisk-7726/azuredisk-volume-tester-7gtvr to be Succeeded or Failed: pod \"azuredisk-volume-tester-7gtvr\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 08:31:15 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 08:31:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 08:31:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 08:31:15 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.4 PodIP:10.248.0.11 PodIPs:[{IP:10.248.0.11}] StartTime:2023-01-30 08:31:15 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-30 08:31:33 +0000 UTC,FinishedAt:2023-01-30 08:31:33 +0000 UTC,ContainerID:containerd://4723402653db86f223f45666bca2eb5dc64bbece1a21783a9d6c28c54e132c44,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 ContainerID:containerd://4723402653db86f223f45666bca2eb5dc64bbece1a21783a9d6c28c54e132c44 Started:0xc0005b486f}] QOSClass:BestEffort EphemeralContainerStatuses:[]}", err: <*errors.errorString | 0xc0004fee80>{ s: "pod \"azuredisk-volume-tester-7gtvr\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 08:31:15 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 08:31:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 08:31:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 08:31:15 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.4 PodIP:10.248.0.11 PodIPs:[{IP:10.248.0.11}] StartTime:2023-01-30 08:31:15 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-30 08:31:33 +0000 UTC,FinishedAt:2023-01-30 08:31:33 +0000 UTC,ContainerID:containerd://4723402653db86f223f45666bca2eb5dc64bbece1a21783a9d6c28c54e132c44,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
ContainerID:containerd://4723402653db86f223f45666bca2eb5dc64bbece1a21783a9d6c28c54e132c44 Started:0xc0005b486f}] QOSClass:BestEffort EphemeralContainerStatuses:[]}", }, } Jan 30 08:31:36.349: FAIL: error while waiting for pod azuredisk-7726/azuredisk-volume-tester-7gtvr to be Succeeded or Failed: pod "azuredisk-volume-tester-7gtvr" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 08:31:15 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 08:31:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 08:31:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 08:31:15 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.4 PodIP:10.248.0.11 PodIPs:[{IP:10.248.0.11}] StartTime:2023-01-30 08:31:15 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-30 08:31:33 +0000 UTC,FinishedAt:2023-01-30 08:31:33 +0000 UTC,ContainerID:containerd://4723402653db86f223f45666bca2eb5dc64bbece1a21783a9d6c28c54e132c44,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 ContainerID:containerd://4723402653db86f223f45666bca2eb5dc64bbece1a21783a9d6c28c54e132c44 Started:0xc0005b486f}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Full Stack Trace sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*TestPod).WaitForSuccess(0x2253857?) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:823 +0x5d sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*DynamicallyProvisionedVolumeSnapshotTest).Run(0xc0009fbd78, {0x270dda0, 0xc000aeeb60}, {0x26f8fa0, 0xc0001b3c20}, 0xc000cfd8c0?) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/dynamically_provisioned_volume_snapshot_tester.go:142 +0x1358 ... skipping 39 lines ... Jan 30 08:33:24.493: INFO: Claim "azuredisk-7726" in namespace "pvc-8s4gd" doesn't exist in the system Jan 30 08:33:24.493: INFO: deleting StorageClass azuredisk-7726-disk.csi.azure.com-dynamic-sc-fbzbl [1mSTEP:[0m dump namespace information after failure [38;5;243m01/30/23 08:33:24.554[0m [1mSTEP:[0m Destroying namespace "azuredisk-7726" for this suite. 
[38;5;243m01/30/23 08:33:24.554[0m [38;5;243m<< End Captured GinkgoWriter Output[0m [38;5;9mJan 30 08:31:36.349: error while waiting for pod azuredisk-7726/azuredisk-volume-tester-7gtvr to be Succeeded or Failed: pod "azuredisk-volume-tester-7gtvr" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 08:31:15 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 08:31:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 08:31:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 08:31:15 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.4 PodIP:10.248.0.11 PodIPs:[{IP:10.248.0.11}] StartTime:2023-01-30 08:31:15 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-30 08:31:33 +0000 UTC,FinishedAt:2023-01-30 08:31:33 +0000 UTC,ContainerID:containerd://4723402653db86f223f45666bca2eb5dc64bbece1a21783a9d6c28c54e132c44,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 ContainerID:containerd://4723402653db86f223f45666bca2eb5dc64bbece1a21783a9d6c28c54e132c44 Started:0xc0005b486f}] QOSClass:BestEffort EphemeralContainerStatuses:[]}[0m [38;5;9mIn [1m[It][0m[38;5;9m at: [1m/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:823[0m [1mThere were additional failures detected after the initial failure:[0m [38;5;13m[PANICKED][0m [38;5;13mTest Panicked[0m [38;5;13mIn [1m[DeferCleanup (Each)][0m[38;5;13m at: [1m/usr/local/go/src/runtime/panic.go:260[0m [38;5;13mruntime error: invalid memory address or nil pointer dereference[0m [38;5;13mFull Stack Trace[0m k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:274 +0x5c k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0002763c0) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:271 +0x179 ... skipping 25 lines ... [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:33:25.708[0m [1mSTEP:[0m setting up the StorageClass [38;5;243m01/30/23 08:33:25.768[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/30/23 08:33:25.768[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:33:25.827[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:33:25.828[0m [1mSTEP:[0m deploying the pod [38;5;243m01/30/23 08:33:25.887[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 08:33:25.949[0m Jan 30 08:33:25.949: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-rz7zk" in namespace "azuredisk-3086" to be "Succeeded or Failed" Jan 30 08:33:26.008: INFO: Pod "azuredisk-volume-tester-rz7zk": Phase="Pending", Reason="", readiness=false. 
Elapsed: 59.012676ms Jan 30 08:33:28.067: INFO: Pod "azuredisk-volume-tester-rz7zk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11790211s Jan 30 08:33:30.072: INFO: Pod "azuredisk-volume-tester-rz7zk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122538385s Jan 30 08:33:32.070: INFO: Pod "azuredisk-volume-tester-rz7zk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121333041s Jan 30 08:33:34.067: INFO: Pod "azuredisk-volume-tester-rz7zk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11846654s Jan 30 08:33:36.072: INFO: Pod "azuredisk-volume-tester-rz7zk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.123110663s ... skipping 10 lines ... Jan 30 08:33:58.069: INFO: Pod "azuredisk-volume-tester-rz7zk": Phase="Pending", Reason="", readiness=false. Elapsed: 32.119597789s Jan 30 08:34:00.068: INFO: Pod "azuredisk-volume-tester-rz7zk": Phase="Pending", Reason="", readiness=false. Elapsed: 34.11934355s Jan 30 08:34:02.068: INFO: Pod "azuredisk-volume-tester-rz7zk": Phase="Pending", Reason="", readiness=false. Elapsed: 36.119035065s Jan 30 08:34:04.069: INFO: Pod "azuredisk-volume-tester-rz7zk": Phase="Pending", Reason="", readiness=false. Elapsed: 38.120193653s Jan 30 08:34:06.070: INFO: Pod "azuredisk-volume-tester-rz7zk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.120906601s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 08:34:06.07[0m Jan 30 08:34:06.070: INFO: Pod "azuredisk-volume-tester-rz7zk" satisfied condition "Succeeded or Failed" Jan 30 08:34:06.070: INFO: deleting Pod "azuredisk-3086"/"azuredisk-volume-tester-rz7zk" Jan 30 08:34:06.132: INFO: Pod azuredisk-volume-tester-rz7zk has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-rz7zk in namespace azuredisk-3086 [38;5;243m01/30/23 08:34:06.132[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/30/23 08:34:06.265[0m [1mSTEP:[0m checking the PV [38;5;243m01/30/23 08:34:06.323[0m ... skipping 70 lines ... [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:33:25.708[0m [1mSTEP:[0m setting up the StorageClass [38;5;243m01/30/23 08:33:25.768[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/30/23 08:33:25.768[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:33:25.827[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:33:25.828[0m [1mSTEP:[0m deploying the pod [38;5;243m01/30/23 08:33:25.887[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 08:33:25.949[0m Jan 30 08:33:25.949: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-rz7zk" in namespace "azuredisk-3086" to be "Succeeded or Failed" Jan 30 08:33:26.008: INFO: Pod "azuredisk-volume-tester-rz7zk": Phase="Pending", Reason="", readiness=false. Elapsed: 59.012676ms Jan 30 08:33:28.067: INFO: Pod "azuredisk-volume-tester-rz7zk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11790211s Jan 30 08:33:30.072: INFO: Pod "azuredisk-volume-tester-rz7zk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122538385s Jan 30 08:33:32.070: INFO: Pod "azuredisk-volume-tester-rz7zk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121333041s Jan 30 08:33:34.067: INFO: Pod "azuredisk-volume-tester-rz7zk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11846654s Jan 30 08:33:36.072: INFO: Pod "azuredisk-volume-tester-rz7zk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.123110663s ... skipping 10 lines ... 
Jan 30 08:33:58.069: INFO: Pod "azuredisk-volume-tester-rz7zk": Phase="Pending", Reason="", readiness=false. Elapsed: 32.119597789s Jan 30 08:34:00.068: INFO: Pod "azuredisk-volume-tester-rz7zk": Phase="Pending", Reason="", readiness=false. Elapsed: 34.11934355s Jan 30 08:34:02.068: INFO: Pod "azuredisk-volume-tester-rz7zk": Phase="Pending", Reason="", readiness=false. Elapsed: 36.119035065s Jan 30 08:34:04.069: INFO: Pod "azuredisk-volume-tester-rz7zk": Phase="Pending", Reason="", readiness=false. Elapsed: 38.120193653s Jan 30 08:34:06.070: INFO: Pod "azuredisk-volume-tester-rz7zk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.120906601s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 08:34:06.07[0m Jan 30 08:34:06.070: INFO: Pod "azuredisk-volume-tester-rz7zk" satisfied condition "Succeeded or Failed" Jan 30 08:34:06.070: INFO: deleting Pod "azuredisk-3086"/"azuredisk-volume-tester-rz7zk" Jan 30 08:34:06.132: INFO: Pod azuredisk-volume-tester-rz7zk has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-rz7zk in namespace azuredisk-3086 [38;5;243m01/30/23 08:34:06.132[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/30/23 08:34:06.265[0m [1mSTEP:[0m checking the PV [38;5;243m01/30/23 08:34:06.323[0m ... skipping 1036 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/30/23 08:49:32.599[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/30/23 08:49:32.599[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:49:32.669[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:49:32.669[0m [1mSTEP:[0m setting up the pod [38;5;243m01/30/23 08:49:32.74[0m [1mSTEP:[0m deploying the pod [38;5;243m01/30/23 08:49:32.74[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 08:49:32.811[0m Jan 30 08:49:32.811: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-pps8k" in namespace "azuredisk-1092" to be "Succeeded or Failed" Jan 30 08:49:32.879: INFO: Pod "azuredisk-volume-tester-pps8k": Phase="Pending", Reason="", readiness=false. Elapsed: 67.703528ms Jan 30 08:49:34.948: INFO: Pod "azuredisk-volume-tester-pps8k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137627302s Jan 30 08:49:36.949: INFO: Pod "azuredisk-volume-tester-pps8k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137802021s Jan 30 08:49:38.947: INFO: Pod "azuredisk-volume-tester-pps8k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.135649669s Jan 30 08:49:40.947: INFO: Pod "azuredisk-volume-tester-pps8k": Phase="Pending", Reason="", readiness=false. Elapsed: 8.135803655s Jan 30 08:49:42.950: INFO: Pod "azuredisk-volume-tester-pps8k": Phase="Pending", Reason="", readiness=false. Elapsed: 10.139399912s ... skipping 2 lines ... Jan 30 08:49:48.949: INFO: Pod "azuredisk-volume-tester-pps8k": Phase="Pending", Reason="", readiness=false. Elapsed: 16.138203051s Jan 30 08:49:50.948: INFO: Pod "azuredisk-volume-tester-pps8k": Phase="Pending", Reason="", readiness=false. Elapsed: 18.136975312s Jan 30 08:49:52.948: INFO: Pod "azuredisk-volume-tester-pps8k": Phase="Pending", Reason="", readiness=false. Elapsed: 20.137184832s Jan 30 08:49:54.947: INFO: Pod "azuredisk-volume-tester-pps8k": Phase="Pending", Reason="", readiness=false. Elapsed: 22.13593724s Jan 30 08:49:56.949: INFO: Pod "azuredisk-volume-tester-pps8k": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.138123263s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 08:49:56.949[0m Jan 30 08:49:56.949: INFO: Pod "azuredisk-volume-tester-pps8k" satisfied condition "Succeeded or Failed" Jan 30 08:49:56.949: INFO: deleting Pod "azuredisk-1092"/"azuredisk-volume-tester-pps8k" Jan 30 08:49:57.052: INFO: Pod azuredisk-volume-tester-pps8k has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-pps8k in namespace azuredisk-1092 [38;5;243m01/30/23 08:49:57.052[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/30/23 08:49:57.207[0m [1mSTEP:[0m checking the PV [38;5;243m01/30/23 08:49:57.275[0m ... skipping 33 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/30/23 08:49:32.599[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/30/23 08:49:32.599[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/30/23 08:49:32.669[0m [1mSTEP:[0m creating a PVC [38;5;243m01/30/23 08:49:32.669[0m [1mSTEP:[0m setting up the pod [38;5;243m01/30/23 08:49:32.74[0m [1mSTEP:[0m deploying the pod [38;5;243m01/30/23 08:49:32.74[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/30/23 08:49:32.811[0m Jan 30 08:49:32.811: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-pps8k" in namespace "azuredisk-1092" to be "Succeeded or Failed" Jan 30 08:49:32.879: INFO: Pod "azuredisk-volume-tester-pps8k": Phase="Pending", Reason="", readiness=false. Elapsed: 67.703528ms Jan 30 08:49:34.948: INFO: Pod "azuredisk-volume-tester-pps8k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137627302s Jan 30 08:49:36.949: INFO: Pod "azuredisk-volume-tester-pps8k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137802021s Jan 30 08:49:38.947: INFO: Pod "azuredisk-volume-tester-pps8k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.135649669s Jan 30 08:49:40.947: INFO: Pod "azuredisk-volume-tester-pps8k": Phase="Pending", Reason="", readiness=false. Elapsed: 8.135803655s Jan 30 08:49:42.950: INFO: Pod "azuredisk-volume-tester-pps8k": Phase="Pending", Reason="", readiness=false. Elapsed: 10.139399912s ... skipping 2 lines ... Jan 30 08:49:48.949: INFO: Pod "azuredisk-volume-tester-pps8k": Phase="Pending", Reason="", readiness=false. Elapsed: 16.138203051s Jan 30 08:49:50.948: INFO: Pod "azuredisk-volume-tester-pps8k": Phase="Pending", Reason="", readiness=false. Elapsed: 18.136975312s Jan 30 08:49:52.948: INFO: Pod "azuredisk-volume-tester-pps8k": Phase="Pending", Reason="", readiness=false. Elapsed: 20.137184832s Jan 30 08:49:54.947: INFO: Pod "azuredisk-volume-tester-pps8k": Phase="Pending", Reason="", readiness=false. Elapsed: 22.13593724s Jan 30 08:49:56.949: INFO: Pod "azuredisk-volume-tester-pps8k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.138123263s [1mSTEP:[0m Saw pod success [38;5;243m01/30/23 08:49:56.949[0m Jan 30 08:49:56.949: INFO: Pod "azuredisk-volume-tester-pps8k" satisfied condition "Succeeded or Failed" Jan 30 08:49:56.949: INFO: deleting Pod "azuredisk-1092"/"azuredisk-volume-tester-pps8k" Jan 30 08:49:57.052: INFO: Pod azuredisk-volume-tester-pps8k has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-pps8k in namespace azuredisk-1092 [38;5;243m01/30/23 08:49:57.052[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/30/23 08:49:57.207[0m [1mSTEP:[0m checking the PV [38;5;243m01/30/23 08:49:57.275[0m ... skipping 93 lines ... 
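The long runs of "Waiting up to 15m0s ... Phase="Pending" ... Elapsed: ..." lines above come from the suite polling each test pod roughly every two seconds until it reaches Succeeded or Failed. A minimal sketch of that style of check, assuming a standard client-go clientset; the function name is illustrative and this is not the suite's actual WaitForSuccess implementation:

package e2eutil

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodSuccess polls every two seconds (matching the cadence of the INFO lines
// above) until the pod succeeds, fails fast if it enters Failed, and gives up after
// the timeout. Hypothetical helper for illustration only.
func waitForPodSuccess(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %q failed with status: %+v", name, pod.Status)
		default:
			return false, nil // still Pending or Running, keep polling
		}
	})
}

In the failed snapshot/restore case reported earlier, it is the PodFailed branch that fires: the restored volume-tester container terminated with exit code 2, so the wait returns the wrapped status error that Ginkgo then prints in full.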
Platform: linux/amd64 Topology Key: topology.disk.csi.azure.com/zone Streaming logs below: I0130 07:58:05.217189 1 azuredisk.go:175] driver userAgent: disk.csi.azure.com/v1.26.2-3d368a1217946b8b3c3bd47a4f8fe2de87227460 e2e-test I0130 07:58:05.217837 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider I0130 07:58:05.247570 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0130 07:58:05.247602 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider I0130 07:58:05.247612 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0130 07:58:05.247651 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully I0130 07:58:05.248523 1 azure_auth.go:253] Using AzurePublicCloud environment I0130 07:58:05.248624 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token I0130 07:58:05.248713 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 ... skipping 25 lines ... I0130 07:58:05.249281 1 azure_blobclient.go:67] Azure BlobClient using API version: 2021-09-01 I0130 07:58:05.249312 1 azure_vmasclient.go:70] Azure AvailabilitySetsClient (read ops) using rate limit config: QPS=6, bucket=20 I0130 07:58:05.249320 1 azure_vmasclient.go:73] Azure AvailabilitySetsClient (write ops) using rate limit config: QPS=100, bucket=1000 I0130 07:58:05.249408 1 azure.go:1007] attach/detach disk operation rate limit QPS: 6.000000, Bucket: 10 I0130 07:58:05.249442 1 azuredisk.go:193] disable UseInstanceMetadata for controller I0130 07:58:05.249459 1 azuredisk.go:205] cloud: AzurePublicCloud, location: westus2, rg: kubetest-z5czzjqr, VMType: vmss, PrimaryScaleSetName: k8s-agentpool-25433637-vmss, PrimaryAvailabilitySetName: , DisableAvailabilitySetNodes: false I0130 07:58:05.254107 1 mount_linux.go:287] 'umount /tmp/kubelet-detect-safe-umount2480226467' failed with: exit status 32, output: umount: /tmp/kubelet-detect-safe-umount2480226467: must be superuser to unmount. I0130 07:58:05.254133 1 mount_linux.go:289] Detected umount with unsafe 'not mounted' behavior I0130 07:58:05.254284 1 driver.go:81] Enabling controller service capability: CREATE_DELETE_VOLUME I0130 07:58:05.254299 1 driver.go:81] Enabling controller service capability: PUBLISH_UNPUBLISH_VOLUME I0130 07:58:05.254656 1 driver.go:81] Enabling controller service capability: CREATE_DELETE_SNAPSHOT I0130 07:58:05.254670 1 driver.go:81] Enabling controller service capability: CLONE_VOLUME I0130 07:58:05.254678 1 driver.go:81] Enabling controller service capability: EXPAND_VOLUME ... skipping 68 lines ... I0130 07:58:15.032463 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 24989 I0130 07:58:15.121655 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 32358 I0130 07:58:15.125901 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-b753cdf4-37f7-4106-81b4-650ea55ef392. 
Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-b753cdf4-37f7-4106-81b4-650ea55ef392 to node k8s-agentpool-25433637-vmss000001 (vmState Succeeded). I0130 07:58:15.125933 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-b753cdf4-37f7-4106-81b4-650ea55ef392 to node k8s-agentpool-25433637-vmss000001 I0130 07:58:15.125990 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-b753cdf4-37f7-4106-81b4-650ea55ef392 lun 0 to node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-b753cdf4-37f7-4106-81b4-650ea55ef392:%!s(*provider.AttachDiskOptions=&{None pvc-b753cdf4-37f7-4106-81b4-650ea55ef392 false 0})] I0130 07:58:15.126023 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-b753cdf4-37f7-4106-81b4-650ea55ef392:%!s(*provider.AttachDiskOptions=&{None pvc-b753cdf4-37f7-4106-81b4-650ea55ef392 false 0})]) I0130 07:58:15.998145 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-b753cdf4-37f7-4106-81b4-650ea55ef392:%!s(*provider.AttachDiskOptions=&{None pvc-b753cdf4-37f7-4106-81b4-650ea55ef392 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 07:58:26.119526 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 07:58:26.119581 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 07:58:26.119643 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-b753cdf4-37f7-4106-81b4-650ea55ef392 attached to node k8s-agentpool-25433637-vmss000001. I0130 07:58:26.119666 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-b753cdf4-37f7-4106-81b4-650ea55ef392 to node k8s-agentpool-25433637-vmss000001 successfully I0130 07:58:26.119762 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=11.244507955 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-b753cdf4-37f7-4106-81b4-650ea55ef392" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 07:58:26.119789 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 18 lines ... 
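The controller startup messages above show the credential lookup order: the driver first tries the kube-system/azure-cloud-provider secret, logs that it was not found, and then falls back to the file named by AZURE_CREDENTIAL_FILE (here the default /etc/kubernetes/azure.json). A rough sketch of that fallback, assuming a client-go clientset; the function name and the "cloud-config" secret key are assumptions for illustration, not the driver's exact code:

package e2eutil

import (
	"context"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// loadCloudConfig sketches the order seen in the log: secret first, then the
// local credential file. The key name and signature are illustrative assumptions.
func loadCloudConfig(ctx context.Context, cs kubernetes.Interface) ([]byte, error) {
	if sec, err := cs.CoreV1().Secrets("kube-system").Get(ctx, "azure-cloud-provider", metav1.GetOptions{}); err == nil {
		if cfg, ok := sec.Data["cloud-config"]; ok {
			return cfg, nil
		}
	}
	// Secret missing (as in this run): fall back to the credential file.
	path := os.Getenv("AZURE_CREDENTIAL_FILE")
	if path == "" {
		path = "/etc/kubernetes/azure.json"
	}
	return os.ReadFile(path)
}

Because the secret is absent in this cluster, every gRPC call that re-reads the config logs the same "failed to get secret ... not found" pair before succeeding via azure.json; in this run the message is noise rather than an error.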
I0130 07:59:20.456268 1 controllerserver.go:319] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-b753cdf4-37f7-4106-81b4-650ea55ef392) returned with <nil> I0130 07:59:20.456315 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=5.232176877 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-b753cdf4-37f7-4106-81b4-650ea55ef392" result_code="succeeded" I0130 07:59:20.456337 1 utils.go:84] GRPC response: {} I0130 07:59:25.958995 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume I0130 07:59:25.959028 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.disk.csi.azure.com/zone":"westus2-2","topology.kubernetes.io/zone":"westus2-2"}},{"segments":{"topology.disk.csi.azure.com/zone":"westus2-1","topology.kubernetes.io/zone":"westus2-1"}}],"requisite":[{"segments":{"topology.disk.csi.azure.com/zone":"westus2-1","topology.kubernetes.io/zone":"westus2-1"}},{"segments":{"topology.disk.csi.azure.com/zone":"westus2-2","topology.kubernetes.io/zone":"westus2-2"}}]},"capacity_range":{"required_bytes":10737418240},"name":"pvc-8c023d00-f175-4dca-b41d-969018f75a31","parameters":{"csi.storage.k8s.io/pv/name":"pvc-8c023d00-f175-4dca-b41d-969018f75a31","csi.storage.k8s.io/pvc/name":"pvc-f2t2l","csi.storage.k8s.io/pvc/namespace":"azuredisk-2540","enableAsyncAttach":"false","networkAccessPolicy":"DenyAll","skuName":"Standard_LRS","userAgent":"azuredisk-e2e-test"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]} I0130 07:59:25.959833 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider I0130 07:59:25.963032 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0130 07:59:25.963062 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider I0130 07:59:25.963068 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0130 07:59:25.963089 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully I0130 07:59:25.963447 1 azure_auth.go:253] Using AzurePublicCloud environment I0130 07:59:25.963484 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token I0130 07:59:25.963496 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 ... skipping 37 lines ... 
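The CreateVolume request just above carries the StorageClass parameters the test configured: skuName Standard_LRS, networkAccessPolicy DenyAll, enableAsyncAttach false, and a custom userAgent. For reference, a StorageClass producing such a request could be built with the Kubernetes API types roughly as follows; the object name and binding mode are illustrative, not taken from the test:

package e2eutil

import (
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// exampleStorageClass mirrors the parameters visible in the CreateVolume request above.
func exampleStorageClass() *storagev1.StorageClass {
	binding := storagev1.VolumeBindingWaitForFirstConsumer // illustrative choice
	return &storagev1.StorageClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "azuredisk-standard-lrs-denyall"},
		Provisioner: "disk.csi.azure.com",
		Parameters: map[string]string{
			"skuName":             "Standard_LRS",
			"networkAccessPolicy": "DenyAll",
			"enableAsyncAttach":   "false",
			"userAgent":           "azuredisk-e2e-test",
		},
		VolumeBindingMode: &binding,
	}
}

The csi.storage.k8s.io/pv/name, pvc/name, and pvc/namespace entries in the request are injected by the external-provisioner sidecar (its extra-create-metadata option) rather than coming from the StorageClass itself.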
I0130 07:59:30.426758 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-25433637-vmss000001","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-8c023d00-f175-4dca-b41d-969018f75a31","csi.storage.k8s.io/pvc/name":"pvc-f2t2l","csi.storage.k8s.io/pvc/namespace":"azuredisk-2540","enableAsyncAttach":"false","enableasyncattach":"false","networkAccessPolicy":"DenyAll","requestedsizegib":"10","skuName":"Standard_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1675065485573-8081-disk.csi.azure.com","userAgent":"azuredisk-e2e-test"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-8c023d00-f175-4dca-b41d-969018f75a31"} I0130 07:59:30.468011 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1218 I0130 07:59:30.468362 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-8c023d00-f175-4dca-b41d-969018f75a31. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-8c023d00-f175-4dca-b41d-969018f75a31 to node k8s-agentpool-25433637-vmss000001 (vmState Succeeded). I0130 07:59:30.468392 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-8c023d00-f175-4dca-b41d-969018f75a31 to node k8s-agentpool-25433637-vmss000001 I0130 07:59:30.468429 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-8c023d00-f175-4dca-b41d-969018f75a31 lun 0 to node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-8c023d00-f175-4dca-b41d-969018f75a31:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8c023d00-f175-4dca-b41d-969018f75a31 false 0})] I0130 07:59:30.468495 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-8c023d00-f175-4dca-b41d-969018f75a31:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8c023d00-f175-4dca-b41d-969018f75a31 false 0})]) I0130 07:59:30.685877 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-8c023d00-f175-4dca-b41d-969018f75a31:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8c023d00-f175-4dca-b41d-969018f75a31 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 07:59:40.781198 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 07:59:40.781267 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 07:59:40.781294 1 controllerserver.go:413] Attach operation successful: volume 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-8c023d00-f175-4dca-b41d-969018f75a31 attached to node k8s-agentpool-25433637-vmss000001. I0130 07:59:40.781311 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-8c023d00-f175-4dca-b41d-969018f75a31 to node k8s-agentpool-25433637-vmss000001 successfully I0130 07:59:40.781614 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.313016852 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-8c023d00-f175-4dca-b41d-969018f75a31" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 07:59:40.781662 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 39 lines ... I0130 08:00:26.889663 1 controllerserver.go:319] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-8c023d00-f175-4dca-b41d-969018f75a31) returned with <nil> I0130 08:00:26.889955 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=5.196114247 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-8c023d00-f175-4dca-b41d-969018f75a31" result_code="succeeded" I0130 08:00:26.890133 1 utils.go:84] GRPC response: {} I0130 08:00:32.463427 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume I0130 08:00:32.463660 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.disk.csi.azure.com/zone":"westus2-2","topology.kubernetes.io/zone":"westus2-2"}}],"requisite":[{"segments":{"topology.disk.csi.azure.com/zone":"westus2-2","topology.kubernetes.io/zone":"westus2-2"}}]},"capacity_range":{"required_bytes":1099511627776},"name":"pvc-f825adb8-a285-4f57-bb5a-7680ca6ed987","parameters":{"csi.storage.k8s.io/pv/name":"pvc-f825adb8-a285-4f57-bb5a-7680ca6ed987","csi.storage.k8s.io/pvc/name":"pvc-299kn","csi.storage.k8s.io/pvc/namespace":"azuredisk-4728","enableAsyncAttach":"false","enableBursting":"true","perfProfile":"Basic","skuName":"Premium_LRS","userAgent":"azuredisk-e2e-test"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]} I0130 08:00:32.464459 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider I0130 08:00:32.474925 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0130 08:00:32.474953 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider I0130 08:00:32.474964 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0130 08:00:32.474997 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully I0130 08:00:32.475475 1 azure_auth.go:253] Using 
AzurePublicCloud environment I0130 08:00:32.475548 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token I0130 08:00:32.475568 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 ... skipping 37 lines ... I0130 08:00:35.517377 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-25433637-vmss000001","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-f825adb8-a285-4f57-bb5a-7680ca6ed987","csi.storage.k8s.io/pvc/name":"pvc-299kn","csi.storage.k8s.io/pvc/namespace":"azuredisk-4728","enableAsyncAttach":"false","enableBursting":"true","enableasyncattach":"false","perfProfile":"Basic","requestedsizegib":"1024","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1675065485573-8081-disk.csi.azure.com","userAgent":"azuredisk-e2e-test"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-f825adb8-a285-4f57-bb5a-7680ca6ed987"} I0130 08:00:35.628697 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1338 I0130 08:00:35.629214 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-f825adb8-a285-4f57-bb5a-7680ca6ed987. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-f825adb8-a285-4f57-bb5a-7680ca6ed987 to node k8s-agentpool-25433637-vmss000001 (vmState Succeeded). I0130 08:00:35.629256 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-f825adb8-a285-4f57-bb5a-7680ca6ed987 to node k8s-agentpool-25433637-vmss000001 I0130 08:00:35.629293 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-f825adb8-a285-4f57-bb5a-7680ca6ed987 lun 0 to node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-f825adb8-a285-4f57-bb5a-7680ca6ed987:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f825adb8-a285-4f57-bb5a-7680ca6ed987 false 0})] I0130 08:00:35.629431 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-f825adb8-a285-4f57-bb5a-7680ca6ed987:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f825adb8-a285-4f57-bb5a-7680ca6ed987 false 0})]) I0130 08:00:35.826698 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-f825adb8-a285-4f57-bb5a-7680ca6ed987:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f825adb8-a285-4f57-bb5a-7680ca6ed987 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:00:46.039385 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:00:46.039514 1 azure_vmss_cache.go:313] 
updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:00:46.039542 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-f825adb8-a285-4f57-bb5a-7680ca6ed987 attached to node k8s-agentpool-25433637-vmss000001. I0130 08:00:46.039560 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-f825adb8-a285-4f57-bb5a-7680ca6ed987 to node k8s-agentpool-25433637-vmss000001 successfully I0130 08:00:46.039610 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.410391397 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-f825adb8-a285-4f57-bb5a-7680ca6ed987" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:00:46.039640 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 11 lines ... I0130 08:01:41.851143 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=15.258145581 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-f825adb8-a285-4f57-bb5a-7680ca6ed987" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:01:41.851221 1 utils.go:84] GRPC response: {} I0130 08:01:53.927714 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0130 08:01:53.927753 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-f825adb8-a285-4f57-bb5a-7680ca6ed987"} I0130 08:01:53.927854 1 controllerserver.go:317] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-f825adb8-a285-4f57-bb5a-7680ca6ed987) I0130 08:01:53.970157 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1338 I0130 08:02:23.927683 1 azure_armclient.go:547] Received error in delete.wait: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-f825adb8-a285-4f57-bb5a-7680ca6ed987, error: %!s(<nil>) I0130 08:02:23.927779 1 controllerserver.go:319] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-f825adb8-a285-4f57-bb5a-7680ca6ed987) returned with Retriable: true, RetryAfter: 0s, HTTPStatusCode: 0, RawError: Future#WaitForCompletion: context has been cancelled: StatusCode=200 -- Original Error: context deadline exceeded I0130 08:02:23.927878 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=29.99996167 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" 
source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-f825adb8-a285-4f57-bb5a-7680ca6ed987" result_code="failed_csi_driver_controller_delete_volume" E0130 08:02:23.927908 1 utils.go:82] GRPC error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: 0, RawError: Future#WaitForCompletion: context has been cancelled: StatusCode=200 -- Original Error: context deadline exceeded I0130 08:02:55.928765 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0130 08:02:55.928792 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-f825adb8-a285-4f57-bb5a-7680ca6ed987"} I0130 08:02:55.928870 1 controllerserver.go:317] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-f825adb8-a285-4f57-bb5a-7680ca6ed987) I0130 08:02:55.947387 1 util.go:124] Send.sendRequest got response with ContentLength 253, StatusCode 404 and responseBody length 253 I0130 08:02:55.947683 1 azure_diskclient.go:139] Received error in disk.get.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-f825adb8-a285-4f57-bb5a-7680ca6ed987, error: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 404, RawError: {"error":{"code":"ResourceNotFound","message":"The Resource 'Microsoft.Compute/disks/pvc-f825adb8-a285-4f57-bb5a-7680ca6ed987' under resource group 'kubetest-z5czzjqr' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"}} I0130 08:02:55.948015 1 azure_managedDiskController.go:299] azureDisk - disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-f825adb8-a285-4f57-bb5a-7680ca6ed987) is already deleted I0130 08:02:55.948041 1 controllerserver.go:319] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-f825adb8-a285-4f57-bb5a-7680ca6ed987) returned with <nil> I0130 08:02:55.948084 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=0.019189405 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-f825adb8-a285-4f57-bb5a-7680ca6ed987" result_code="succeeded" I0130 08:02:55.948112 1 utils.go:84] GRPC response: {} I0130 08:03:00.544310 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume I0130 08:03:00.544360 1 utils.go:78] GRPC request: 
{"accessibility_requirements":{"preferred":[{"segments":{"topology.disk.csi.azure.com/zone":"westus2-2","topology.kubernetes.io/zone":"westus2-2"}}],"requisite":[{"segments":{"topology.disk.csi.azure.com/zone":"westus2-2","topology.kubernetes.io/zone":"westus2-2"}}]},"capacity_range":{"required_bytes":10737418240},"name":"pvc-11934eb2-6d55-4056-a6ad-52e90632a93d","parameters":{"csi.storage.k8s.io/pv/name":"pvc-11934eb2-6d55-4056-a6ad-52e90632a93d","csi.storage.k8s.io/pvc/name":"pvc-69g2b","csi.storage.k8s.io/pvc/namespace":"azuredisk-5466","skuName":"StandardSSD_ZRS"},"volume_capabilities":[{"AccessType":{"Mount":{"mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}}]} ... skipping 9 lines ... I0130 08:03:03.615260 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-25433637-vmss000001","volume_capability":{"AccessType":{"Mount":{"mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-11934eb2-6d55-4056-a6ad-52e90632a93d","csi.storage.k8s.io/pvc/name":"pvc-69g2b","csi.storage.k8s.io/pvc/namespace":"azuredisk-5466","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1675065485573-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d"} I0130 08:03:03.641589 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0130 08:03:03.642163 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-11934eb2-6d55-4056-a6ad-52e90632a93d. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d to node k8s-agentpool-25433637-vmss000001 (vmState Succeeded). 
I0130 08:03:03.642203 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d to node k8s-agentpool-25433637-vmss000001 I0130 08:03:03.642281 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d lun 0 to node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-11934eb2-6d55-4056-a6ad-52e90632a93d false 0})] I0130 08:03:03.642427 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-11934eb2-6d55-4056-a6ad-52e90632a93d false 0})]) I0130 08:03:03.866390 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-11934eb2-6d55-4056-a6ad-52e90632a93d false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:03:13.998149 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:03:13.998191 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:03:13.998215 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d attached to node k8s-agentpool-25433637-vmss000001. I0130 08:03:13.998232 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d to node k8s-agentpool-25433637-vmss000001 successfully I0130 08:03:13.998281 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.35612792 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:03:13.998303 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 40 lines ... 
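Each ControllerPublishVolume above follows the same two-step pattern: GetDiskLun first checks whether the disk is already attached to the target VM (for a fresh disk it returns "cannot find Lun"), and only then is an attach issued, with the resulting LUN handed back in publish_context ("LUN":"0"). A condensed sketch of that decision; the interface and method signatures here are hypothetical stand-ins for the cloud-provider calls, not the driver's real API:

package e2eutil

import "fmt"

// diskAttacher abstracts the two cloud-provider operations visible in the log.
// Both method names are hypothetical.
type diskAttacher interface {
	// GetDiskLun returns an error when the disk is not attached to the node yet.
	GetDiskLun(diskURI, nodeName string) (int32, error)
	// AttachDisk attaches the disk and returns the LUN it was assigned.
	AttachDisk(diskURI, nodeName string) (int32, error)
}

// publishVolume returns the publish context handed back to the kubelet side,
// i.e. the {"LUN":"0"} seen in the GRPC responses above.
func publishVolume(a diskAttacher, diskURI, nodeName string) (map[string]string, error) {
	lun, err := a.GetDiskLun(diskURI, nodeName)
	if err != nil {
		// "cannot find Lun" case: the disk is not attached yet, so attach it now.
		lun, err = a.AttachDisk(diskURI, nodeName)
		if err != nil {
			return nil, fmt.Errorf("attach %s to %s: %w", diskURI, nodeName, err)
		}
	}
	return map[string]string{"LUN": fmt.Sprintf("%d", lun)}, nil
}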
I0130 08:04:13.085762 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-25433637-vmss000001","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-cd8d3b0f-331d-4879-958e-874effc3a1ef","csi.storage.k8s.io/pvc/name":"pvc-fs7kd","csi.storage.k8s.io/pvc/namespace":"azuredisk-2790","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1675065485573-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-cd8d3b0f-331d-4879-958e-874effc3a1ef"} I0130 08:04:13.133468 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0130 08:04:13.134032 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-cd8d3b0f-331d-4879-958e-874effc3a1ef. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-cd8d3b0f-331d-4879-958e-874effc3a1ef to node k8s-agentpool-25433637-vmss000001 (vmState Succeeded). I0130 08:04:13.134112 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-cd8d3b0f-331d-4879-958e-874effc3a1ef to node k8s-agentpool-25433637-vmss000001 I0130 08:04:13.134260 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-cd8d3b0f-331d-4879-958e-874effc3a1ef lun 0 to node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-cd8d3b0f-331d-4879-958e-874effc3a1ef:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-cd8d3b0f-331d-4879-958e-874effc3a1ef false 0})] I0130 08:04:13.134335 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-cd8d3b0f-331d-4879-958e-874effc3a1ef:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-cd8d3b0f-331d-4879-958e-874effc3a1ef false 0})]) I0130 08:04:13.278462 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-cd8d3b0f-331d-4879-958e-874effc3a1ef:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-cd8d3b0f-331d-4879-958e-874effc3a1ef false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:04:23.408489 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:04:23.408540 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:04:23.408568 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-cd8d3b0f-331d-4879-958e-874effc3a1ef attached to node 
k8s-agentpool-25433637-vmss000001. I0130 08:04:23.408586 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-cd8d3b0f-331d-4879-958e-874effc3a1ef to node k8s-agentpool-25433637-vmss000001 successfully I0130 08:04:23.408683 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.27464118 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-cd8d3b0f-331d-4879-958e-874effc3a1ef" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:04:23.408714 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 32 lines ... I0130 08:05:21.180794 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-25433637-vmss000001","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6","csi.storage.k8s.io/pvc/name":"pvc-4zhb8","csi.storage.k8s.io/pvc/namespace":"azuredisk-5356","requestedsizegib":"10","resourceGroup":"azuredisk-csi-driver-test-d493fc61-a074-11ed-822b-967d0a096fd9","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1675065485573-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-d493fc61-a074-11ed-822b-967d0a096fd9/providers/Microsoft.Compute/disks/pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6"} I0130 08:05:21.260900 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1238 I0130 08:05:21.261374 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-d493fc61-a074-11ed-822b-967d0a096fd9/providers/Microsoft.Compute/disks/pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6 to node k8s-agentpool-25433637-vmss000001 (vmState Succeeded). 
I0130 08:05:21.261404 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-d493fc61-a074-11ed-822b-967d0a096fd9/providers/Microsoft.Compute/disks/pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6 to node k8s-agentpool-25433637-vmss000001 I0130 08:05:21.261443 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-d493fc61-a074-11ed-822b-967d0a096fd9/providers/Microsoft.Compute/disks/pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6 lun 0 to node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-d493fc61-a074-11ed-822b-967d0a096fd9/providers/microsoft.compute/disks/pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6 false 0})] I0130 08:05:21.261492 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-d493fc61-a074-11ed-822b-967d0a096fd9/providers/microsoft.compute/disks/pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6 false 0})]) I0130 08:05:21.490106 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-d493fc61-a074-11ed-822b-967d0a096fd9/providers/microsoft.compute/disks/pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:05:31.614204 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:05:31.614245 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:05:31.614270 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-d493fc61-a074-11ed-822b-967d0a096fd9/providers/Microsoft.Compute/disks/pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6 attached to node k8s-agentpool-25433637-vmss000001. 
I0130 08:05:31.614287 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-d493fc61-a074-11ed-822b-967d0a096fd9/providers/Microsoft.Compute/disks/pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6 to node k8s-agentpool-25433637-vmss000001 successfully I0130 08:05:31.614335 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.352950902 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-d493fc61-a074-11ed-822b-967d0a096fd9/providers/Microsoft.Compute/disks/pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:05:31.614359 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 24 lines ... I0130 08:05:52.303282 1 azure_controller_common.go:398] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-d493fc61-a074-11ed-822b-967d0a096fd9/providers/Microsoft.Compute/disks/pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6 from node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-d493fc61-a074-11ed-822b-967d0a096fd9/providers/microsoft.compute/disks/pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6:pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6] E0130 08:05:52.303416 1 azure_controller_vmss.go:202] detach azure disk on node(k8s-agentpool-25433637-vmss000001): disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-d493fc61-a074-11ed-822b-967d0a096fd9/providers/microsoft.compute/disks/pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6:pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6]) not found I0130 08:05:52.303473 1 azure_controller_vmss.go:239] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-d493fc61-a074-11ed-822b-967d0a096fd9/providers/microsoft.compute/disks/pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6:pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6]) I0130 08:05:57.494266 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0130 08:05:57.494533 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-d493fc61-a074-11ed-822b-967d0a096fd9/providers/Microsoft.Compute/disks/pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6"} I0130 08:05:57.494687 1 controllerserver.go:317] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-d493fc61-a074-11ed-822b-967d0a096fd9/providers/Microsoft.Compute/disks/pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6) I0130 08:05:57.494800 1 controllerserver.go:319] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-d493fc61-a074-11ed-822b-967d0a096fd9/providers/Microsoft.Compute/disks/pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6) returned with failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-d493fc61-a074-11ed-822b-967d0a096fd9/providers/Microsoft.Compute/disks/pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6) since it's in attaching or detaching 
state I0130 08:05:57.495019 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=0.000291101 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-d493fc61-a074-11ed-822b-967d0a096fd9/providers/Microsoft.Compute/disks/pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6" result_code="failed_csi_driver_controller_delete_volume" E0130 08:05:57.495047 1 utils.go:82] GRPC error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-d493fc61-a074-11ed-822b-967d0a096fd9/providers/Microsoft.Compute/disks/pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6) since it's in attaching or detaching state I0130 08:05:57.498759 1 azure_controller_vmss.go:252] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-d493fc61-a074-11ed-822b-967d0a096fd9/providers/microsoft.compute/disks/pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6:pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6]) returned with <nil> I0130 08:05:57.499040 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:05:57.499077 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:05:57.499092 1 azure_controller_common.go:422] azureDisk - detach disk(pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-d493fc61-a074-11ed-822b-967d0a096fd9/providers/Microsoft.Compute/disks/pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6) succeeded I0130 08:05:57.499105 1 controllerserver.go:480] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-d493fc61-a074-11ed-822b-967d0a096fd9/providers/Microsoft.Compute/disks/pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6 from node k8s-agentpool-25433637-vmss000001 successfully I0130 08:05:57.499412 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=5.196141545 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-d493fc61-a074-11ed-822b-967d0a096fd9/providers/Microsoft.Compute/disks/pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" ... skipping 35 lines ... 
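The DeleteVolume failure here ("since it's in attaching or detaching state") is the driver refusing to delete a disk whose detach is still in flight; the detach that was already running completes moments later, after which a retried DeleteVolume can succeed. A sketch of that guard plus a caller-side retry loop, assuming a toy state table rather than the driver's real attach/detach tracking:

```go
// Illustrative sketch, assuming a toy in-memory state table -- not the driver's
// real attach/detach tracking. It shows why DeleteVolume fails fast while the
// disk is mid-detach, and how a caller retries until the detach has finished.
package main

import (
	"fmt"
	"time"
)

type diskState int

const (
	detached diskState = iota
	attaching
	detaching
)

var states = map[string]diskState{} // disk URI -> state (hypothetical store)

// deleteDisk refuses to delete a disk that is mid-attach or mid-detach, matching
// the "since it's in attaching or detaching state" error in the log above.
func deleteDisk(uri string) error {
	switch states[uri] {
	case attaching, detaching:
		return fmt.Errorf("failed to delete disk(%s) since it's in attaching or detaching state", uri)
	default:
		delete(states, uri)
		return nil
	}
}

func main() {
	uri := ".../disks/pvc-69eaf15a"
	states[uri] = detaching

	for attempt := 1; ; attempt++ {
		if err := deleteDisk(uri); err != nil {
			fmt.Println("retryable error:", err)
			if attempt == 2 {
				states[uri] = detached // stands in for the async detach completing
			}
			time.Sleep(20 * time.Millisecond)
			continue
		}
		fmt.Println("disk deleted")
		return
	}
}
```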
I0130 08:06:44.970598 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-074ee5b0-a075-11ed-822b-967d0a096fd9/providers/Microsoft.Compute/disks/pvc-46c8c948-dc76-4548-9e43-dc1141bf9848 to node k8s-agentpool-25433637-vmss000001 I0130 08:06:44.970661 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-074ee5b0-a075-11ed-822b-967d0a096fd9/providers/Microsoft.Compute/disks/pvc-46c8c948-dc76-4548-9e43-dc1141bf9848 lun 0 to node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-074ee5b0-a075-11ed-822b-967d0a096fd9/providers/microsoft.compute/disks/pvc-46c8c948-dc76-4548-9e43-dc1141bf9848:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-46c8c948-dc76-4548-9e43-dc1141bf9848 false 0})] I0130 08:06:44.970764 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-074ee5b0-a075-11ed-822b-967d0a096fd9/providers/microsoft.compute/disks/pvc-46c8c948-dc76-4548-9e43-dc1141bf9848:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-46c8c948-dc76-4548-9e43-dc1141bf9848 false 0})]) I0130 08:06:44.974126 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1238 I0130 08:06:44.974393 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-82710093-2994-4747-8213-ebc5dfa69a10. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-06ca4a5d-a075-11ed-822b-967d0a096fd9/providers/Microsoft.Compute/disks/pvc-82710093-2994-4747-8213-ebc5dfa69a10 to node k8s-agentpool-25433637-vmss000001 (vmState Succeeded). I0130 08:06:44.974461 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-06ca4a5d-a075-11ed-822b-967d0a096fd9/providers/Microsoft.Compute/disks/pvc-82710093-2994-4747-8213-ebc5dfa69a10 to node k8s-agentpool-25433637-vmss000001 I0130 08:06:46.010190 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-074ee5b0-a075-11ed-822b-967d0a096fd9/providers/microsoft.compute/disks/pvc-46c8c948-dc76-4548-9e43-dc1141bf9848:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-46c8c948-dc76-4548-9e43-dc1141bf9848 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:06:56.142662 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:06:56.142713 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:06:56.142749 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-074ee5b0-a075-11ed-822b-967d0a096fd9/providers/Microsoft.Compute/disks/pvc-46c8c948-dc76-4548-9e43-dc1141bf9848 attached to node k8s-agentpool-25433637-vmss000001. 
I0130 08:06:56.142768 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-074ee5b0-a075-11ed-822b-967d0a096fd9/providers/Microsoft.Compute/disks/pvc-46c8c948-dc76-4548-9e43-dc1141bf9848 to node k8s-agentpool-25433637-vmss000001 successfully I0130 08:06:56.142872 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=11.172254517 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-074ee5b0-a075-11ed-822b-967d0a096fd9/providers/Microsoft.Compute/disks/pvc-46c8c948-dc76-4548-9e43-dc1141bf9848" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:06:56.142904 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 4 lines ... I0130 08:06:56.199071 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1466 I0130 08:06:56.199634 1 azure_controller_common.go:516] azureDisk - find disk: lun 0 name pvc-46c8c948-dc76-4548-9e43-dc1141bf9848 uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-074ee5b0-a075-11ed-822b-967d0a096fd9/providers/Microsoft.Compute/disks/pvc-46c8c948-dc76-4548-9e43-dc1141bf9848 I0130 08:06:56.199659 1 controllerserver.go:383] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-074ee5b0-a075-11ed-822b-967d0a096fd9/providers/Microsoft.Compute/disks/pvc-46c8c948-dc76-4548-9e43-dc1141bf9848 to node k8s-agentpool-25433637-vmss000001 (vmState Succeeded). I0130 08:06:56.199676 1 controllerserver.go:398] Attach operation is successful. volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-074ee5b0-a075-11ed-822b-967d0a096fd9/providers/Microsoft.Compute/disks/pvc-46c8c948-dc76-4548-9e43-dc1141bf9848 is already attached to node k8s-agentpool-25433637-vmss000001 at lun 0. 
I0130 08:06:56.199725 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=8.35e-05 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-074ee5b0-a075-11ed-822b-967d0a096fd9/providers/Microsoft.Compute/disks/pvc-46c8c948-dc76-4548-9e43-dc1141bf9848" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:06:56.199745 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0130 08:06:56.369441 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-06ca4a5d-a075-11ed-822b-967d0a096fd9/providers/microsoft.compute/disks/pvc-82710093-2994-4747-8213-ebc5dfa69a10:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-82710093-2994-4747-8213-ebc5dfa69a10 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:07:06.492592 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:07:06.492641 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:07:06.492665 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-06ca4a5d-a075-11ed-822b-967d0a096fd9/providers/Microsoft.Compute/disks/pvc-82710093-2994-4747-8213-ebc5dfa69a10 attached to node k8s-agentpool-25433637-vmss000001. I0130 08:07:06.492946 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-06ca4a5d-a075-11ed-822b-967d0a096fd9/providers/Microsoft.Compute/disks/pvc-82710093-2994-4747-8213-ebc5dfa69a10 to node k8s-agentpool-25433637-vmss000001 successfully I0130 08:07:06.493013 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=21.518602077 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-06ca4a5d-a075-11ed-822b-967d0a096fd9/providers/Microsoft.Compute/disks/pvc-82710093-2994-4747-8213-ebc5dfa69a10" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:07:06.493041 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} ... skipping 67 lines ... I0130 08:08:50.666736 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1207 I0130 08:08:50.725281 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 24989 I0130 08:08:50.728372 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-f6078c8c-707e-47da-b4ee-7fd3012f5693. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-f6078c8c-707e-47da-b4ee-7fd3012f5693 to node k8s-agentpool-25433637-vmss000001 (vmState Succeeded). 
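Two details stand out in this block: the repeated publish for pvc-46c8c948 returns in 8.35e-05 s because GetDiskLun already finds it at lun 0 and no VMSS update is needed, and the second disk on the same node (pvc-82710093) lands at lun 1 because lun 0 is taken. A sketch of picking the lowest free LUN under that constraint; the helper is hypothetical, not the driver's LUN allocator:

```go
// Hypothetical lowest-free-LUN picker -- a sketch of the behaviour visible in
// the log (first disk at LUN 0, the next disk on the same node at LUN 1), not
// the driver's actual allocator.
package main

import "fmt"

// nextFreeLun returns the smallest LUN in [0, maxLuns) not already used.
// maxLuns depends on the VM size's data-disk limit.
func nextFreeLun(used []int32, maxLuns int32) (int32, error) {
	taken := make(map[int32]bool, len(used))
	for _, l := range used {
		taken[l] = true
	}
	for lun := int32(0); lun < maxLuns; lun++ {
		if !taken[lun] {
			return lun, nil
		}
	}
	return -1, fmt.Errorf("all %d LUNs are in use", maxLuns)
}

func main() {
	fmt.Println(nextFreeLun(nil, 8))           // 0 <nil> -- first disk on the node
	fmt.Println(nextFreeLun([]int32{0}, 8))    // 1 <nil> -- matches LUN 1 in the log
	fmt.Println(nextFreeLun([]int32{0, 1}, 8)) // 2 <nil>
}
```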
I0130 08:08:50.728432 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-f6078c8c-707e-47da-b4ee-7fd3012f5693 to node k8s-agentpool-25433637-vmss000001 I0130 08:08:50.728476 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-f6078c8c-707e-47da-b4ee-7fd3012f5693 lun 0 to node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-f6078c8c-707e-47da-b4ee-7fd3012f5693:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f6078c8c-707e-47da-b4ee-7fd3012f5693 false 0})] I0130 08:08:50.728530 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-f6078c8c-707e-47da-b4ee-7fd3012f5693:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f6078c8c-707e-47da-b4ee-7fd3012f5693 false 0})]) I0130 08:08:50.930197 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-f6078c8c-707e-47da-b4ee-7fd3012f5693:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f6078c8c-707e-47da-b4ee-7fd3012f5693 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:09:26.257469 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:09:26.258674 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:09:26.258711 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-f6078c8c-707e-47da-b4ee-7fd3012f5693 attached to node k8s-agentpool-25433637-vmss000001. I0130 08:09:26.258761 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-f6078c8c-707e-47da-b4ee-7fd3012f5693 to node k8s-agentpool-25433637-vmss000001 successfully I0130 08:09:26.258858 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=35.591760241 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-f6078c8c-707e-47da-b4ee-7fd3012f5693" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:09:26.258879 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 40 lines ... 
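The "Observed Request Latency" records give a quick read on where time goes: roughly 10 s for a routine controller_publish_volume, 35.59 s for this attach where the VMSS update was slow, and tens of microseconds when the disk is already attached. A hypothetical timing wrapper that emits a line of the same shape (the field names are copied from the log; everything else is assumed):

```go
// Hypothetical timing wrapper that emits a line shaped like the
// "Observed Request Latency" records above. The field names are copied from
// the log; the wrapper itself is an illustration, not azure_metrics.go.
package main

import (
	"fmt"
	"time"
)

// observe runs op, then reports how long it took with the labels the driver logs.
func observe(request, resourceGroup, result string, op func() error) error {
	start := time.Now()
	err := op()
	fmt.Printf("\"Observed Request Latency\" latency_seconds=%v request=%q resource_group=%q result_code=%q\n",
		time.Since(start).Seconds(), request, resourceGroup, result)
	return err
}

func main() {
	_ = observe("azuredisk_csi_driver_controller_publish_volume", "kubetest-z5czzjqr", "succeeded",
		func() error { time.Sleep(10 * time.Millisecond); return nil }) // stands in for the VMSS attach
}
```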
I0130 08:10:44.999896 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-25433637-vmss000001","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506","csi.storage.k8s.io/pvc/name":"pvc-tqtc2","csi.storage.k8s.io/pvc/namespace":"azuredisk-2888","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1675065485573-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506"} I0130 08:10:45.023575 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0130 08:10:45.024036 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506 to node k8s-agentpool-25433637-vmss000001 (vmState Succeeded). I0130 08:10:45.024070 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506 to node k8s-agentpool-25433637-vmss000001 I0130 08:10:45.026831 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506 lun 0 to node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506 false 0})] I0130 08:10:45.027052 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506 false 0})]) I0130 08:10:45.170632 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:10:55.256580 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:10:55.256618 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:10:55.256642 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506 attached to node 
k8s-agentpool-25433637-vmss000001. I0130 08:10:55.256674 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506 to node k8s-agentpool-25433637-vmss000001 successfully I0130 08:10:55.256716 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.232678795 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:10:55.256738 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 11 lines ... I0130 08:11:09.259514 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-25433637-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-ec3ee74d-b9ef-4730-bd55-5b5350e5195c","csi.storage.k8s.io/pvc/name":"pvc-4ntvg","csi.storage.k8s.io/pvc/namespace":"azuredisk-2888","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1675065485573-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-ec3ee74d-b9ef-4730-bd55-5b5350e5195c"} I0130 08:11:09.283390 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0130 08:11:09.283857 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-ec3ee74d-b9ef-4730-bd55-5b5350e5195c. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-ec3ee74d-b9ef-4730-bd55-5b5350e5195c to node k8s-agentpool-25433637-vmss000000 (vmState Succeeded). 
I0130 08:11:09.283889 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-ec3ee74d-b9ef-4730-bd55-5b5350e5195c to node k8s-agentpool-25433637-vmss000000 I0130 08:11:09.284000 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-ec3ee74d-b9ef-4730-bd55-5b5350e5195c lun 0 to node k8s-agentpool-25433637-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-ec3ee74d-b9ef-4730-bd55-5b5350e5195c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ec3ee74d-b9ef-4730-bd55-5b5350e5195c false 0})] I0130 08:11:09.284045 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-ec3ee74d-b9ef-4730-bd55-5b5350e5195c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ec3ee74d-b9ef-4730-bd55-5b5350e5195c false 0})]) I0130 08:11:09.454336 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-ec3ee74d-b9ef-4730-bd55-5b5350e5195c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ec3ee74d-b9ef-4730-bd55-5b5350e5195c false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:11:19.555637 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000000) successfully I0130 08:11:19.555677 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000000) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:11:19.555716 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-ec3ee74d-b9ef-4730-bd55-5b5350e5195c attached to node k8s-agentpool-25433637-vmss000000. I0130 08:11:19.555731 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-ec3ee74d-b9ef-4730-bd55-5b5350e5195c to node k8s-agentpool-25433637-vmss000000 successfully I0130 08:11:19.556059 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.271948734 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-ec3ee74d-b9ef-4730-bd55-5b5350e5195c" node="k8s-agentpool-25433637-vmss000000" result_code="succeeded" I0130 08:11:19.556087 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 19 lines ... 
I0130 08:11:33.579143 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-25433637-vmss000001","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1","csi.storage.k8s.io/pvc/name":"pvc-448f9","csi.storage.k8s.io/pvc/namespace":"azuredisk-2888","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1675065485573-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1"} I0130 08:11:33.602048 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0130 08:11:33.602394 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1 to node k8s-agentpool-25433637-vmss000001 (vmState Succeeded). I0130 08:11:33.602429 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1 to node k8s-agentpool-25433637-vmss000001 I0130 08:11:33.602467 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1 lun 1 to node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1 false 1})] I0130 08:11:33.602588 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1 false 1})]) I0130 08:11:33.790991 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:11:43.895662 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:11:43.895724 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:11:43.895749 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1 attached to node 
k8s-agentpool-25433637-vmss000001. I0130 08:11:43.895790 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1 to node k8s-agentpool-25433637-vmss000001 successfully I0130 08:11:43.896017 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.293439438 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:11:43.896058 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} ... skipping 24 lines ... I0130 08:12:37.588039 1 azure_controller_common.go:398] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1 from node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1:pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1] E0130 08:12:37.588094 1 azure_controller_vmss.go:202] detach azure disk on node(k8s-agentpool-25433637-vmss000001): disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1:pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1]) not found I0130 08:12:37.588113 1 azure_controller_vmss.go:239] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1:pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1]) I0130 08:12:41.909923 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0130 08:12:41.909971 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1"} I0130 08:12:41.910071 1 controllerserver.go:317] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1) I0130 08:12:41.910105 1 controllerserver.go:319] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1) returned with failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1) since it's in attaching or detaching state I0130 08:12:41.910178 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=6.89e-05 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" 
volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1" result_code="failed_csi_driver_controller_delete_volume" E0130 08:12:41.910196 1 utils.go:82] GRPC error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1) since it's in attaching or detaching state I0130 08:12:42.904876 1 azure_controller_vmss.go:252] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1:pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1]) returned with <nil> I0130 08:12:42.904950 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:12:42.904971 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:12:42.904986 1 azure_controller_common.go:422] azureDisk - detach disk(pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1) succeeded I0130 08:12:42.904998 1 controllerserver.go:480] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1 from node k8s-agentpool-25433637-vmss000001 successfully I0130 08:12:42.905040 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=5.31723008 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-76d5fa8d-70df-4c8b-96ac-05eedb2e6be1" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" ... skipping 46 lines ... 
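Every attach and detach in this log is followed by the same pair of lines: DeleteCacheForNode and then updateCache for the node's cache key. The cached VMSS view for that instance is dropped and rebuilt so the next GetDiskLun sees the update that just finished. A minimal sketch of that invalidate-then-refresh pattern, assuming a toy cache rather than the driver's azure_vmss_cache:

```go
// Minimal sketch of the invalidate-then-refresh pattern the log repeats after
// every attach/detach (DeleteCacheForNode followed by updateCache). The types
// here are assumptions for illustration, not the driver's azure_vmss_cache.
package main

import (
	"fmt"
	"sync"
)

type vmView struct{ luns map[string]int32 } // disk URI -> LUN, as seen on the VM

type nodeCache struct {
	mu    sync.Mutex
	views map[string]vmView        // node name -> cached view
	fetch func(node string) vmView // source of truth (the VMSS API in reality)
}

// refresh drops the cached entry for node and repopulates it, so later LUN
// lookups reflect the attach/detach that just completed.
func (c *nodeCache) refresh(node string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.views, node)         // DeleteCacheForNode(...)
	c.views[node] = c.fetch(node) // updateCache(...) for the cache key
}

func main() {
	live := map[string]vmView{
		"k8s-agentpool-25433637-vmss000001": {luns: map[string]int32{".../pvc-76d5fa8d": 0}},
	}
	c := &nodeCache{
		views: map[string]vmView{},
		fetch: func(node string) vmView { return live[node] },
	}
	c.refresh("k8s-agentpool-25433637-vmss000001")
	fmt.Println(c.views["k8s-agentpool-25433637-vmss000001"].luns)
}
```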
I0130 08:15:02.733170 1 azure_controller_common.go:398] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506 from node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506:pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506] E0130 08:15:02.733222 1 azure_controller_vmss.go:202] detach azure disk on node(k8s-agentpool-25433637-vmss000001): disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506:pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506]) not found I0130 08:15:02.733264 1 azure_controller_vmss.go:239] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506:pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506]) I0130 08:15:03.044919 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0130 08:15:03.044952 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506"} I0130 08:15:03.045178 1 controllerserver.go:317] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506) I0130 08:15:03.045391 1 controllerserver.go:319] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506) returned with failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506) since it's in attaching or detaching state I0130 08:15:03.045679 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=0.000463503 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506" result_code="failed_csi_driver_controller_delete_volume" E0130 08:15:03.045706 1 utils.go:82] GRPC error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506) since it's in attaching or detaching state I0130 08:15:07.953000 1 azure_controller_vmss.go:252] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506:pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506]) returned with <nil> I0130 08:15:07.953072 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:15:07.953092 1 
azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:15:07.953122 1 azure_controller_common.go:422] azureDisk - detach disk(pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506) succeeded I0130 08:15:07.953136 1 controllerserver.go:480] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506 from node k8s-agentpool-25433637-vmss000001 successfully I0130 08:15:07.953178 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=5.220247721 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-7918cbf5-5832-4ebd-9bf4-e9b064f91506" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" ... skipping 21 lines ... I0130 08:15:33.009111 1 azure_vmss_cache.go:327] refresh the cache of NonVmssUniformNodesCache in rg map[kubetest-z5czzjqr:{}] I0130 08:15:33.036263 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 12 I0130 08:15:33.036428 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-e8f56f47-f1f0-4955-8c95-38dab82bc679. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-e8f56f47-f1f0-4955-8c95-38dab82bc679 to node k8s-agentpool-25433637-vmss000001 (vmState Succeeded). 
I0130 08:15:33.036471 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-e8f56f47-f1f0-4955-8c95-38dab82bc679 to node k8s-agentpool-25433637-vmss000001 I0130 08:15:33.036525 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-e8f56f47-f1f0-4955-8c95-38dab82bc679 lun 0 to node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-e8f56f47-f1f0-4955-8c95-38dab82bc679:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e8f56f47-f1f0-4955-8c95-38dab82bc679 false 0})] I0130 08:15:33.036579 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-e8f56f47-f1f0-4955-8c95-38dab82bc679:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e8f56f47-f1f0-4955-8c95-38dab82bc679 false 0})]) I0130 08:15:33.278895 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-e8f56f47-f1f0-4955-8c95-38dab82bc679:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e8f56f47-f1f0-4955-8c95-38dab82bc679 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:15:43.370343 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:15:43.370425 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:15:43.370449 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-e8f56f47-f1f0-4955-8c95-38dab82bc679 attached to node k8s-agentpool-25433637-vmss000001. I0130 08:15:43.370466 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-e8f56f47-f1f0-4955-8c95-38dab82bc679 to node k8s-agentpool-25433637-vmss000001 successfully I0130 08:15:43.370514 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.3613837 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-e8f56f47-f1f0-4955-8c95-38dab82bc679" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:15:43.370571 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 57 lines ... 
I0130 08:18:28.511133 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-25433637-vmss000001","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-be32cbee-0428-4299-99a2-2d93c55ed8fd","csi.storage.k8s.io/pvc/name":"pvc-7r866","csi.storage.k8s.io/pvc/namespace":"azuredisk-59","fsType":"xfs","requestedsizegib":"10","skuName":"Standard_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1675065485573-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-be32cbee-0428-4299-99a2-2d93c55ed8fd"} I0130 08:18:28.535244 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1217 I0130 08:18:28.535560 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-be32cbee-0428-4299-99a2-2d93c55ed8fd. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-be32cbee-0428-4299-99a2-2d93c55ed8fd to node k8s-agentpool-25433637-vmss000001 (vmState Succeeded). I0130 08:18:28.535613 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-be32cbee-0428-4299-99a2-2d93c55ed8fd to node k8s-agentpool-25433637-vmss000001 I0130 08:18:28.535658 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-be32cbee-0428-4299-99a2-2d93c55ed8fd lun 0 to node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-be32cbee-0428-4299-99a2-2d93c55ed8fd:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-be32cbee-0428-4299-99a2-2d93c55ed8fd false 0})] I0130 08:18:28.535708 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-be32cbee-0428-4299-99a2-2d93c55ed8fd:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-be32cbee-0428-4299-99a2-2d93c55ed8fd false 0})]) I0130 08:18:28.667356 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-be32cbee-0428-4299-99a2-2d93c55ed8fd:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-be32cbee-0428-4299-99a2-2d93c55ed8fd false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:18:38.786150 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:18:38.786238 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:18:38.786287 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-be32cbee-0428-4299-99a2-2d93c55ed8fd 
attached to node k8s-agentpool-25433637-vmss000001. I0130 08:18:38.786306 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-be32cbee-0428-4299-99a2-2d93c55ed8fd to node k8s-agentpool-25433637-vmss000001 successfully I0130 08:18:38.786355 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.250787651 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-be32cbee-0428-4299-99a2-2d93c55ed8fd" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:18:38.786382 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 33 lines ... I0130 08:19:13.070539 1 azure_controller_common.go:422] azureDisk - detach disk(pvc-be32cbee-0428-4299-99a2-2d93c55ed8fd, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-be32cbee-0428-4299-99a2-2d93c55ed8fd) succeeded I0130 08:19:13.070563 1 controllerserver.go:480] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-be32cbee-0428-4299-99a2-2d93c55ed8fd from node k8s-agentpool-25433637-vmss000001 successfully I0130 08:19:13.070616 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=15.346656476 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-be32cbee-0428-4299-99a2-2d93c55ed8fd" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:19:13.070634 1 utils.go:84] GRPC response: {} I0130 08:19:13.070785 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-a51e9d11-6bdb-49db-a8cc-d984e208af71 lun 0 to node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-a51e9d11-6bdb-49db-a8cc-d984e208af71:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a51e9d11-6bdb-49db-a8cc-d984e208af71 false 0})] I0130 08:19:13.070834 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-a51e9d11-6bdb-49db-a8cc-d984e208af71:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a51e9d11-6bdb-49db-a8cc-d984e208af71 false 0})]) I0130 08:19:13.230592 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-a51e9d11-6bdb-49db-a8cc-d984e208af71:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a51e9d11-6bdb-49db-a8cc-d984e208af71 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:19:23.466237 1 
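The unpublish path that follows is the mirror image of the attach path: detach the disk from the VM with one update, refresh the node cache, and answer ControllerUnpublishVolume with an empty response. The E-level "disk list ... not found" line followed by a successful detach suggests the call tolerates a disk that has already left the VM's model; CSI controllers are expected to treat unpublish as idempotent. A sketch of that, reusing the same hypothetical VM view as the earlier sketch:

```go
// Sketch of an idempotent ControllerUnpublishVolume, mirroring the detach
// sequence in the log: remove the disk from the node's disk table and return
// an empty response, succeeding even if the disk is already gone. The fakeVM
// type is the same hypothetical stand-in used in the earlier sketches.
package main

import "fmt"

type fakeVM struct{ disks map[string]int32 } // disk URI -> LUN

// unpublishVolume detaches diskURI from the node if present; it is a no-op
// (but still a success) when the disk is not attached, matching the
// "disk list ... not found" case followed by a successful detach above.
func unpublishVolume(vm *fakeVM, diskURI string) map[string]string {
	if _, ok := vm.disks[diskURI]; ok {
		delete(vm.disks, diskURI) // one VMSS update in the real driver
	}
	return map[string]string{} // CSI unpublish returns an empty response body
}

func main() {
	vm := &fakeVM{disks: map[string]int32{".../disks/pvc-be32cbee": 0}}
	fmt.Println(unpublishVolume(vm, ".../disks/pvc-be32cbee")) // map[] -- detached
	fmt.Println(unpublishVolume(vm, ".../disks/pvc-be32cbee")) // map[] -- already gone, still success
}
```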
azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:19:23.466282 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:19:23.466306 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-a51e9d11-6bdb-49db-a8cc-d984e208af71 attached to node k8s-agentpool-25433637-vmss000001. I0130 08:19:23.466323 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-a51e9d11-6bdb-49db-a8cc-d984e208af71 to node k8s-agentpool-25433637-vmss000001 successfully I0130 08:19:23.466371 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=15.607246499 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-a51e9d11-6bdb-49db-a8cc-d984e208af71" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:19:23.466401 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 40 lines ... I0130 08:20:42.619529 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-25433637-vmss000001","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-cb64df68-19ac-4728-8b7b-65bfca00fc21","csi.storage.k8s.io/pvc/name":"pvc-428k5","csi.storage.k8s.io/pvc/namespace":"azuredisk-2546","fsType":"xfs","networkAccessPolicy":"DenyAll","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1675065485573-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-cb64df68-19ac-4728-8b7b-65bfca00fc21"} I0130 08:20:42.643680 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1192 I0130 08:20:42.644144 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-cb64df68-19ac-4728-8b7b-65bfca00fc21. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-cb64df68-19ac-4728-8b7b-65bfca00fc21 to node k8s-agentpool-25433637-vmss000001 (vmState Succeeded). 
I0130 08:20:42.644182 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-cb64df68-19ac-4728-8b7b-65bfca00fc21 to node k8s-agentpool-25433637-vmss000001 I0130 08:20:42.644221 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-cb64df68-19ac-4728-8b7b-65bfca00fc21 lun 0 to node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-cb64df68-19ac-4728-8b7b-65bfca00fc21:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-cb64df68-19ac-4728-8b7b-65bfca00fc21 false 0})] I0130 08:20:42.644274 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-cb64df68-19ac-4728-8b7b-65bfca00fc21:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-cb64df68-19ac-4728-8b7b-65bfca00fc21 false 0})]) I0130 08:20:42.808691 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-cb64df68-19ac-4728-8b7b-65bfca00fc21:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-cb64df68-19ac-4728-8b7b-65bfca00fc21 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:20:52.917009 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:20:52.917062 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:20:52.917088 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-cb64df68-19ac-4728-8b7b-65bfca00fc21 attached to node k8s-agentpool-25433637-vmss000001. I0130 08:20:52.917105 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-cb64df68-19ac-4728-8b7b-65bfca00fc21 to node k8s-agentpool-25433637-vmss000001 successfully I0130 08:20:52.917157 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.273005998 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-cb64df68-19ac-4728-8b7b-65bfca00fc21" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:20:52.917180 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 33 lines ... 
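The attach cycle recorded above follows the usual CSI ControllerPublishVolume shape: GetDiskLun finds no existing LUN for the disk, the disk is attached to the scale-set VM at a free LUN, the cached VMSS view of the node is dropped and rebuilt (DeleteCacheForNode/updateCache), and the assigned LUN is returned to the caller in publish_context. A minimal Go sketch of that sequence, written against a hypothetical diskAttacher interface rather than the driver's actual types:

// Illustrative sketch of the attach path recorded above; the diskAttacher
// interface and function names are assumptions for the example, not the
// driver's real API.
package sketch

import "context"

type diskAttacher interface {
	// GetDiskLun reports the LUN the disk already occupies on the node, if any.
	GetDiskLun(ctx context.Context, diskURI, node string) (int32, error)
	// AttachDisk attaches the disk to the node at a free LUN and returns it.
	AttachDisk(ctx context.Context, diskURI, node string) (int32, error)
	// RefreshNodeCache drops and rebuilds the cached VMSS view of the node.
	RefreshNodeCache(ctx context.Context, node string)
}

func publishVolume(ctx context.Context, cloud diskAttacher, diskURI, node string) (int32, error) {
	// Already attached by an earlier, retried call? Then just echo the LUN.
	if lun, err := cloud.GetDiskLun(ctx, diskURI, node); err == nil {
		return lun, nil
	}
	// "cannot find Lun for disk ..." in the log: attach it now.
	lun, err := cloud.AttachDisk(ctx, diskURI, node)
	if err != nil {
		return 0, err
	}
	// Mirrors DeleteCacheForNode/updateCache above, so later calls see the
	// new data-disk list; the LUN then goes back in publish_context.
	cloud.RefreshNodeCache(ctx, node)
	return lun, nil
}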
I0130 08:21:23.268174 1 azure_controller_common.go:422] azureDisk - detach disk(pvc-cb64df68-19ac-4728-8b7b-65bfca00fc21, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-cb64df68-19ac-4728-8b7b-65bfca00fc21) succeeded I0130 08:21:23.268204 1 controllerserver.go:480] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-cb64df68-19ac-4728-8b7b-65bfca00fc21 from node k8s-agentpool-25433637-vmss000001 successfully I0130 08:21:23.268250 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=15.339892839000001 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-cb64df68-19ac-4728-8b7b-65bfca00fc21" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:21:23.268269 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-36e31c1b-2026-46ed-bab3-470d0113c78d lun 0 to node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-36e31c1b-2026-46ed-bab3-470d0113c78d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-36e31c1b-2026-46ed-bab3-470d0113c78d false 0})] I0130 08:21:23.268322 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-36e31c1b-2026-46ed-bab3-470d0113c78d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-36e31c1b-2026-46ed-bab3-470d0113c78d false 0})]) I0130 08:21:23.268266 1 utils.go:84] GRPC response: {} I0130 08:21:23.437718 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-36e31c1b-2026-46ed-bab3-470d0113c78d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-36e31c1b-2026-46ed-bab3-470d0113c78d false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:21:33.556794 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:21:33.556843 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:21:33.556867 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-36e31c1b-2026-46ed-bab3-470d0113c78d attached to node k8s-agentpool-25433637-vmss000001. 
I0130 08:21:33.556883 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-36e31c1b-2026-46ed-bab3-470d0113c78d to node k8s-agentpool-25433637-vmss000001 successfully I0130 08:21:33.556934 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=21.553833798 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-36e31c1b-2026-46ed-bab3-470d0113c78d" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:21:33.556963 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 83 lines ... I0130 08:22:45.123462 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0130 08:22:45.123672 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-9a272d4a-ef9b-479a-9011-cbbb470b34f4. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-9a272d4a-ef9b-479a-9011-cbbb470b34f4 to node k8s-agentpool-25433637-vmss000001 (vmState Succeeded). I0130 08:22:45.123702 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-9a272d4a-ef9b-479a-9011-cbbb470b34f4 to node k8s-agentpool-25433637-vmss000001 I0130 08:22:45.131084 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0130 08:22:45.131464 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-564c0a64-3a39-4473-aecd-a99fdd8d02fb. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-564c0a64-3a39-4473-aecd-a99fdd8d02fb to node k8s-agentpool-25433637-vmss000001 (vmState Succeeded). 
I0130 08:22:45.131543 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-564c0a64-3a39-4473-aecd-a99fdd8d02fb to node k8s-agentpool-25433637-vmss000001 I0130 08:22:45.288691 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-f417e82b-387b-4780-a0ef-f6b63a72c69f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f417e82b-387b-4780-a0ef-f6b63a72c69f false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:22:55.543845 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:22:55.544007 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:22:55.544137 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-f417e82b-387b-4780-a0ef-f6b63a72c69f attached to node k8s-agentpool-25433637-vmss000001. I0130 08:22:55.544226 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-f417e82b-387b-4780-a0ef-f6b63a72c69f to node k8s-agentpool-25433637-vmss000001 successfully I0130 08:22:55.544417 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.422737569 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-f417e82b-387b-4780-a0ef-f6b63a72c69f" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:22:55.544487 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0130 08:22:55.545282 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-9a272d4a-ef9b-479a-9011-cbbb470b34f4 lun 1 to node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-564c0a64-3a39-4473-aecd-a99fdd8d02fb:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-564c0a64-3a39-4473-aecd-a99fdd8d02fb false 2}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-9a272d4a-ef9b-479a-9011-cbbb470b34f4:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-9a272d4a-ef9b-479a-9011-cbbb470b34f4 false 1})] I0130 08:22:55.545470 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-564c0a64-3a39-4473-aecd-a99fdd8d02fb:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-564c0a64-3a39-4473-aecd-a99fdd8d02fb false 2}) 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-9a272d4a-ef9b-479a-9011-cbbb470b34f4:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-9a272d4a-ef9b-479a-9011-cbbb470b34f4 false 1})]) I0130 08:22:56.363740 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-564c0a64-3a39-4473-aecd-a99fdd8d02fb:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-564c0a64-3a39-4473-aecd-a99fdd8d02fb false 2}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-9a272d4a-ef9b-479a-9011-cbbb470b34f4:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-9a272d4a-ef9b-479a-9011-cbbb470b34f4 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:23:06.534691 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:23:06.534733 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:23:06.534767 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-9a272d4a-ef9b-479a-9011-cbbb470b34f4 attached to node k8s-agentpool-25433637-vmss000001. I0130 08:23:06.534881 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-9a272d4a-ef9b-479a-9011-cbbb470b34f4 to node k8s-agentpool-25433637-vmss000001 successfully I0130 08:23:06.534928 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=21.411238193 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-9a272d4a-ef9b-479a-9011-cbbb470b34f4" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:23:06.534952 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} ... skipping 124 lines ... I0130 08:24:56.695773 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-0f51ea3a-d884-4dca-9d0c-d815f62f45e7. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-0f51ea3a-d884-4dca-9d0c-d815f62f45e7 to node k8s-agentpool-25433637-vmss000001 (vmState Succeeded). 
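One detail visible in the two-entry diskMap above is that several pending attaches for the same node can be folded into a single scale-set VM update (here the disks at LUN 1 and LUN 2 go out in one "attach disk list" call). A rough Go sketch of that batching idea, with made-up types standing in for the real cloud client:

// Rough sketch of folding several queued attaches for one node into a single
// scale-set VM update, as the two-entry diskMap above suggests; vmssUpdater
// and its method are assumptions for the example.
package sketch

import "context"

type vmssUpdater interface {
	// UpdateDataDisks issues one PUT on the scale-set VM with the given
	// diskURI -> LUN assignments added to its data-disk list.
	UpdateDataDisks(ctx context.Context, node string, disks map[string]int32) error
}

type attachRequest struct {
	diskURI string
	lun     int32
}

func flushAttachBatch(ctx context.Context, vm vmssUpdater, node string, pending []attachRequest) error {
	disks := make(map[string]int32, len(pending))
	for _, r := range pending {
		disks[r.diskURI] = r.lun
	}
	// One API call attaches every queued disk, instead of one call per volume.
	return vm.UpdateDataDisks(ctx, node, disks)
}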
I0130 08:24:56.695827 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-0f51ea3a-d884-4dca-9d0c-d815f62f45e7 to node k8s-agentpool-25433637-vmss000001 I0130 08:24:56.695974 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-0f51ea3a-d884-4dca-9d0c-d815f62f45e7 lun 0 to node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-0f51ea3a-d884-4dca-9d0c-d815f62f45e7:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0f51ea3a-d884-4dca-9d0c-d815f62f45e7 false 0})] I0130 08:24:56.696111 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-3020cc46-3ebe-4703-9565-689771626960. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-3020cc46-3ebe-4703-9565-689771626960 to node k8s-agentpool-25433637-vmss000001 (vmState Succeeded). I0130 08:24:56.696216 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-3020cc46-3ebe-4703-9565-689771626960 to node k8s-agentpool-25433637-vmss000001 I0130 08:24:56.696242 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-0f51ea3a-d884-4dca-9d0c-d815f62f45e7:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0f51ea3a-d884-4dca-9d0c-d815f62f45e7 false 0})]) I0130 08:24:56.941329 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-0f51ea3a-d884-4dca-9d0c-d815f62f45e7:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0f51ea3a-d884-4dca-9d0c-d815f62f45e7 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:25:07.051384 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:25:07.051423 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:25:07.051472 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-0f51ea3a-d884-4dca-9d0c-d815f62f45e7 attached to node k8s-agentpool-25433637-vmss000001. 
I0130 08:25:07.051489 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-0f51ea3a-d884-4dca-9d0c-d815f62f45e7 to node k8s-agentpool-25433637-vmss000001 successfully I0130 08:25:07.051550 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.355765336 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-0f51ea3a-d884-4dca-9d0c-d815f62f45e7" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:25:07.051588 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0130 08:25:07.051749 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-3020cc46-3ebe-4703-9565-689771626960 lun 1 to node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-3020cc46-3ebe-4703-9565-689771626960:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-3020cc46-3ebe-4703-9565-689771626960 false 1})] I0130 08:25:07.051791 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-3020cc46-3ebe-4703-9565-689771626960:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-3020cc46-3ebe-4703-9565-689771626960 false 1})]) I0130 08:25:07.268168 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-3020cc46-3ebe-4703-9565-689771626960:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-3020cc46-3ebe-4703-9565-689771626960 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:25:17.424639 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:25:17.424698 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:25:17.424739 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-3020cc46-3ebe-4703-9565-689771626960 attached to node k8s-agentpool-25433637-vmss000001. 
I0130 08:25:17.424754 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-3020cc46-3ebe-4703-9565-689771626960 to node k8s-agentpool-25433637-vmss000001 successfully I0130 08:25:17.424800 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=20.728774456 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-3020cc46-3ebe-4703-9565-689771626960" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:25:17.424822 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} ... skipping 37 lines ... I0130 08:26:03.988458 1 azure_controller_common.go:398] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-3020cc46-3ebe-4703-9565-689771626960 from node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-3020cc46-3ebe-4703-9565-689771626960:pvc-3020cc46-3ebe-4703-9565-689771626960] E0130 08:26:03.988507 1 azure_controller_vmss.go:202] detach azure disk on node(k8s-agentpool-25433637-vmss000001): disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-3020cc46-3ebe-4703-9565-689771626960:pvc-3020cc46-3ebe-4703-9565-689771626960]) not found I0130 08:26:03.988521 1 azure_controller_vmss.go:239] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-3020cc46-3ebe-4703-9565-689771626960:pvc-3020cc46-3ebe-4703-9565-689771626960]) I0130 08:26:05.091662 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0130 08:26:05.091696 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-3020cc46-3ebe-4703-9565-689771626960"} I0130 08:26:05.091814 1 controllerserver.go:317] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-3020cc46-3ebe-4703-9565-689771626960) I0130 08:26:05.091854 1 controllerserver.go:319] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-3020cc46-3ebe-4703-9565-689771626960) returned with failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-3020cc46-3ebe-4703-9565-689771626960) since it's in attaching or detaching state I0130 08:26:05.091918 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=5.99e-05 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-3020cc46-3ebe-4703-9565-689771626960" 
result_code="failed_csi_driver_controller_delete_volume" E0130 08:26:05.091935 1 utils.go:82] GRPC error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-3020cc46-3ebe-4703-9565-689771626960) since it's in attaching or detaching state I0130 08:26:09.223483 1 azure_controller_vmss.go:252] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-3020cc46-3ebe-4703-9565-689771626960:pvc-3020cc46-3ebe-4703-9565-689771626960]) returned with <nil> I0130 08:26:09.223563 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:26:09.223583 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:26:09.223596 1 azure_controller_common.go:422] azureDisk - detach disk(pvc-3020cc46-3ebe-4703-9565-689771626960, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-3020cc46-3ebe-4703-9565-689771626960) succeeded I0130 08:26:09.223607 1 controllerserver.go:480] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-3020cc46-3ebe-4703-9565-689771626960 from node k8s-agentpool-25433637-vmss000001 successfully I0130 08:26:09.223649 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=5.23525198 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-3020cc46-3ebe-4703-9565-689771626960" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" ... skipping 28 lines ... I0130 08:26:59.828812 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-25433637-vmss000001","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-fece4d6d-b425-4f8c-82f6-2ea2b0437e9d","csi.storage.k8s.io/pvc/name":"pvc-zj6gs","csi.storage.k8s.io/pvc/namespace":"azuredisk-8582","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1675065485573-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-fece4d6d-b425-4f8c-82f6-2ea2b0437e9d"} I0130 08:26:59.856452 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0130 08:26:59.856852 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-fece4d6d-b425-4f8c-82f6-2ea2b0437e9d. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-fece4d6d-b425-4f8c-82f6-2ea2b0437e9d to node k8s-agentpool-25433637-vmss000001 (vmState Succeeded). 
I0130 08:26:59.856886 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-fece4d6d-b425-4f8c-82f6-2ea2b0437e9d to node k8s-agentpool-25433637-vmss000001 I0130 08:26:59.856966 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-fece4d6d-b425-4f8c-82f6-2ea2b0437e9d lun 0 to node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-fece4d6d-b425-4f8c-82f6-2ea2b0437e9d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-fece4d6d-b425-4f8c-82f6-2ea2b0437e9d false 0})] I0130 08:26:59.857089 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-fece4d6d-b425-4f8c-82f6-2ea2b0437e9d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-fece4d6d-b425-4f8c-82f6-2ea2b0437e9d false 0})]) I0130 08:27:00.018812 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-fece4d6d-b425-4f8c-82f6-2ea2b0437e9d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-fece4d6d-b425-4f8c-82f6-2ea2b0437e9d false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:27:10.160927 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:27:10.160964 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:27:10.160986 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-fece4d6d-b425-4f8c-82f6-2ea2b0437e9d attached to node k8s-agentpool-25433637-vmss000001. I0130 08:27:10.161002 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-fece4d6d-b425-4f8c-82f6-2ea2b0437e9d to node k8s-agentpool-25433637-vmss000001 successfully I0130 08:27:10.161065 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.304194661 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-fece4d6d-b425-4f8c-82f6-2ea2b0437e9d" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:27:10.161094 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 34 lines ... 
I0130 08:27:43.776202 1 azure_controller_common.go:422] azureDisk - detach disk(pvc-fece4d6d-b425-4f8c-82f6-2ea2b0437e9d, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-fece4d6d-b425-4f8c-82f6-2ea2b0437e9d) succeeded I0130 08:27:43.776252 1 controllerserver.go:480] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-fece4d6d-b425-4f8c-82f6-2ea2b0437e9d from node k8s-agentpool-25433637-vmss000001 successfully I0130 08:27:43.776303 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=15.255016395 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-fece4d6d-b425-4f8c-82f6-2ea2b0437e9d" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:27:43.776320 1 utils.go:84] GRPC response: {} I0130 08:27:43.776472 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-4a04df1a-8398-4f6a-a35d-7826d8b4b468 lun 0 to node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-4a04df1a-8398-4f6a-a35d-7826d8b4b468:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4a04df1a-8398-4f6a-a35d-7826d8b4b468 false 0})] I0130 08:27:43.776531 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-4a04df1a-8398-4f6a-a35d-7826d8b4b468:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4a04df1a-8398-4f6a-a35d-7826d8b4b468 false 0})]) I0130 08:27:43.949081 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-4a04df1a-8398-4f6a-a35d-7826d8b4b468:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4a04df1a-8398-4f6a-a35d-7826d8b4b468 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:27:54.142193 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:27:54.142233 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:27:54.142256 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-4a04df1a-8398-4f6a-a35d-7826d8b4b468 attached to node k8s-agentpool-25433637-vmss000001. 
I0130 08:27:54.142291 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-4a04df1a-8398-4f6a-a35d-7826d8b4b468 to node k8s-agentpool-25433637-vmss000001 successfully I0130 08:27:54.142334 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=13.85169271 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-4a04df1a-8398-4f6a-a35d-7826d8b4b468" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:27:54.142355 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 47 lines ... I0130 08:30:32.641457 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0130 08:30:32.711280 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 24989 I0130 08:30:32.714560 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-264a2e94-7064-4562-b3c5-4e9448ff9996. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-264a2e94-7064-4562-b3c5-4e9448ff9996 to node k8s-agentpool-25433637-vmss000001 (vmState Succeeded). I0130 08:30:32.714596 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-264a2e94-7064-4562-b3c5-4e9448ff9996 to node k8s-agentpool-25433637-vmss000001 I0130 08:30:32.714640 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-264a2e94-7064-4562-b3c5-4e9448ff9996 lun 0 to node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-264a2e94-7064-4562-b3c5-4e9448ff9996:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-264a2e94-7064-4562-b3c5-4e9448ff9996 false 0})] I0130 08:30:32.714689 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-264a2e94-7064-4562-b3c5-4e9448ff9996:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-264a2e94-7064-4562-b3c5-4e9448ff9996 false 0})]) I0130 08:30:32.920339 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-264a2e94-7064-4562-b3c5-4e9448ff9996:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-264a2e94-7064-4562-b3c5-4e9448ff9996 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:30:43.036728 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:30:43.036778 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, 
k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:30:43.036816 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-264a2e94-7064-4562-b3c5-4e9448ff9996 attached to node k8s-agentpool-25433637-vmss000001. I0130 08:30:43.036874 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-264a2e94-7064-4562-b3c5-4e9448ff9996 to node k8s-agentpool-25433637-vmss000001 successfully I0130 08:30:43.036979 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.39516639 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-264a2e94-7064-4562-b3c5-4e9448ff9996" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:30:43.037000 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 31 lines ... I0130 08:31:15.303294 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-25433637-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-515f6c79-6299-48fd-937d-77e03ba9d4ee","csi.storage.k8s.io/pvc/name":"pvc-4pxk2","csi.storage.k8s.io/pvc/namespace":"azuredisk-7726","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1675065485573-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-515f6c79-6299-48fd-937d-77e03ba9d4ee"} I0130 08:31:15.329920 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1501 I0130 08:31:15.330231 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-515f6c79-6299-48fd-937d-77e03ba9d4ee. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-515f6c79-6299-48fd-937d-77e03ba9d4ee to node k8s-agentpool-25433637-vmss000000 (vmState Succeeded). 
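Every completed attach or detach in this log ends with an "Observed Request Latency" record, so per-operation latencies (e.g. the ~10s and ~21s publish_volume calls above) can be pulled straight out of the text. A small Go sketch that relies only on the fields visible in these lines:

// Extract latency_seconds grouped by request type from a controller log like
// the one above. The regexp matches only fields present in these records.
package sketch

import (
	"regexp"
	"strconv"
)

var latencyRe = regexp.MustCompile(`latency_seconds=([0-9.e+-]+) request="([^"]+)"`)

func latenciesByRequest(logText string) map[string][]float64 {
	out := make(map[string][]float64)
	for _, m := range latencyRe.FindAllStringSubmatch(logText, -1) {
		v, err := strconv.ParseFloat(m[1], 64)
		if err != nil {
			continue
		}
		// e.g. "azuredisk_csi_driver_controller_publish_volume" -> [10.25, 21.55, ...]
		out[m[2]] = append(out[m[2]], v)
	}
	return out
}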
I0130 08:31:15.330264 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-515f6c79-6299-48fd-937d-77e03ba9d4ee to node k8s-agentpool-25433637-vmss000000 I0130 08:31:15.330302 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-515f6c79-6299-48fd-937d-77e03ba9d4ee lun 0 to node k8s-agentpool-25433637-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-515f6c79-6299-48fd-937d-77e03ba9d4ee:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-515f6c79-6299-48fd-937d-77e03ba9d4ee false 0})] I0130 08:31:15.330345 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-515f6c79-6299-48fd-937d-77e03ba9d4ee:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-515f6c79-6299-48fd-937d-77e03ba9d4ee false 0})]) I0130 08:31:15.525285 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-515f6c79-6299-48fd-937d-77e03ba9d4ee:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-515f6c79-6299-48fd-937d-77e03ba9d4ee false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:31:25.629858 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000000) successfully I0130 08:31:25.629898 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000000) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:31:25.629925 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-515f6c79-6299-48fd-937d-77e03ba9d4ee attached to node k8s-agentpool-25433637-vmss000000. I0130 08:31:25.629945 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-515f6c79-6299-48fd-937d-77e03ba9d4ee to node k8s-agentpool-25433637-vmss000000 successfully I0130 08:31:25.629995 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.299757152 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-515f6c79-6299-48fd-937d-77e03ba9d4ee" node="k8s-agentpool-25433637-vmss000000" result_code="succeeded" I0130 08:31:25.630024 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 86 lines ... 
I0130 08:33:29.097541 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1222 I0130 08:33:29.097726 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-55d5cf31-a875-4fee-a4c0-cbd51afb90b4. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-55d5cf31-a875-4fee-a4c0-cbd51afb90b4 to node k8s-agentpool-25433637-vmss000001 (vmState Succeeded). I0130 08:33:29.097754 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-55d5cf31-a875-4fee-a4c0-cbd51afb90b4 to node k8s-agentpool-25433637-vmss000001 I0130 08:33:29.099390 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1219 I0130 08:33:29.099627 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-514c1f56-2900-419c-a072-0e091c2671d6. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-514c1f56-2900-419c-a072-0e091c2671d6 to node k8s-agentpool-25433637-vmss000001 (vmState Succeeded). I0130 08:33:29.099652 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-514c1f56-2900-419c-a072-0e091c2671d6 to node k8s-agentpool-25433637-vmss000001 I0130 08:33:30.089227 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-56dcf2ee-f697-46ff-a11e-2c4e14612fbe:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-56dcf2ee-f697-46ff-a11e-2c4e14612fbe false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:33:40.228651 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:33:40.228691 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:33:40.228730 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-56dcf2ee-f697-46ff-a11e-2c4e14612fbe attached to node k8s-agentpool-25433637-vmss000001. 
I0130 08:33:40.228747 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-56dcf2ee-f697-46ff-a11e-2c4e14612fbe to node k8s-agentpool-25433637-vmss000001 successfully I0130 08:33:40.228802 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=11.131696812 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-56dcf2ee-f697-46ff-a11e-2c4e14612fbe" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:33:40.228864 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0130 08:33:40.229005 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-55d5cf31-a875-4fee-a4c0-cbd51afb90b4 lun 1 to node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-514c1f56-2900-419c-a072-0e091c2671d6:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-514c1f56-2900-419c-a072-0e091c2671d6 false 2}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-55d5cf31-a875-4fee-a4c0-cbd51afb90b4:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-55d5cf31-a875-4fee-a4c0-cbd51afb90b4 false 1})] I0130 08:33:40.229062 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-514c1f56-2900-419c-a072-0e091c2671d6:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-514c1f56-2900-419c-a072-0e091c2671d6 false 2}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-55d5cf31-a875-4fee-a4c0-cbd51afb90b4:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-55d5cf31-a875-4fee-a4c0-cbd51afb90b4 false 1})]) I0130 08:33:40.396225 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-514c1f56-2900-419c-a072-0e091c2671d6:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-514c1f56-2900-419c-a072-0e091c2671d6 false 2}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-55d5cf31-a875-4fee-a4c0-cbd51afb90b4:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-55d5cf31-a875-4fee-a4c0-cbd51afb90b4 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:33:50.537217 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:33:50.537265 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:33:50.537317 1 controllerserver.go:413] Attach operation successful: volume 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-55d5cf31-a875-4fee-a4c0-cbd51afb90b4 attached to node k8s-agentpool-25433637-vmss000001. I0130 08:33:50.537335 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-55d5cf31-a875-4fee-a4c0-cbd51afb90b4 to node k8s-agentpool-25433637-vmss000001 successfully I0130 08:33:50.537405 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=21.439639124 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-55d5cf31-a875-4fee-a4c0-cbd51afb90b4" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:33:50.537425 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-514c1f56-2900-419c-a072-0e091c2671d6 lun 2 to node k8s-agentpool-25433637-vmss000001, diskMap: map[] ... skipping 43 lines ... I0130 08:34:34.471161 1 azure_controller_common.go:398] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-55d5cf31-a875-4fee-a4c0-cbd51afb90b4 from node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-55d5cf31-a875-4fee-a4c0-cbd51afb90b4:pvc-55d5cf31-a875-4fee-a4c0-cbd51afb90b4] E0130 08:34:34.471196 1 azure_controller_vmss.go:202] detach azure disk on node(k8s-agentpool-25433637-vmss000001): disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-55d5cf31-a875-4fee-a4c0-cbd51afb90b4:pvc-55d5cf31-a875-4fee-a4c0-cbd51afb90b4]) not found I0130 08:34:34.471382 1 azure_controller_vmss.go:239] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-55d5cf31-a875-4fee-a4c0-cbd51afb90b4:pvc-55d5cf31-a875-4fee-a4c0-cbd51afb90b4]) I0130 08:34:37.393794 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0130 08:34:37.393821 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-55d5cf31-a875-4fee-a4c0-cbd51afb90b4"} I0130 08:34:37.393937 1 controllerserver.go:317] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-55d5cf31-a875-4fee-a4c0-cbd51afb90b4) I0130 08:34:37.393966 1 controllerserver.go:319] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-55d5cf31-a875-4fee-a4c0-cbd51afb90b4) returned with failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-55d5cf31-a875-4fee-a4c0-cbd51afb90b4) since it's in attaching or detaching state I0130 
08:34:37.394023 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=4.7201e-05 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-55d5cf31-a875-4fee-a4c0-cbd51afb90b4" result_code="failed_csi_driver_controller_delete_volume" E0130 08:34:37.394038 1 utils.go:82] GRPC error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-55d5cf31-a875-4fee-a4c0-cbd51afb90b4) since it's in attaching or detaching state I0130 08:34:39.661734 1 azure_controller_vmss.go:252] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-55d5cf31-a875-4fee-a4c0-cbd51afb90b4:pvc-55d5cf31-a875-4fee-a4c0-cbd51afb90b4]) returned with <nil> I0130 08:34:39.661792 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:34:39.661812 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:34:39.661825 1 azure_controller_common.go:422] azureDisk - detach disk(pvc-55d5cf31-a875-4fee-a4c0-cbd51afb90b4, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-55d5cf31-a875-4fee-a4c0-cbd51afb90b4) succeeded I0130 08:34:39.661836 1 controllerserver.go:480] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-55d5cf31-a875-4fee-a4c0-cbd51afb90b4 from node k8s-agentpool-25433637-vmss000001 successfully I0130 08:34:39.661887 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=5.190859628 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-55d5cf31-a875-4fee-a4c0-cbd51afb90b4" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" ... skipping 35 lines ... 
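The unpublish path above stays effectively idempotent: even though the controller logs "disk list(...) not found" because the disk is no longer in the VM's data-disk list, it still issues the update and the detach is reported as succeeded. A rough sketch of that behaviour, with assumed helper methods rather than the driver's real ones:

// Rough sketch of the detach path above: log when the disk is already absent
// from the VM's data-disk list, but still run the (no-op) update so the call
// ends in success either way. vmUpdater and its methods are assumptions.
package sketch

import (
	"context"
	"log"
)

type vmUpdater interface {
	HasDataDisk(ctx context.Context, node, diskURI string) (bool, error)
	RemoveDataDisk(ctx context.Context, node, diskURI string) error
}

func detachDisk(ctx context.Context, vm vmUpdater, node, diskURI string) error {
	attached, err := vm.HasDataDisk(ctx, node, diskURI)
	if err != nil {
		return err
	}
	if !attached {
		// Same situation as the "disk list(...) not found" line above.
		log.Printf("detach %s on %s: disk not found in data-disk list", diskURI, node)
	}
	// Issue the update regardless; removing an absent disk is a no-op, so the
	// detach still completes successfully.
	return vm.RemoveDataDisk(ctx, node, diskURI)
}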
I0130 08:35:42.678838 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-25433637-vmss000001","volume_capability":{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-a6218405-6a38-4adb-94ad-8dcc2bcfa710","csi.storage.k8s.io/pvc/name":"pvc-azuredisk-volume-tester-s7scg-0","csi.storage.k8s.io/pvc/namespace":"azuredisk-1387","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1675065485573-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-a6218405-6a38-4adb-94ad-8dcc2bcfa710"} I0130 08:35:42.704040 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1248 I0130 08:35:42.704540 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-a6218405-6a38-4adb-94ad-8dcc2bcfa710. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-a6218405-6a38-4adb-94ad-8dcc2bcfa710 to node k8s-agentpool-25433637-vmss000001 (vmState Succeeded). I0130 08:35:42.704574 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-a6218405-6a38-4adb-94ad-8dcc2bcfa710 to node k8s-agentpool-25433637-vmss000001 I0130 08:35:42.704646 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-a6218405-6a38-4adb-94ad-8dcc2bcfa710 lun 0 to node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-a6218405-6a38-4adb-94ad-8dcc2bcfa710:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a6218405-6a38-4adb-94ad-8dcc2bcfa710 false 0})] I0130 08:35:42.704867 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-a6218405-6a38-4adb-94ad-8dcc2bcfa710:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a6218405-6a38-4adb-94ad-8dcc2bcfa710 false 0})]) I0130 08:35:42.872938 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-a6218405-6a38-4adb-94ad-8dcc2bcfa710:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a6218405-6a38-4adb-94ad-8dcc2bcfa710 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:35:53.009430 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:35:53.009490 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:35:53.009518 1 controllerserver.go:413] Attach operation successful: volume 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-a6218405-6a38-4adb-94ad-8dcc2bcfa710 attached to node k8s-agentpool-25433637-vmss000001. I0130 08:35:53.009537 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-a6218405-6a38-4adb-94ad-8dcc2bcfa710 to node k8s-agentpool-25433637-vmss000001 successfully I0130 08:35:53.009632 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.305074131 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-a6218405-6a38-4adb-94ad-8dcc2bcfa710" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:35:53.009685 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 59 lines ... I0130 08:38:35.346941 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-25433637-vmss000001","volume_capability":{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-a6218405-6a38-4adb-94ad-8dcc2bcfa710","csi.storage.k8s.io/pvc/name":"pvc-azuredisk-volume-tester-s7scg-0","csi.storage.k8s.io/pvc/namespace":"azuredisk-1387","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1675065485573-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-a6218405-6a38-4adb-94ad-8dcc2bcfa710"} I0130 08:38:35.409596 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1248 I0130 08:38:35.410134 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-a6218405-6a38-4adb-94ad-8dcc2bcfa710. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-a6218405-6a38-4adb-94ad-8dcc2bcfa710 to node k8s-agentpool-25433637-vmss000001 (vmState Succeeded). 
I0130 08:38:35.410174 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-a6218405-6a38-4adb-94ad-8dcc2bcfa710 to node k8s-agentpool-25433637-vmss000001 I0130 08:38:35.410216 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-a6218405-6a38-4adb-94ad-8dcc2bcfa710 lun 0 to node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-a6218405-6a38-4adb-94ad-8dcc2bcfa710:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a6218405-6a38-4adb-94ad-8dcc2bcfa710 false 0})] I0130 08:38:35.410268 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-a6218405-6a38-4adb-94ad-8dcc2bcfa710:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a6218405-6a38-4adb-94ad-8dcc2bcfa710 false 0})]) I0130 08:38:35.597549 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-a6218405-6a38-4adb-94ad-8dcc2bcfa710:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a6218405-6a38-4adb-94ad-8dcc2bcfa710 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:38:45.712549 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:38:45.712589 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:38:45.712612 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-a6218405-6a38-4adb-94ad-8dcc2bcfa710 attached to node k8s-agentpool-25433637-vmss000001. I0130 08:38:45.712629 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-a6218405-6a38-4adb-94ad-8dcc2bcfa710 to node k8s-agentpool-25433637-vmss000001 successfully I0130 08:38:45.712841 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.302548699 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-a6218405-6a38-4adb-94ad-8dcc2bcfa710" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:38:45.712864 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 11 lines ... 
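Every RPC in these logs is bracketed by "GRPC call" / "GRPC request" / "GRPC response" (or "GRPC error") lines from utils.go. One way to produce that pattern is a gRPC unary server interceptor; the sketch below, using the standard google.golang.org/grpc interceptor signature, is illustrative only and may not match the driver's actual implementation.

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
)

// logGRPC logs each RPC in the same spirit as the "GRPC call / GRPC request /
// GRPC response" lines above. Illustrative sketch, not the driver's code.
func logGRPC(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
	log.Printf("GRPC call: %s", info.FullMethod)
	log.Printf("GRPC request: %+v", req)
	resp, err := handler(ctx, req)
	if err != nil {
		log.Printf("GRPC error: %v", err)
	} else {
		log.Printf("GRPC response: %+v", resp)
	}
	return resp, err
}

func main() {
	// The interceptor is registered when constructing the gRPC server.
	_ = grpc.NewServer(grpc.UnaryInterceptor(logGRPC))
}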
I0130 08:39:12.035801 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-25433637-vmss000000","volume_capability":{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-e59df705-dfdd-4553-9a0f-281564510f9e","csi.storage.k8s.io/pvc/name":"pvc-pj552","csi.storage.k8s.io/pvc/namespace":"azuredisk-4801","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1675065485573-8081-disk.csi.azure.com","tags":"disk=test"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-e59df705-dfdd-4553-9a0f-281564510f9e"} I0130 08:39:12.066763 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1213 I0130 08:39:12.067195 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-e59df705-dfdd-4553-9a0f-281564510f9e. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-e59df705-dfdd-4553-9a0f-281564510f9e to node k8s-agentpool-25433637-vmss000000 (vmState Succeeded). I0130 08:39:12.067233 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-e59df705-dfdd-4553-9a0f-281564510f9e to node k8s-agentpool-25433637-vmss000000 I0130 08:39:12.067388 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-e59df705-dfdd-4553-9a0f-281564510f9e lun 0 to node k8s-agentpool-25433637-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-e59df705-dfdd-4553-9a0f-281564510f9e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e59df705-dfdd-4553-9a0f-281564510f9e false 0})] I0130 08:39:12.067468 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-e59df705-dfdd-4553-9a0f-281564510f9e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e59df705-dfdd-4553-9a0f-281564510f9e false 0})]) I0130 08:39:12.267239 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-e59df705-dfdd-4553-9a0f-281564510f9e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e59df705-dfdd-4553-9a0f-281564510f9e false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:39:22.450151 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000000) successfully I0130 08:39:22.450195 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000000) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:39:22.450221 1 controllerserver.go:413] Attach operation successful: volume 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-e59df705-dfdd-4553-9a0f-281564510f9e attached to node k8s-agentpool-25433637-vmss000000. I0130 08:39:22.450239 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-e59df705-dfdd-4553-9a0f-281564510f9e to node k8s-agentpool-25433637-vmss000000 successfully I0130 08:39:22.450287 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.383101987 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-e59df705-dfdd-4553-9a0f-281564510f9e" node="k8s-agentpool-25433637-vmss000000" result_code="succeeded" I0130 08:39:22.450329 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 54 lines ... I0130 08:40:49.855138 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1192 I0130 08:40:49.912079 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 24989 I0130 08:40:49.915840 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-2280ed67-75a0-40db-9485-2063afcfc3ec. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-2280ed67-75a0-40db-9485-2063afcfc3ec to node k8s-agentpool-25433637-vmss000001 (vmState Succeeded). I0130 08:40:49.915873 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-2280ed67-75a0-40db-9485-2063afcfc3ec to node k8s-agentpool-25433637-vmss000001 I0130 08:40:49.915938 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-2280ed67-75a0-40db-9485-2063afcfc3ec lun 0 to node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-2280ed67-75a0-40db-9485-2063afcfc3ec:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2280ed67-75a0-40db-9485-2063afcfc3ec false 0})] I0130 08:40:49.916013 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-2280ed67-75a0-40db-9485-2063afcfc3ec:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2280ed67-75a0-40db-9485-2063afcfc3ec false 0})]) I0130 08:40:50.111430 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-2280ed67-75a0-40db-9485-2063afcfc3ec:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2280ed67-75a0-40db-9485-2063afcfc3ec false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:41:00.256348 1 azure_vmss_cache.go:275] 
DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:41:00.256400 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:41:00.256453 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-2280ed67-75a0-40db-9485-2063afcfc3ec attached to node k8s-agentpool-25433637-vmss000001. I0130 08:41:00.256474 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-2280ed67-75a0-40db-9485-2063afcfc3ec to node k8s-agentpool-25433637-vmss000001 successfully I0130 08:41:00.256561 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.400983013 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-2280ed67-75a0-40db-9485-2063afcfc3ec" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:41:00.256588 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 44 lines ... I0130 08:42:27.058376 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-25433637-vmss000001","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-02bccd9a-842d-4f71-b249-2eec6e16c280","csi.storage.k8s.io/pvc/name":"pvc-azuredisk-volume-tester-48fv4-0","csi.storage.k8s.io/pvc/namespace":"azuredisk-1166","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1675065485573-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-02bccd9a-842d-4f71-b249-2eec6e16c280"} I0130 08:42:27.082365 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1248 I0130 08:42:27.082748 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-02bccd9a-842d-4f71-b249-2eec6e16c280. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-02bccd9a-842d-4f71-b249-2eec6e16c280 to node k8s-agentpool-25433637-vmss000001 (vmState Succeeded). 
I0130 08:42:27.082791 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-02bccd9a-842d-4f71-b249-2eec6e16c280 to node k8s-agentpool-25433637-vmss000001 I0130 08:42:27.082826 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-02bccd9a-842d-4f71-b249-2eec6e16c280 lun 0 to node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-02bccd9a-842d-4f71-b249-2eec6e16c280:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-02bccd9a-842d-4f71-b249-2eec6e16c280 false 0})] I0130 08:42:27.082863 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-02bccd9a-842d-4f71-b249-2eec6e16c280:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-02bccd9a-842d-4f71-b249-2eec6e16c280 false 0})]) I0130 08:42:27.250137 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-02bccd9a-842d-4f71-b249-2eec6e16c280:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-02bccd9a-842d-4f71-b249-2eec6e16c280 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:42:37.348371 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:42:37.348410 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:42:37.348432 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-02bccd9a-842d-4f71-b249-2eec6e16c280 attached to node k8s-agentpool-25433637-vmss000001. I0130 08:42:37.348448 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-02bccd9a-842d-4f71-b249-2eec6e16c280 to node k8s-agentpool-25433637-vmss000001 successfully I0130 08:42:37.348492 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.265748262 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-02bccd9a-842d-4f71-b249-2eec6e16c280" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:42:37.348548 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 19 lines ... 
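Each controller operation above ends with an "Observed Request Latency" record labelled with the request name, resource group, subscription, volume id, node, and a result code. A stdlib-only sketch of that pattern follows: time the operation and emit one labelled observation. The helper name is invented; the real driver feeds these observations into its metrics backend rather than printing them.

package main

import (
	"fmt"
	"time"
)

// observeRequestLatency mimics the "Observed Request Latency" log records:
// it measures an operation and reports latency_seconds plus a result code.
func observeRequestLatency(request, volumeID, node string, op func() error) error {
	start := time.Now()
	err := op()
	result := "succeeded"
	if err != nil {
		result = "failed_" + request
	}
	fmt.Printf("Observed Request Latency latency_seconds=%f request=%q volumeid=%q node=%q result_code=%q\n",
		time.Since(start).Seconds(), request, volumeID, node, result)
	return err
}

func main() {
	_ = observeRequestLatency("azuredisk_csi_driver_controller_publish_volume",
		"/subscriptions/.../disks/pvc-example", "k8s-agentpool-25433637-vmss000001",
		func() error { time.Sleep(10 * time.Millisecond); return nil }, // stand-in for the attach call
	)
}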
I0130 08:43:57.275768 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-25433637-vmss000000","volume_capability":{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-67c3c28a-2b9a-498f-ad0c-2d8221e7f196","csi.storage.k8s.io/pvc/name":"pvc-hx62s","csi.storage.k8s.io/pvc/namespace":"azuredisk-783","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1675065485573-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-67c3c28a-2b9a-498f-ad0c-2d8221e7f196"} I0130 08:43:57.309020 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1192 I0130 08:43:57.309528 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-67c3c28a-2b9a-498f-ad0c-2d8221e7f196. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-67c3c28a-2b9a-498f-ad0c-2d8221e7f196 to node k8s-agentpool-25433637-vmss000000 (vmState Succeeded). I0130 08:43:57.309561 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-67c3c28a-2b9a-498f-ad0c-2d8221e7f196 to node k8s-agentpool-25433637-vmss000000 I0130 08:43:57.309741 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-67c3c28a-2b9a-498f-ad0c-2d8221e7f196 lun 0 to node k8s-agentpool-25433637-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-67c3c28a-2b9a-498f-ad0c-2d8221e7f196:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-67c3c28a-2b9a-498f-ad0c-2d8221e7f196 false 0})] I0130 08:43:57.309788 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-67c3c28a-2b9a-498f-ad0c-2d8221e7f196:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-67c3c28a-2b9a-498f-ad0c-2d8221e7f196 false 0})]) I0130 08:43:57.480485 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-67c3c28a-2b9a-498f-ad0c-2d8221e7f196:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-67c3c28a-2b9a-498f-ad0c-2d8221e7f196 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:44:07.574180 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000000) successfully I0130 08:44:07.574225 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000000) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:44:07.574297 1 controllerserver.go:413] Attach operation successful: volume 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-67c3c28a-2b9a-498f-ad0c-2d8221e7f196 attached to node k8s-agentpool-25433637-vmss000000. I0130 08:44:07.574317 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-67c3c28a-2b9a-498f-ad0c-2d8221e7f196 to node k8s-agentpool-25433637-vmss000000 successfully I0130 08:44:07.574410 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.264854707 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-67c3c28a-2b9a-498f-ad0c-2d8221e7f196" node="k8s-agentpool-25433637-vmss000000" result_code="succeeded" I0130 08:44:07.574431 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 35 lines ... I0130 08:45:15.214730 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-25433637-vmss000001","volume_capability":{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-67c3c28a-2b9a-498f-ad0c-2d8221e7f196","csi.storage.k8s.io/pvc/name":"pvc-hx62s","csi.storage.k8s.io/pvc/namespace":"azuredisk-783","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1675065485573-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-67c3c28a-2b9a-498f-ad0c-2d8221e7f196"} I0130 08:45:15.239632 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1192 I0130 08:45:15.240162 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-67c3c28a-2b9a-498f-ad0c-2d8221e7f196. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-67c3c28a-2b9a-498f-ad0c-2d8221e7f196 to node k8s-agentpool-25433637-vmss000001 (vmState Succeeded). 
I0130 08:45:15.240211 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-67c3c28a-2b9a-498f-ad0c-2d8221e7f196 to node k8s-agentpool-25433637-vmss000001 I0130 08:45:15.240269 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-67c3c28a-2b9a-498f-ad0c-2d8221e7f196 lun 0 to node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-67c3c28a-2b9a-498f-ad0c-2d8221e7f196:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-67c3c28a-2b9a-498f-ad0c-2d8221e7f196 false 0})] I0130 08:45:15.240359 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-67c3c28a-2b9a-498f-ad0c-2d8221e7f196:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-67c3c28a-2b9a-498f-ad0c-2d8221e7f196 false 0})]) I0130 08:45:15.399717 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-67c3c28a-2b9a-498f-ad0c-2d8221e7f196:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-67c3c28a-2b9a-498f-ad0c-2d8221e7f196 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:45:30.543811 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:45:30.543851 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:45:30.543872 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-67c3c28a-2b9a-498f-ad0c-2d8221e7f196 attached to node k8s-agentpool-25433637-vmss000001. I0130 08:45:30.543887 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-67c3c28a-2b9a-498f-ad0c-2d8221e7f196 to node k8s-agentpool-25433637-vmss000001 successfully I0130 08:45:30.543932 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=15.303791833 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-67c3c28a-2b9a-498f-ad0c-2d8221e7f196" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:45:30.543951 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 34 lines ... 
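Each attach above first builds a diskMap keyed by the lower-cased disk URI, with per-disk options (the &{ReadOnly pvc-... false 0} values), and then issues a single VM update for the batch. The sketch below models that batching step under stated assumptions: the option struct's field names are guesses for illustration and do not come from the driver's source.

package main

import (
	"fmt"
	"strings"
)

// attachDiskOptions roughly mirrors the options printed in the diskMap above;
// the field names here are assumptions made for this sketch.
type attachDiskOptions struct {
	CachingMode             string
	DiskName                string
	WriteAcceleratorEnabled bool
	Lun                     int
}

// buildDiskMap batches pending attaches for one node, keyed by the
// lower-cased disk URI, so they can be applied in a single VM update.
func buildDiskMap(pending map[string]attachDiskOptions) map[string]attachDiskOptions {
	diskMap := make(map[string]attachDiskOptions, len(pending))
	for uri, opts := range pending {
		diskMap[strings.ToLower(uri)] = opts
	}
	return diskMap
}

func main() {
	diskMap := buildDiskMap(map[string]attachDiskOptions{
		"/subscriptions/.../resourceGroups/RG/providers/Microsoft.Compute/disks/pvc-example": {
			CachingMode: "ReadOnly", DiskName: "pvc-example", Lun: 0,
		},
	})
	fmt.Printf("attach disk list: %v\n", diskMap)
	// A single VMSS VM update would then be issued with this batch.
}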
I0130 08:47:46.620166 1 azure_vmss_cache.go:327] refresh the cache of NonVmssUniformNodesCache in rg map[kubetest-z5czzjqr:{}] I0130 08:47:46.645880 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 12 I0130 08:47:46.646044 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-b4246378-1bd0-4a70-bf15-985e2c47a701. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-b4246378-1bd0-4a70-bf15-985e2c47a701 to node k8s-agentpool-25433637-vmss000001 (vmState Succeeded). I0130 08:47:46.646088 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-b4246378-1bd0-4a70-bf15-985e2c47a701 to node k8s-agentpool-25433637-vmss000001 I0130 08:47:46.646142 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-b4246378-1bd0-4a70-bf15-985e2c47a701 lun 0 to node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-b4246378-1bd0-4a70-bf15-985e2c47a701:%!s(*provider.AttachDiskOptions=&{None pvc-b4246378-1bd0-4a70-bf15-985e2c47a701 false 0})] I0130 08:47:46.646181 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-b4246378-1bd0-4a70-bf15-985e2c47a701:%!s(*provider.AttachDiskOptions=&{None pvc-b4246378-1bd0-4a70-bf15-985e2c47a701 false 0})]) I0130 08:47:46.798296 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-b4246378-1bd0-4a70-bf15-985e2c47a701:%!s(*provider.AttachDiskOptions=&{None pvc-b4246378-1bd0-4a70-bf15-985e2c47a701 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:47:48.396918 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0130 08:47:48.397188 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-25433637-vmss000000","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":5}},"volume_context":{"cachingmode":"None","csi.storage.k8s.io/pv/name":"pvc-b4246378-1bd0-4a70-bf15-985e2c47a701","csi.storage.k8s.io/pvc/name":"pvc-vpsmf","csi.storage.k8s.io/pvc/namespace":"azuredisk-7920","maxshares":"2","requestedsizegib":"10","skuname":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1675065485573-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-b4246378-1bd0-4a70-bf15-985e2c47a701"} I0130 08:47:48.421648 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1692 I0130 08:47:48.422200 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-b4246378-1bd0-4a70-bf15-985e2c47a701. 
Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-b4246378-1bd0-4a70-bf15-985e2c47a701 to node k8s-agentpool-25433637-vmss000000 (vmState Succeeded). I0130 08:47:48.422235 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-b4246378-1bd0-4a70-bf15-985e2c47a701 to node k8s-agentpool-25433637-vmss000000 I0130 08:47:48.422584 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-b4246378-1bd0-4a70-bf15-985e2c47a701 lun 0 to node k8s-agentpool-25433637-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-b4246378-1bd0-4a70-bf15-985e2c47a701:%!s(*provider.AttachDiskOptions=&{None pvc-b4246378-1bd0-4a70-bf15-985e2c47a701 false 0})] I0130 08:47:48.422631 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-b4246378-1bd0-4a70-bf15-985e2c47a701:%!s(*provider.AttachDiskOptions=&{None pvc-b4246378-1bd0-4a70-bf15-985e2c47a701 false 0})]) I0130 08:47:48.605633 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-b4246378-1bd0-4a70-bf15-985e2c47a701:%!s(*provider.AttachDiskOptions=&{None pvc-b4246378-1bd0-4a70-bf15-985e2c47a701 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:47:58.694395 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000000) successfully I0130 08:47:58.694449 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000000) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:47:58.694468 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-b4246378-1bd0-4a70-bf15-985e2c47a701 attached to node k8s-agentpool-25433637-vmss000000. I0130 08:47:58.694537 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-b4246378-1bd0-4a70-bf15-985e2c47a701 to node k8s-agentpool-25433637-vmss000000 successfully I0130 08:47:58.694581 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.272380724 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-b4246378-1bd0-4a70-bf15-985e2c47a701" node="k8s-agentpool-25433637-vmss000000" result_code="succeeded" I0130 08:47:58.694604 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 63 lines ... 
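Here the same disk (created with maxshares=2, published with block access and cachingmode None) is attached to vmss000001 and then to vmss000000 back-to-back. A hedged sketch of the kind of admission check that makes multi-node attach possible only for shared disks follows; the tracker type is invented for illustration and is not the driver's mechanism.

package main

import (
	"fmt"
)

// sharedDiskTracker records which nodes a disk is attached to and allows a
// new attach only while the count stays under maxShares. Hypothetical
// illustration of the maxshares=2 behaviour seen above.
type sharedDiskTracker struct {
	maxShares int
	attached  map[string]map[string]bool // disk URI -> set of node names
}

func (t *sharedDiskTracker) attach(diskURI, node string) error {
	nodes := t.attached[diskURI]
	if nodes == nil {
		nodes = map[string]bool{}
		t.attached[diskURI] = nodes
	}
	if !nodes[node] && len(nodes) >= t.maxShares {
		return fmt.Errorf("disk %s already attached to %d node(s), maxShares=%d", diskURI, len(nodes), t.maxShares)
	}
	nodes[node] = true
	return nil
}

func main() {
	t := &sharedDiskTracker{maxShares: 2, attached: map[string]map[string]bool{}}
	fmt.Println(t.attach("pvc-shared-example", "vmss000001")) // <nil>
	fmt.Println(t.attach("pvc-shared-example", "vmss000000")) // <nil>, second attach allowed
	fmt.Println(t.attach("pvc-shared-example", "vmss000002")) // rejected, over maxShares
}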
I0130 08:49:35.892597 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-25433637-vmss000001","volume_capability":{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-4f3d9166-35ac-4167-8736-b681fadfae26","csi.storage.k8s.io/pvc/name":"pvc-cwwwz","csi.storage.k8s.io/pvc/namespace":"azuredisk-1092","device-setting/device/queue_depth":"17","device-setting/queue/max_sectors_kb":"211","device-setting/queue/nr_requests":"44","device-setting/queue/read_ahead_kb":"256","device-setting/queue/rotational":"0","device-setting/queue/scheduler":"none","device-setting/queue/wbt_lat_usec":"0","perfProfile":"advanced","requestedsizegib":"10","skuname":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1675065485573-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-4f3d9166-35ac-4167-8736-b681fadfae26"} I0130 08:49:35.918131 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1222 I0130 08:49:35.918741 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-4f3d9166-35ac-4167-8736-b681fadfae26. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-4f3d9166-35ac-4167-8736-b681fadfae26 to node k8s-agentpool-25433637-vmss000001 (vmState Succeeded). I0130 08:49:35.918789 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-4f3d9166-35ac-4167-8736-b681fadfae26 to node k8s-agentpool-25433637-vmss000001 I0130 08:49:35.918939 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-4f3d9166-35ac-4167-8736-b681fadfae26 lun 0 to node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-4f3d9166-35ac-4167-8736-b681fadfae26:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4f3d9166-35ac-4167-8736-b681fadfae26 false 0})] I0130 08:49:35.919163 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-4f3d9166-35ac-4167-8736-b681fadfae26:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4f3d9166-35ac-4167-8736-b681fadfae26 false 0})]) I0130 08:49:36.069545 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-4f3d9166-35ac-4167-8736-b681fadfae26:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4f3d9166-35ac-4167-8736-b681fadfae26 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:49:46.236775 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:49:46.236824 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for 
cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:49:46.236847 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-4f3d9166-35ac-4167-8736-b681fadfae26 attached to node k8s-agentpool-25433637-vmss000001. I0130 08:49:46.236866 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-4f3d9166-35ac-4167-8736-b681fadfae26 to node k8s-agentpool-25433637-vmss000001 successfully I0130 08:49:46.236913 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.318161943 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-4f3d9166-35ac-4167-8736-b681fadfae26" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:49:46.236996 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 39 lines ... I0130 08:50:43.421648 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-25433637-vmss000001","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-69a0b24a-5bbf-4459-8a8a-b293ee30412d","csi.storage.k8s.io/pvc/name":"pvc-azuredisk","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1675065485573-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-69a0b24a-5bbf-4459-8a8a-b293ee30412d"} I0130 08:50:43.453461 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1219 I0130 08:50:43.453952 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-69a0b24a-5bbf-4459-8a8a-b293ee30412d. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-69a0b24a-5bbf-4459-8a8a-b293ee30412d to node k8s-agentpool-25433637-vmss000001 (vmState Succeeded). 
I0130 08:50:43.453985 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-69a0b24a-5bbf-4459-8a8a-b293ee30412d to node k8s-agentpool-25433637-vmss000001 I0130 08:50:43.454048 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-69a0b24a-5bbf-4459-8a8a-b293ee30412d lun 0 to node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-69a0b24a-5bbf-4459-8a8a-b293ee30412d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-69a0b24a-5bbf-4459-8a8a-b293ee30412d false 0})] I0130 08:50:43.454156 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-69a0b24a-5bbf-4459-8a8a-b293ee30412d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-69a0b24a-5bbf-4459-8a8a-b293ee30412d false 0})]) I0130 08:50:43.589699 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-69a0b24a-5bbf-4459-8a8a-b293ee30412d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-69a0b24a-5bbf-4459-8a8a-b293ee30412d false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:50:53.791257 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 24989 I0130 08:50:53.796588 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:50:53.796905 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:50:53.797257 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-69a0b24a-5bbf-4459-8a8a-b293ee30412d attached to node k8s-agentpool-25433637-vmss000001. I0130 08:50:53.797569 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-69a0b24a-5bbf-4459-8a8a-b293ee30412d to node k8s-agentpool-25433637-vmss000001 successfully I0130 08:50:53.797842 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.343880401 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-69a0b24a-5bbf-4459-8a8a-b293ee30412d" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" ... skipping 19 lines ... 
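The earlier request with perfProfile=advanced carried device-setting/* keys in its volume context (queue_depth, max_sectors_kb, nr_requests, read_ahead_kb, rotational, scheduler, wbt_lat_usec). The sketch below shows one plausible way such keys could be applied to a block device's sysfs entries on the node; it is an assumption-laden illustration, not the driver's implementation, and real tuning needs validation plus root privileges.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// applyDeviceSettings writes "device-setting/<relpath>" entries from the
// volume context under the device's sysfs directory, e.g.
// device-setting/queue/max_sectors_kb=211 -> /sys/block/sdc/queue/max_sectors_kb.
func applyDeviceSettings(sysBlockDir string, volumeContext map[string]string) error {
	for key, value := range volumeContext {
		rel, ok := strings.CutPrefix(key, "device-setting/")
		if !ok {
			continue
		}
		target := filepath.Join(sysBlockDir, filepath.Clean(rel))
		if err := os.WriteFile(target, []byte(value), 0o644); err != nil {
			return fmt.Errorf("writing %s=%s: %w", target, value, err)
		}
	}
	return nil
}

func main() {
	ctx := map[string]string{
		"perfProfile":                      "advanced",
		"device-setting/queue/nr_requests": "44",
	}
	// Dry run against a temp dir instead of /sys/block/sdc:
	dir, _ := os.MkdirTemp("", "sysfs")
	_ = os.MkdirAll(filepath.Join(dir, "queue"), 0o755)
	fmt.Println(applyDeviceSettings(dir, ctx))
}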
I0130 08:51:08.495015 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-25433637-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-60c54dca-4f74-4e6e-9eb5-9cffa3bee2f5","csi.storage.k8s.io/pvc/name":"persistent-storage-statefulset-azuredisk-0","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1675065485573-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-60c54dca-4f74-4e6e-9eb5-9cffa3bee2f5"} I0130 08:51:08.520037 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1248 I0130 08:51:08.520533 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-60c54dca-4f74-4e6e-9eb5-9cffa3bee2f5. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-60c54dca-4f74-4e6e-9eb5-9cffa3bee2f5 to node k8s-agentpool-25433637-vmss000000 (vmState Succeeded). I0130 08:51:08.520579 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-60c54dca-4f74-4e6e-9eb5-9cffa3bee2f5 to node k8s-agentpool-25433637-vmss000000 I0130 08:51:08.520616 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-60c54dca-4f74-4e6e-9eb5-9cffa3bee2f5 lun 0 to node k8s-agentpool-25433637-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-60c54dca-4f74-4e6e-9eb5-9cffa3bee2f5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-60c54dca-4f74-4e6e-9eb5-9cffa3bee2f5 false 0})] I0130 08:51:08.520658 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-60c54dca-4f74-4e6e-9eb5-9cffa3bee2f5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-60c54dca-4f74-4e6e-9eb5-9cffa3bee2f5 false 0})]) I0130 08:51:08.702424 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-60c54dca-4f74-4e6e-9eb5-9cffa3bee2f5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-60c54dca-4f74-4e6e-9eb5-9cffa3bee2f5 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:51:18.807652 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000000) successfully I0130 08:51:18.807693 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000000) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:51:18.807719 1 controllerserver.go:413] Attach operation successful: volume 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-60c54dca-4f74-4e6e-9eb5-9cffa3bee2f5 attached to node k8s-agentpool-25433637-vmss000000. I0130 08:51:18.807736 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-60c54dca-4f74-4e6e-9eb5-9cffa3bee2f5 to node k8s-agentpool-25433637-vmss000000 successfully I0130 08:51:18.807785 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.287273508 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-60c54dca-4f74-4e6e-9eb5-9cffa3bee2f5" node="k8s-agentpool-25433637-vmss000000" result_code="succeeded" I0130 08:51:18.807811 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 10 lines ... I0130 08:51:35.137165 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-25433637-vmss000001","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-e7e994dd-d1a5-4192-902d-aaf314b2ca0b","csi.storage.k8s.io/pvc/name":"persistent-storage-statefulset-azuredisk-nonroot-0","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1675065485573-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-e7e994dd-d1a5-4192-902d-aaf314b2ca0b"} I0130 08:51:35.161383 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1256 I0130 08:51:35.161766 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-e7e994dd-d1a5-4192-902d-aaf314b2ca0b. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-e7e994dd-d1a5-4192-902d-aaf314b2ca0b to node k8s-agentpool-25433637-vmss000001 (vmState Succeeded). 
I0130 08:51:35.161801 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-e7e994dd-d1a5-4192-902d-aaf314b2ca0b to node k8s-agentpool-25433637-vmss000001 I0130 08:51:35.161839 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-e7e994dd-d1a5-4192-902d-aaf314b2ca0b lun 1 to node k8s-agentpool-25433637-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-e7e994dd-d1a5-4192-902d-aaf314b2ca0b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e7e994dd-d1a5-4192-902d-aaf314b2ca0b false 1})] I0130 08:51:35.161885 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-e7e994dd-d1a5-4192-902d-aaf314b2ca0b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e7e994dd-d1a5-4192-902d-aaf314b2ca0b false 1})]) I0130 08:51:35.328777 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-z5czzjqr): vm(k8s-agentpool-25433637-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-z5czzjqr/providers/microsoft.compute/disks/pvc-e7e994dd-d1a5-4192-902d-aaf314b2ca0b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e7e994dd-d1a5-4192-902d-aaf314b2ca0b false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0130 08:51:45.451283 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-z5czzjqr, k8s-agentpool-25433637-vmss, k8s-agentpool-25433637-vmss000001) successfully I0130 08:51:45.451348 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-25433637-vmss, kubetest-z5czzjqr, k8s-agentpool-25433637-vmss000001) for cacheKey(kubetest-z5czzjqr/k8s-agentpool-25433637-vmss) updated successfully I0130 08:51:45.451367 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-e7e994dd-d1a5-4192-902d-aaf314b2ca0b attached to node k8s-agentpool-25433637-vmss000001. I0130 08:51:45.451382 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-e7e994dd-d1a5-4192-902d-aaf314b2ca0b to node k8s-agentpool-25433637-vmss000001 successfully I0130 08:51:45.451436 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.28965723 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-z5czzjqr" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-e7e994dd-d1a5-4192-902d-aaf314b2ca0b" node="k8s-agentpool-25433637-vmss000001" result_code="succeeded" I0130 08:51:45.451458 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} ... skipping 12 lines ... 
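Note that this attach lands on lun 1 and the response carries {"publish_context":{"LUN":"1"}}, because vmss000001 already holds another data disk at LUN 0. A minimal sketch of lowest-free-LUN selection follows; the helper is hypothetical, and the real driver derives the used LUNs from the VM's actual data-disk list rather than a map literal.

package main

import (
	"fmt"
)

// nextFreeLUN returns the lowest LUN not already used on the node, up to
// maxLUNs. Illustrative only.
func nextFreeLUN(usedLUNs map[int]bool, maxLUNs int) (int, error) {
	for lun := 0; lun < maxLUNs; lun++ {
		if !usedLUNs[lun] {
			return lun, nil
		}
	}
	return -1, fmt.Errorf("no free LUN on node (max %d)", maxLUNs)
}

func main() {
	// One data disk already sits at LUN 0, so the next attach lands on LUN 1,
	// which is what the publish_context above reports.
	lun, err := nextFreeLUN(map[int]bool{0: true}, 64)
	fmt.Println(lun, err) // 1 <nil>
	publishContext := map[string]string{"LUN": fmt.Sprintf("%d", lun)}
	fmt.Println(publishContext)
}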
Platform: linux/amd64 Topology Key: topology.disk.csi.azure.com/zone Streaming logs below: I0130 07:58:07.455586 1 azuredisk.go:175] driver userAgent: disk.csi.azure.com/v1.26.2-3d368a1217946b8b3c3bd47a4f8fe2de87227460 e2e-test I0130 07:58:07.457448 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider I0130 07:58:07.488403 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0130 07:58:07.488430 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider I0130 07:58:07.488439 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0130 07:58:07.488478 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully I0130 07:58:07.489328 1 azure_auth.go:253] Using AzurePublicCloud environment I0130 07:58:07.489380 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token I0130 07:58:07.489414 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 ... skipping 25 lines ... I0130 07:58:07.489819 1 azure_blobclient.go:67] Azure BlobClient using API version: 2021-09-01 I0130 07:58:07.489835 1 azure_vmasclient.go:70] Azure AvailabilitySetsClient (read ops) using rate limit config: QPS=6, bucket=20 I0130 07:58:07.489842 1 azure_vmasclient.go:73] Azure AvailabilitySetsClient (write ops) using rate limit config: QPS=100, bucket=1000 I0130 07:58:07.489935 1 azure.go:1007] attach/detach disk operation rate limit QPS: 6.000000, Bucket: 10 I0130 07:58:07.489970 1 azuredisk.go:193] disable UseInstanceMetadata for controller I0130 07:58:07.489980 1 azuredisk.go:205] cloud: AzurePublicCloud, location: westus2, rg: kubetest-z5czzjqr, VMType: vmss, PrimaryScaleSetName: k8s-agentpool-25433637-vmss, PrimaryAvailabilitySetName: , DisableAvailabilitySetNodes: false I0130 07:58:07.494172 1 mount_linux.go:287] 'umount /tmp/kubelet-detect-safe-umount1882093550' failed with: exit status 32, output: umount: /tmp/kubelet-detect-safe-umount1882093550: must be superuser to unmount. I0130 07:58:07.494271 1 mount_linux.go:289] Detected umount with unsafe 'not mounted' behavior I0130 07:58:07.494348 1 driver.go:81] Enabling controller service capability: CREATE_DELETE_VOLUME I0130 07:58:07.494358 1 driver.go:81] Enabling controller service capability: PUBLISH_UNPUBLISH_VOLUME I0130 07:58:07.494364 1 driver.go:81] Enabling controller service capability: CREATE_DELETE_SNAPSHOT I0130 07:58:07.494370 1 driver.go:81] Enabling controller service capability: CLONE_VOLUME I0130 07:58:07.494376 1 driver.go:81] Enabling controller service capability: EXPAND_VOLUME ... skipping 62 lines ... 
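The startup log above shows the credential fallback order: try the kube-system/azure-cloud-provider secret, and when it is not found, fall back to the AZURE_CREDENTIAL_FILE environment variable or the default /etc/kubernetes/azure.json. A sketch of that resolution order follows; the secret lookup is stubbed out (the real driver uses a Kubernetes client) and the function name is invented.

package main

import (
	"fmt"
	"os"
)

const defaultAzureCredentialFile = "/etc/kubernetes/azure.json"

// resolveCloudConfigPath mirrors the startup fallback seen above: prefer the
// kube-system/azure-cloud-provider secret, else AZURE_CREDENTIAL_FILE, else
// the default path.
func resolveCloudConfigPath(readSecret func() ([]byte, error)) (source string, path string) {
	if _, err := readSecret(); err == nil {
		return "secret", "kube-system/azure-cloud-provider"
	}
	if p := os.Getenv("AZURE_CREDENTIAL_FILE"); p != "" {
		return "env", p
	}
	return "default", defaultAzureCredentialFile
}

func main() {
	secretNotFound := func() ([]byte, error) { return nil, fmt.Errorf(`secrets "azure-cloud-provider" not found`) }
	source, path := resolveCloudConfigPath(secretNotFound)
	fmt.Printf("read cloud config from %s: %s\n", source, path)
}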
Platform: linux/amd64 Topology Key: topology.disk.csi.azure.com/zone Streaming logs below: I0130 07:57:58.790370 1 azuredisk.go:175] driver userAgent: disk.csi.azure.com/v1.26.2-3d368a1217946b8b3c3bd47a4f8fe2de87227460 e2e-test I0130 07:57:58.790942 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider I0130 07:57:58.831442 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0130 07:57:58.831472 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider I0130 07:57:58.831482 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0130 07:57:58.831513 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully I0130 07:57:58.832284 1 azure_auth.go:253] Using AzurePublicCloud environment I0130 07:57:58.832346 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token I0130 07:57:58.832392 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 ... skipping 68 lines ... Platform: linux/amd64 Topology Key: topology.disk.csi.azure.com/zone Streaming logs below: I0130 07:58:03.751025 1 azuredisk.go:175] driver userAgent: disk.csi.azure.com/v1.26.2-3d368a1217946b8b3c3bd47a4f8fe2de87227460 e2e-test I0130 07:58:03.751647 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider I0130 07:58:03.780037 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0130 07:58:03.780060 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider I0130 07:58:03.780069 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0130 07:58:03.780098 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully I0130 07:58:03.780913 1 azure_auth.go:253] Using AzurePublicCloud environment I0130 07:58:03.780965 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token I0130 07:58:03.781001 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 ... skipping 243 lines ... 
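Note: each driver pod above logs the same credential lookup order: try the kube-system/azure-cloud-provider secret, and on failure fall back to the file named by AZURE_CREDENTIAL_FILE (default /etc/kubernetes/azure.json). A small client-go sketch of that fallback; the function name and the "cloud-config" secret key are assumptions for illustration only:

    package main

    import (
        "context"
        "fmt"
        "os"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // loadCloudConfig prefers the in-cluster secret and falls back to the
    // credential file path from the environment, mirroring the log above.
    func loadCloudConfig(ctx context.Context, cs kubernetes.Interface) ([]byte, error) {
        secret, err := cs.CoreV1().Secrets("kube-system").Get(ctx, "azure-cloud-provider", metav1.GetOptions{})
        if err == nil {
            if cfg, ok := secret.Data["cloud-config"]; ok { // key name assumed
                return cfg, nil
            }
        }
        path := os.Getenv("AZURE_CREDENTIAL_FILE")
        if path == "" {
            path = "/etc/kubernetes/azure.json"
        }
        return os.ReadFile(path)
    }

    func main() {
        fmt.Println("loadCloudConfig requires a kubernetes.Interface client; see the sketch above")
    }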
I0130 08:48:56.198444 1 utils.go:84] GRPC response: {} I0130 08:48:56.233950 1 utils.go:77] GRPC call: /csi.v1.Node/NodeUnstageVolume I0130 08:48:56.233972 1 utils.go:78] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-b4246378-1bd0-4a70-bf15-985e2c47a701","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-b4246378-1bd0-4a70-bf15-985e2c47a701"} I0130 08:48:56.234025 1 nodeserver.go:201] NodeUnstageVolume: unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-b4246378-1bd0-4a70-bf15-985e2c47a701 I0130 08:48:56.234050 1 mount_helper_common.go:93] unmounting "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-b4246378-1bd0-4a70-bf15-985e2c47a701" (corruptedMount: false, mounterCanSkipMountPointChecks: true) I0130 08:48:56.234063 1 mount_linux.go:362] Unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-b4246378-1bd0-4a70-bf15-985e2c47a701 I0130 08:48:56.236635 1 mount_linux.go:375] ignoring 'not mounted' error for /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-b4246378-1bd0-4a70-bf15-985e2c47a701 I0130 08:48:56.236653 1 mount_helper_common.go:150] Warning: deleting path "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-b4246378-1bd0-4a70-bf15-985e2c47a701" I0130 08:48:56.236735 1 nodeserver.go:206] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-b4246378-1bd0-4a70-bf15-985e2c47a701 successfully I0130 08:48:56.236754 1 utils.go:84] GRPC response: {} I0130 08:51:24.207221 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0130 08:51:24.207246 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-60c54dca-4f74-4e6e-9eb5-9cffa3bee2f5/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-60c54dca-4f74-4e6e-9eb5-9cffa3bee2f5","csi.storage.k8s.io/pvc/name":"persistent-storage-statefulset-azuredisk-0","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1675065485573-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-60c54dca-4f74-4e6e-9eb5-9cffa3bee2f5"} I0130 08:51:25.882537 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ ... skipping 33 lines ... 
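Note: the NodeUnstageVolume sequence above (unmount, tolerate a "not mounted" error, then delete the staging path) is the standard idempotent cleanup pattern, which k8s.io/mount-utils packages as CleanupMountPoint. A hedged sketch of that pattern, not the driver's exact code:

    package main

    import (
        "fmt"

        mount "k8s.io/mount-utils"
    )

    // unstage removes the staging mount point, ignoring "not mounted" so the
    // call stays idempotent when kubelet retries it.
    func unstage(stagingPath string) error {
        mounter := mount.New("")
        // The final argument requests the extensive (safer) mount-point check.
        return mount.CleanupMountPoint(stagingPath, mounter, true)
    }

    func main() {
        fmt.Println(unstage("/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/example"))
    }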
Platform: linux/amd64 Topology Key: topology.disk.csi.azure.com/zone Streaming logs below: I0130 07:58:01.323300 1 azuredisk.go:175] driver userAgent: disk.csi.azure.com/v1.26.2-3d368a1217946b8b3c3bd47a4f8fe2de87227460 e2e-test I0130 07:58:01.323954 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider I0130 07:58:01.369358 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0130 07:58:01.369387 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider I0130 07:58:01.369398 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0130 07:58:01.369449 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully I0130 07:58:01.371270 1 azure_auth.go:253] Using AzurePublicCloud environment I0130 07:58:01.371352 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token I0130 07:58:01.371415 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 ... skipping 188 lines ... I0130 08:03:21.110916 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0130 08:03:21.127823 1 mount_linux.go:570] Output: "" I0130 08:03:21.127874 1 mount_linux.go:529] Disk "/dev/disk/azure/scsi1/lun0" appears to be unformatted, attempting to format as type: "ext4" with options: [-F -m0 /dev/disk/azure/scsi1/lun0] I0130 08:03:21.589171 1 mount_linux.go:539] Disk successfully formatted (mkfs): ext4 - /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount I0130 08:03:21.589211 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount I0130 08:03:21.589238 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount) E0130 08:03:21.608910 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. 
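Note: the failing sequence above is the usual format-then-mount path: blkid finds no filesystem, mkfs.ext4 succeeds, and the mount itself fails with exit status 32 because the request carries what appears to be a deliberately invalid option set from the test (invalid,mount,options). The same flow can be reproduced with k8s.io/mount-utils; a minimal sketch under those assumptions, with the device path, target, and options copied from the log rather than from the driver's source:

    package main

    import (
        "fmt"

        mount "k8s.io/mount-utils"
        utilexec "k8s.io/utils/exec"
    )

    func main() {
        mounter := &mount.SafeFormatAndMount{
            Interface: mount.New(""),
            Exec:      utilexec.New(),
        }
        // FormatAndMount probes with blkid, formats if the device is empty,
        // then mounts; bogus -o options surface as mount exit status 32.
        err := mounter.FormatAndMount(
            "/dev/disk/azure/scsi1/lun0",
            "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/example/globalmount",
            "ext4",
            []string{"invalid", "mount", "options", "defaults"},
        )
        fmt.Println(err)
    }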
E0130 08:03:21.609011 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. I0130 08:03:22.182423 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0130 08:03:22.182450 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-11934eb2-6d55-4056-a6ad-52e90632a93d","csi.storage.k8s.io/pvc/name":"pvc-69g2b","csi.storage.k8s.io/pvc/namespace":"azuredisk-5466","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1675065485573-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d"} I0130 08:03:23.994048 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0130 08:03:23.994101 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. 
perfProfile none accountType StandardSSD_ZRS I0130 08:03:23.994522 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount with mount options([invalid mount options]) I0130 08:03:23.994552 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0130 08:03:24.003278 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=ext4\n" I0130 08:03:24.003310 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0130 08:03:24.014917 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount I0130 08:03:24.014966 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount) E0130 08:03:24.029271 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. E0130 08:03:24.029329 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. 
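Note: each failed attempt is surfaced to kubelet as a gRPC status with code Internal, and kubelet then retries NodeStageVolume, as the repeated timestamps above show. A hedged sketch of how such an error is typically wrapped with google.golang.org/grpc/status; the helper name and message are illustrative, not the driver's exact call:

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // stageError wraps a low-level mount failure in the gRPC status that a
    // CSI NodeStageVolume handler returns to kubelet.
    func stageError(device, target string, err error) error {
        return status.Errorf(codes.Internal,
            "could not format %s and mount it at %s: %v", device, target, err)
    }

    func main() {
        fmt.Println(stageError("/dev/disk/azure/scsi1/lun0",
            "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/example/globalmount",
            fmt.Errorf("exit status 32")))
    }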
I0130 08:03:25.062774 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0130 08:03:25.062803 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-11934eb2-6d55-4056-a6ad-52e90632a93d","csi.storage.k8s.io/pvc/name":"pvc-69g2b","csi.storage.k8s.io/pvc/namespace":"azuredisk-5466","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1675065485573-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d"} I0130 08:03:26.867067 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0130 08:03:26.867106 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType StandardSSD_ZRS I0130 08:03:26.867444 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount with mount options([invalid mount options]) I0130 08:03:26.867461 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0130 08:03:26.880787 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=ext4\n" I0130 08:03:26.880831 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0130 08:03:26.897388 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount I0130 08:03:26.897881 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount) E0130 08:03:26.916284 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. 
E0130 08:03:26.916355 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. I0130 08:03:28.945978 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0130 08:03:28.946018 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-11934eb2-6d55-4056-a6ad-52e90632a93d","csi.storage.k8s.io/pvc/name":"pvc-69g2b","csi.storage.k8s.io/pvc/namespace":"azuredisk-5466","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1675065485573-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d"} I0130 08:03:30.741462 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0130 08:03:30.741512 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. 
perfProfile none accountType StandardSSD_ZRS I0130 08:03:30.742026 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount with mount options([invalid mount options]) I0130 08:03:30.742059 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0130 08:03:30.752149 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=ext4\n" I0130 08:03:30.752177 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0130 08:03:30.767511 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount I0130 08:03:30.767809 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount) E0130 08:03:30.785828 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. E0130 08:03:30.785883 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-11934eb2-6d55-4056-a6ad-52e90632a93d/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. I0130 08:04:28.873930 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0130 08:04:28.873957 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-cd8d3b0f-331d-4879-958e-874effc3a1ef","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-cd8d3b0f-331d-4879-958e-874effc3a1ef","csi.storage.k8s.io/pvc/name":"pvc-fs7kd","csi.storage.k8s.io/pvc/namespace":"azuredisk-2790","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1675065485573-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-cd8d3b0f-331d-4879-958e-874effc3a1ef"} I0130 08:04:30.646065 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0130 08:04:30.646114 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. 
perfProfile none accountType StandardSSD_ZRS I0130 08:04:30.646134 1 utils.go:84] GRPC response: {} I0130 08:04:30.667082 1 utils.go:77] GRPC call: /csi.v1.Node/NodePublishVolume ... skipping 16 lines ... I0130 08:04:35.913597 1 utils.go:84] GRPC response: {} I0130 08:04:35.943368 1 utils.go:77] GRPC call: /csi.v1.Node/NodeUnstageVolume I0130 08:04:35.943395 1 utils.go:78] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-cd8d3b0f-331d-4879-958e-874effc3a1ef","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-cd8d3b0f-331d-4879-958e-874effc3a1ef"} I0130 08:04:35.943476 1 nodeserver.go:201] NodeUnstageVolume: unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-cd8d3b0f-331d-4879-958e-874effc3a1ef I0130 08:04:35.943501 1 mount_helper_common.go:93] unmounting "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-cd8d3b0f-331d-4879-958e-874effc3a1ef" (corruptedMount: false, mounterCanSkipMountPointChecks: true) I0130 08:04:35.943516 1 mount_linux.go:362] Unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-cd8d3b0f-331d-4879-958e-874effc3a1ef I0130 08:04:35.945614 1 mount_linux.go:375] ignoring 'not mounted' error for /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-cd8d3b0f-331d-4879-958e-874effc3a1ef I0130 08:04:35.945646 1 mount_helper_common.go:150] Warning: deleting path "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-cd8d3b0f-331d-4879-958e-874effc3a1ef" I0130 08:04:35.945783 1 nodeserver.go:206] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-cd8d3b0f-331d-4879-958e-874effc3a1ef successfully I0130 08:04:35.945798 1 utils.go:84] GRPC response: {} I0130 08:05:36.991901 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0130 08:05:36.991931 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6","csi.storage.k8s.io/pvc/name":"pvc-4zhb8","csi.storage.k8s.io/pvc/namespace":"azuredisk-5356","requestedsizegib":"10","resourceGroup":"azuredisk-csi-driver-test-d493fc61-a074-11ed-822b-967d0a096fd9","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1675065485573-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-d493fc61-a074-11ed-822b-967d0a096fd9/providers/Microsoft.Compute/disks/pvc-69eaf15a-209e-4c81-b911-39f51e3a30b6"} I0130 08:05:38.798376 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ ... skipping 648 lines ... 
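Note the contrast with the raw-block volume staged just above (AccessType Block): the plugin locates the device and returns an empty response, with no mkfs or mount at all. A short illustrative sketch of that branching on the CSI volume capability, using the Go bindings; it is not the driver's handler:

    package main

    import (
        "fmt"

        csi "github.com/container-storage-interface/spec/lib/go/csi"
    )

    // describeStaging shows how a NodeStageVolume handler typically branches:
    // raw block volumes skip filesystem work entirely.
    func describeStaging(cap *csi.VolumeCapability) string {
        switch cap.GetAccessType().(type) {
        case *csi.VolumeCapability_Block:
            return "block: locate the device, nothing to format or mount"
        case *csi.VolumeCapability_Mount:
            return "mount: format if needed, then mount at the staging path"
        default:
            return "unknown access type"
        }
    }

    func main() {
        fmt.Println(describeStaging(&csi.VolumeCapability{
            AccessType: &csi.VolumeCapability_Block{Block: &csi.VolumeCapability_BlockVolume{}},
        }))
    }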
I0130 08:25:34.858720 1 utils.go:84] GRPC response: {} I0130 08:25:34.931218 1 utils.go:77] GRPC call: /csi.v1.Node/NodeUnstageVolume I0130 08:25:34.931516 1 utils.go:78] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-3020cc46-3ebe-4703-9565-689771626960","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-3020cc46-3ebe-4703-9565-689771626960"} I0130 08:25:34.931632 1 nodeserver.go:201] NodeUnstageVolume: unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-3020cc46-3ebe-4703-9565-689771626960 I0130 08:25:34.931788 1 mount_helper_common.go:93] unmounting "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-3020cc46-3ebe-4703-9565-689771626960" (corruptedMount: false, mounterCanSkipMountPointChecks: true) I0130 08:25:34.931856 1 mount_linux.go:362] Unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-3020cc46-3ebe-4703-9565-689771626960 I0130 08:25:34.934150 1 mount_linux.go:375] ignoring 'not mounted' error for /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-3020cc46-3ebe-4703-9565-689771626960 I0130 08:25:34.934170 1 mount_helper_common.go:150] Warning: deleting path "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-3020cc46-3ebe-4703-9565-689771626960" I0130 08:25:34.934262 1 nodeserver.go:206] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-3020cc46-3ebe-4703-9565-689771626960 successfully I0130 08:25:34.934282 1 utils.go:84] GRPC response: {} I0130 08:27:15.654462 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0130 08:27:15.654504 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-fece4d6d-b425-4f8c-82f6-2ea2b0437e9d/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-fece4d6d-b425-4f8c-82f6-2ea2b0437e9d","csi.storage.k8s.io/pvc/name":"pvc-zj6gs","csi.storage.k8s.io/pvc/namespace":"azuredisk-8582","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1675065485573-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-fece4d6d-b425-4f8c-82f6-2ea2b0437e9d"} I0130 08:27:17.462563 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ ... skipping 537 lines ... 
I0130 08:48:56.599203 1 utils.go:84] GRPC response: {} I0130 08:48:56.628986 1 utils.go:77] GRPC call: /csi.v1.Node/NodeUnstageVolume I0130 08:48:56.629009 1 utils.go:78] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-b4246378-1bd0-4a70-bf15-985e2c47a701","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-b4246378-1bd0-4a70-bf15-985e2c47a701"} I0130 08:48:56.629083 1 nodeserver.go:201] NodeUnstageVolume: unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-b4246378-1bd0-4a70-bf15-985e2c47a701 I0130 08:48:56.629105 1 mount_helper_common.go:93] unmounting "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-b4246378-1bd0-4a70-bf15-985e2c47a701" (corruptedMount: false, mounterCanSkipMountPointChecks: true) I0130 08:48:56.629134 1 mount_linux.go:362] Unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-b4246378-1bd0-4a70-bf15-985e2c47a701 I0130 08:48:56.631734 1 mount_linux.go:375] ignoring 'not mounted' error for /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-b4246378-1bd0-4a70-bf15-985e2c47a701 I0130 08:48:56.631758 1 mount_helper_common.go:150] Warning: deleting path "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-b4246378-1bd0-4a70-bf15-985e2c47a701" I0130 08:48:56.631838 1 nodeserver.go:206] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-b4246378-1bd0-4a70-bf15-985e2c47a701 successfully I0130 08:48:56.631852 1 utils.go:84] GRPC response: {} I0130 08:49:51.689040 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0130 08:49:51.689071 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4f3d9166-35ac-4167-8736-b681fadfae26/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-4f3d9166-35ac-4167-8736-b681fadfae26","csi.storage.k8s.io/pvc/name":"pvc-cwwwz","csi.storage.k8s.io/pvc/namespace":"azuredisk-1092","device-setting/device/queue_depth":"17","device-setting/queue/max_sectors_kb":"211","device-setting/queue/nr_requests":"44","device-setting/queue/read_ahead_kb":"256","device-setting/queue/rotational":"0","device-setting/queue/scheduler":"none","device-setting/queue/wbt_lat_usec":"0","perfProfile":"advanced","requestedsizegib":"10","skuname":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1675065485573-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-z5czzjqr/providers/Microsoft.Compute/disks/pvc-4f3d9166-35ac-4167-8736-b681fadfae26"} I0130 08:49:53.521298 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ ... skipping 665 lines ... 
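Note: the last NodeStageVolume request above carries perfProfile "advanced" plus device-setting/* attributes (queue depth, scheduler, read_ahead_kb, and so on), which the driver is expected to apply as block-layer tunables on the resolved device. A hedged sketch of what applying such settings generally looks like; the sysfs paths are standard Linux queue knobs, but the mapping logic and function name are illustrative, not the driver's:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // applyQueueSettings writes block-layer tunables for a device such as "sdc".
    // Keys mirror the device-setting/queue/* attributes seen in the request.
    func applyQueueSettings(device string, settings map[string]string) error {
        for name, value := range settings {
            path := filepath.Join("/sys/block", device, "queue", name)
            if err := os.WriteFile(path, []byte(value), 0644); err != nil {
                return fmt.Errorf("writing %s=%s: %w", path, value, err)
            }
        }
        return nil
    }

    func main() {
        err := applyQueueSettings("sdc", map[string]string{
            "read_ahead_kb":  "256",
            "nr_requests":    "44",
            "max_sectors_kb": "211",
            "scheduler":      "none",
        })
        fmt.Println(err)
    }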
cloudprovider_azure_op_duration_seconds_bucket{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="kubetest-z5czzjqr",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e",le="300"} 51 cloudprovider_azure_op_duration_seconds_bucket{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="kubetest-z5czzjqr",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e",le="600"} 51 cloudprovider_azure_op_duration_seconds_bucket{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="kubetest-z5czzjqr",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e",le="1200"} 51 cloudprovider_azure_op_duration_seconds_bucket{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="kubetest-z5czzjqr",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e",le="+Inf"} 51 cloudprovider_azure_op_duration_seconds_sum{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="kubetest-z5czzjqr",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e"} 727.078351885 cloudprovider_azure_op_duration_seconds_count{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="kubetest-z5czzjqr",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e"} 51 # HELP cloudprovider_azure_op_failure_count [ALPHA] Number of failed Azure service operations # TYPE cloudprovider_azure_op_failure_count counter cloudprovider_azure_op_failure_count{request="azuredisk_csi_driver_controller_delete_volume",resource_group="kubetest-z5czzjqr",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e"} 6 # HELP disabled_metric_total [ALPHA] The count of disabled metrics. # TYPE disabled_metric_total counter disabled_metric_total 0 # HELP go_cgo_go_to_c_calls_calls_total Count of calls made from Go to C by the current process. ... skipping 67 lines ... # HELP go_gc_heap_objects_objects Number of objects, live or unswept, occupying heap memory. # TYPE go_gc_heap_objects_objects gauge go_gc_heap_objects_objects 52214 # HELP go_gc_heap_tiny_allocs_objects_total Count of small allocations that are packed together into blocks. These allocations are counted separately from other allocations because each individual allocation is not tracked by the runtime, only their block. Each block is already accounted for in allocs-by-size and frees-by-size. # TYPE go_gc_heap_tiny_allocs_objects_total counter go_gc_heap_tiny_allocs_objects_total 48649 # HELP go_gc_limiter_last_enabled_gc_cycle GC cycle the last time the GC CPU limiter was enabled. This metric is useful for diagnosing the root cause of an out-of-memory error, because the limiter trades memory for CPU time when the GC's CPU time gets too high. This is most likely to occur with use of SetMemoryLimit. The first GC cycle is cycle 1, so a value of 0 indicates that it was never enabled. # TYPE go_gc_limiter_last_enabled_gc_cycle gauge go_gc_limiter_last_enabled_gc_cycle 0 # HELP go_gc_pauses_seconds Distribution individual GC-related stop-the-world pause latencies. # TYPE go_gc_pauses_seconds histogram go_gc_pauses_seconds_bucket{le="9.999999999999999e-10"} 0 go_gc_pauses_seconds_bucket{le="9.999999999999999e-09"} 0 ... skipping 259 lines ... # HELP go_gc_heap_objects_objects Number of objects, live or unswept, occupying heap memory. 
# TYPE go_gc_heap_objects_objects gauge
go_gc_heap_objects_objects 35948
# HELP go_gc_heap_tiny_allocs_objects_total Count of small allocations that are packed together into blocks. These allocations are counted separately from other allocations because each individual allocation is not tracked by the runtime, only their block. Each block is already accounted for in allocs-by-size and frees-by-size.
# TYPE go_gc_heap_tiny_allocs_objects_total counter
go_gc_heap_tiny_allocs_objects_total 4720
# HELP go_gc_limiter_last_enabled_gc_cycle GC cycle the last time the GC CPU limiter was enabled. This metric is useful for diagnosing the root cause of an out-of-memory error, because the limiter trades memory for CPU time when the GC's CPU time gets too high. This is most likely to occur with use of SetMemoryLimit. The first GC cycle is cycle 1, so a value of 0 indicates that it was never enabled.
# TYPE go_gc_limiter_last_enabled_gc_cycle gauge
go_gc_limiter_last_enabled_gc_cycle 0
# HELP go_gc_pauses_seconds Distribution individual GC-related stop-the-world pause latencies.
# TYPE go_gc_pauses_seconds histogram
go_gc_pauses_seconds_bucket{le="9.999999999999999e-10"} 0
go_gc_pauses_seconds_bucket{le="9.999999999999999e-09"} 0
... skipping 272 lines ...
[AfterSuite] /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:165
------------------------------
Summarizing 1 Failure:
[FAIL] Dynamic Provisioning [multi-az] [It] should create a pod, write to its pv, take a volume snapshot, overwrite data in original pv, create another pod from the snapshot, and read unaltered original data from original pv[disk.csi.azure.com]
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:823
Ran 26 of 66 Specs in 3906.395 seconds
FAIL! -- 25 Passed | 1 Failed | 0 Pending | 40 Skipped
You're using deprecated Ginkgo functionality:
=============================================
Support for custom reporters has been removed in V2. Please read the documentation linked to below for Ginkgo's new behavior and for a migration path:
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#removed-custom-reporters
To silence deprecations that can be silenced set the following environment variable:
ACK_GINKGO_DEPRECATIONS=2.4.0
--- FAIL: TestE2E (3906.40s)
FAIL
FAIL sigs.k8s.io/azuredisk-csi-driver/test/e2e 3906.468s
FAIL
make: *** [Makefile:261: e2e-test] Error 1
2023/01/30 08:52:48 process.go:155: Step 'make e2e-test' finished in 1h6m48.161290586s
2023/01/30 08:52:48 aksengine_helpers.go:425: downloading /root/tmp2890212374/log-dump.sh from https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump.sh
2023/01/30 08:52:48 util.go:70: curl https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump.sh
2023/01/30 08:52:48 process.go:153: Running: chmod +x /root/tmp2890212374/log-dump.sh
2023/01/30 08:52:48 process.go:155: Step 'chmod +x /root/tmp2890212374/log-dump.sh' finished in 3.859265ms
2023/01/30 08:52:48 aksengine_helpers.go:425: downloading /root/tmp2890212374/log-dump-daemonset.yaml from https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump-daemonset.yaml
... skipping 63 lines ...
ssh key file /root/.ssh/id_rsa does not exist. Exiting.
2023/01/30 08:53:23 process.go:155: Step 'bash -c /root/tmp2890212374/win-ci-logs-collector.sh kubetest-z5czzjqr.westus2.cloudapp.azure.com /root/tmp2890212374 /root/.ssh/id_rsa' finished in 3.858391ms
2023/01/30 08:53:23 aksengine.go:1141: Deleting resource group: kubetest-z5czzjqr.
2023/01/30 08:59:29 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2023/01/30 08:59:29 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
2023/01/30 08:59:29 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 272.036383ms
2023/01/30 08:59:29 main.go:328: Something went wrong: encountered 1 errors: [error during make e2e-test: exit status 2]
+ EXIT_VALUE=1
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker 59a0a92ee487
... skipping 4 lines ...