PR | andyzhangx: fix: switch base image to fix CVEs
Result | FAILURE
Tests | 1 failed / 13 succeeded
Started |
Elapsed | 1h12m
Revision | cf89f397b97f324f2562dcb6f175612715802606
Refs | 1704
job-version | v1.27.0-alpha.1.73+8e642d3d0deab2
kubetest-version | v20230117-50d6df3625
revision | v1.27.0-alpha.1.73+8e642d3d0deab2
error during make e2e-test: exit status 2
from junit_runner.xml
kubetest Check APIReachability
kubetest Deferred TearDown
kubetest DumpClusterLogs
kubetest GetDeployer
kubetest IsUp
kubetest Prepare
kubetest TearDown
kubetest TearDown Previous
kubetest Timeout
kubetest Up
kubetest kubectl version
kubetest list nodes
kubetest test setup
... skipping 107 lines ...
Downloading https://get.helm.sh/helm-v3.11.0-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
docker pull k8sprow.azurecr.io/azuredisk-csi:v1.27.0-db7daf80cf6d95173fec925514d6a1d9169180df || make container-all push-manifest
Error response from daemon: manifest for k8sprow.azurecr.io/azuredisk-csi:v1.27.0-db7daf80cf6d95173fec925514d6a1d9169180df not found: manifest unknown: manifest tagged by "v1.27.0-db7daf80cf6d95173fec925514d6a1d9169180df" is not found
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver'
CGO_ENABLED=0 GOOS=windows go build -a -ldflags "-X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.driverVersion=v1.27.0-db7daf80cf6d95173fec925514d6a1d9169180df -X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.gitCommit=db7daf80cf6d95173fec925514d6a1d9169180df -X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.buildDate=2023-01-29T04:13:20Z -extldflags "-static"" -mod vendor -o _output/amd64/azurediskplugin.exe ./pkg/azurediskplugin
docker buildx rm container-builder || true
ERROR: no builder "container-builder" found
docker buildx create --use --name=container-builder
container-builder
# enable qemu for arm64 build
# https://github.com/docker/buildx/issues/464#issuecomment-741507760
docker run --privileged --rm tonistiigi/binfmt --uninstall qemu-aarch64
Unable to find image 'tonistiigi/binfmt:latest' locally
... skipping 1097 lines ...
                type: string
            type: object
            oneOf:
            - required: ["persistentVolumeClaimName"]
            - required: ["volumeSnapshotContentName"]
          volumeSnapshotClassName:
            description: 'VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default VolumeSnapshotClasses: one default per CSI Driver. If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exist for a given CSI Driver and more than one have been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field.'
            type: string
        required:
        - source
        type: object
      status:
        description: status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.
        ... skipping 2 lines ...
          description: 'boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.'
          type: string
        creationTime:
          description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown.
          format: date-time
          type: string
        error:
          description: error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers (i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurs during the snapshot creation. Upon success, this error field will be cleared.
          properties:
            message:
              description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.'
              type: string
            time:
              description: time is the timestamp when the error was encountered.
              format: date-time
              type: string
          type: object
        readyToUse:
          description: readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown.
          type: boolean
        restoreSize:
          type: string
          description: restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown.
          pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
          x-kubernetes-int-or-string: true
      type: object
    required:
    - spec
    type: object
... skipping 60 lines ...
                type: string
              volumeSnapshotContentName:
                description: volumeSnapshotContentName specifies the name of a pre-existing VolumeSnapshotContent object representing an existing volume snapshot. This field should be set if the snapshot already exists and only needs a representation in Kubernetes. This field is immutable.
                type: string
            type: object
          volumeSnapshotClassName:
            description: 'VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default VolumeSnapshotClasses: one default per CSI Driver. If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exist for a given CSI Driver and more than one have been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field.'
            type: string
        required:
        - source
        type: object
      status:
        description: status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.
        ... skipping 2 lines ...
          description: 'boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.'
          type: string
        creationTime:
          description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown.
          format: date-time
          type: string
        error:
          description: error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers (i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurs during the snapshot creation. Upon success, this error field will be cleared.
          properties:
            message:
              description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.'
              type: string
            time:
              description: time is the timestamp when the error was encountered.
              format: date-time
              type: string
          type: object
        readyToUse:
          description: readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown.
          type: boolean
        restoreSize:
          type: string
          description: restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown.
          pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
          x-kubernetes-int-or-string: true
      type: object
    required:
    - spec
    type: object
... skipping 254 lines ...
      description: status represents the current information of a snapshot.
      properties:
        creationTime:
          description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. On Unix, the command `date +%s%N` returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC.
          format: int64
          type: integer
        error:
          description: error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared.
          properties:
            message:
              description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.'
              type: string
            time:
              description: time is the timestamp when the error was encountered.
              format: date-time
              type: string
          type: object
        readyToUse:
          description: readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown.
          type: boolean
        restoreSize:
          description: restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown.
          format: int64
          minimum: 0
          type: integer
        snapshotHandle:
          description: snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress.
          type: string
      type: object
    required:
    - spec
    type: object
    served: true
... skipping 108 lines ...
      description: status represents the current information of a snapshot.
      properties:
        creationTime:
          description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. On Unix, the command `date +%s%N` returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC.
          format: int64
          type: integer
        error:
          description: error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared.
          properties:
            message:
              description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.'
              type: string
            time:
              description: time is the timestamp when the error was encountered.
              format: date-time
              type: string
          type: object
        readyToUse:
          description: readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown.
          type: boolean
        restoreSize:
          description: restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown.
          format: int64
          minimum: 0
          type: integer
        snapshotHandle:
          description: snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress.
          type: string
      type: object
    required:
    - spec
    type: object
    served: true
... skipping 865 lines ...
        image: "mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.6.0"
        args:
          - "-csi-address=$(ADDRESS)"
          - "-v=2"
          - "-leader-election"
          - "--leader-election-namespace=kube-system"
          - '-handle-volume-inuse-error=false'
          - '-feature-gates=RecoverVolumeExpansionFailure=true'
          - "-timeout=240s"
        env:
          - name: ADDRESS
            value: /csi/csi.sock
        volumeMounts:
... skipping 216 lines ...
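The CRD text above notes that creationTime is encoded as an int64 of Unix nanoseconds and cites `date +%s%N`. A minimal shell sketch of that encoding (GNU `date`; `%N` is not portable to every platform):

```shell
# Print the current time as int64 nanoseconds since 1970-01-01 00:00:00 UTC,
# the creationTime encoding described in the CRD above (requires GNU date).
ns=$(date +%s%N)
echo "$ns"
```

For dates after 2001 the value is 19 digits, comfortably inside an int64 until the year 2262.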
STEP: setting up the StorageClass 01/29/23 04:19:12.229
STEP: creating a StorageClass 01/29/23 04:19:12.23
STEP: setting up the PVC and PV 01/29/23 04:19:12.29
STEP: creating a PVC 01/29/23 04:19:12.29
STEP: setting up the pod 01/29/23 04:19:12.351
STEP: deploying the pod 01/29/23 04:19:12.352
STEP: checking that the pod's command exits with no error 01/29/23 04:19:12.412
Jan 29 04:19:12.413: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-98hrj" in namespace "azuredisk-8081" to be "Succeeded or Failed"
Jan 29 04:19:12.471: INFO: Pod "azuredisk-volume-tester-98hrj": Phase="Pending", Reason="", readiness=false. Elapsed: 58.508272ms
Jan 29 04:19:14.530: INFO: Pod "azuredisk-volume-tester-98hrj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117344743s
Jan 29 04:19:16.531: INFO: Pod "azuredisk-volume-tester-98hrj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118266397s
Jan 29 04:19:18.531: INFO: Pod "azuredisk-volume-tester-98hrj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118961309s
Jan 29 04:19:20.530: INFO: Pod "azuredisk-volume-tester-98hrj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117799268s
Jan 29 04:19:22.532: INFO: Pod "azuredisk-volume-tester-98hrj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.119507152s
... skipping 6 lines ...
Jan 29 04:19:36.533: INFO: Pod "azuredisk-volume-tester-98hrj": Phase="Pending", Reason="", readiness=false. Elapsed: 24.120418387s
Jan 29 04:19:38.533: INFO: Pod "azuredisk-volume-tester-98hrj": Phase="Pending", Reason="", readiness=false. Elapsed: 26.120780424s
Jan 29 04:19:40.546: INFO: Pod "azuredisk-volume-tester-98hrj": Phase="Pending", Reason="", readiness=false. Elapsed: 28.133669909s
Jan 29 04:19:42.531: INFO: Pod "azuredisk-volume-tester-98hrj": Phase="Pending", Reason="", readiness=false. Elapsed: 30.118373564s
Jan 29 04:19:44.532: INFO: Pod "azuredisk-volume-tester-98hrj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.119115777s
STEP: Saw pod success 01/29/23 04:19:44.532
Jan 29 04:19:44.532: INFO: Pod "azuredisk-volume-tester-98hrj" satisfied condition "Succeeded or Failed"
Jan 29 04:19:44.532: INFO: deleting Pod "azuredisk-8081"/"azuredisk-volume-tester-98hrj"
Jan 29 04:19:44.633: INFO: Pod azuredisk-volume-tester-98hrj has the following logs: hello world
STEP: Deleting pod azuredisk-volume-tester-98hrj in namespace azuredisk-8081 01/29/23 04:19:44.633
STEP: validating provisioned PV 01/29/23 04:19:44.759
STEP: checking the PV 01/29/23 04:19:44.817
... skipping 44 lines ...
STEP: setting up the StorageClass 01/29/23 04:19:12.229
STEP: creating a StorageClass 01/29/23 04:19:12.23
STEP: setting up the PVC and PV 01/29/23 04:19:12.29
STEP: creating a PVC 01/29/23 04:19:12.29
STEP: setting up the pod 01/29/23 04:19:12.351
STEP: deploying the pod 01/29/23 04:19:12.352
STEP: checking that the pod's command exits with no error 01/29/23 04:19:12.412
Jan 29 04:19:12.413: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-98hrj" in namespace "azuredisk-8081" to be "Succeeded or Failed"
Jan 29 04:19:12.471: INFO: Pod "azuredisk-volume-tester-98hrj": Phase="Pending", Reason="", readiness=false. Elapsed: 58.508272ms
Jan 29 04:19:14.530: INFO: Pod "azuredisk-volume-tester-98hrj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117344743s
Jan 29 04:19:16.531: INFO: Pod "azuredisk-volume-tester-98hrj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118266397s
Jan 29 04:19:18.531: INFO: Pod "azuredisk-volume-tester-98hrj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118961309s
Jan 29 04:19:20.530: INFO: Pod "azuredisk-volume-tester-98hrj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117799268s
Jan 29 04:19:22.532: INFO: Pod "azuredisk-volume-tester-98hrj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.119507152s
... skipping 6 lines ...
Jan 29 04:19:36.533: INFO: Pod "azuredisk-volume-tester-98hrj": Phase="Pending", Reason="", readiness=false. Elapsed: 24.120418387s
Jan 29 04:19:38.533: INFO: Pod "azuredisk-volume-tester-98hrj": Phase="Pending", Reason="", readiness=false. Elapsed: 26.120780424s
Jan 29 04:19:40.546: INFO: Pod "azuredisk-volume-tester-98hrj": Phase="Pending", Reason="", readiness=false. Elapsed: 28.133669909s
Jan 29 04:19:42.531: INFO: Pod "azuredisk-volume-tester-98hrj": Phase="Pending", Reason="", readiness=false. Elapsed: 30.118373564s
Jan 29 04:19:44.532: INFO: Pod "azuredisk-volume-tester-98hrj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.119115777s
STEP: Saw pod success 01/29/23 04:19:44.532
Jan 29 04:19:44.532: INFO: Pod "azuredisk-volume-tester-98hrj" satisfied condition "Succeeded or Failed"
Jan 29 04:19:44.532: INFO: deleting Pod "azuredisk-8081"/"azuredisk-volume-tester-98hrj"
Jan 29 04:19:44.633: INFO: Pod azuredisk-volume-tester-98hrj has the following logs: hello world
STEP: Deleting pod azuredisk-volume-tester-98hrj in namespace azuredisk-8081 01/29/23 04:19:44.633
STEP: validating provisioned PV 01/29/23 04:19:44.759
STEP: checking the PV 01/29/23 04:19:44.817
... skipping 39 lines ...
Jan 29 04:20:28.737: INFO: PersistentVolumeClaim pvc-qf5gn found but phase is Pending instead of Bound.
Jan 29 04:20:30.797: INFO: PersistentVolumeClaim pvc-qf5gn found and phase=Bound (4.177193242s)
STEP: checking the PVC 01/29/23 04:20:30.797
STEP: validating provisioned PV 01/29/23 04:20:30.857
STEP: checking the PV 01/29/23 04:20:30.915
STEP: deploying the pod 01/29/23 04:20:30.915
STEP: checking that the pods command exits with no error 01/29/23 04:20:30.974
Jan 29 04:20:30.974: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-xv2w7" in namespace "azuredisk-2540" to be "Succeeded or Failed"
Jan 29 04:20:31.032: INFO: Pod "azuredisk-volume-tester-xv2w7": Phase="Pending", Reason="", readiness=false. Elapsed: 58.131711ms
Jan 29 04:20:33.091: INFO: Pod "azuredisk-volume-tester-xv2w7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117396775s
Jan 29 04:20:35.093: INFO: Pod "azuredisk-volume-tester-xv2w7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118712192s
Jan 29 04:20:37.092: INFO: Pod "azuredisk-volume-tester-xv2w7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117744174s
Jan 29 04:20:39.091: INFO: Pod "azuredisk-volume-tester-xv2w7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116917053s
Jan 29 04:20:41.091: INFO: Pod "azuredisk-volume-tester-xv2w7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.117206787s
Jan 29 04:20:43.091: INFO: Pod "azuredisk-volume-tester-xv2w7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.11689994s
Jan 29 04:20:45.091: INFO: Pod "azuredisk-volume-tester-xv2w7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.117090383s
Jan 29 04:20:47.092: INFO: Pod "azuredisk-volume-tester-xv2w7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.117911372s
Jan 29 04:20:49.091: INFO: Pod "azuredisk-volume-tester-xv2w7": Phase="Pending", Reason="", readiness=false. Elapsed: 18.11743366s
Jan 29 04:20:51.092: INFO: Pod "azuredisk-volume-tester-xv2w7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.117964118s
STEP: Saw pod success 01/29/23 04:20:51.092
Jan 29 04:20:51.092: INFO: Pod "azuredisk-volume-tester-xv2w7" satisfied condition "Succeeded or Failed"
Jan 29 04:20:51.092: INFO: deleting Pod "azuredisk-2540"/"azuredisk-volume-tester-xv2w7"
Jan 29 04:20:51.154: INFO: Pod azuredisk-volume-tester-xv2w7 has the following logs: hello world
STEP: Deleting pod azuredisk-volume-tester-xv2w7 in namespace azuredisk-2540 01/29/23 04:20:51.154
Jan 29 04:20:51.221: INFO: deleting PVC "azuredisk-2540"/"pvc-qf5gn"
Jan 29 04:20:51.221: INFO: Deleting PersistentVolumeClaim "pvc-qf5gn"
... skipping 38 lines ...
Jan 29 04:20:28.737: INFO: PersistentVolumeClaim pvc-qf5gn found but phase is Pending instead of Bound.
Jan 29 04:20:30.797: INFO: PersistentVolumeClaim pvc-qf5gn found and phase=Bound (4.177193242s)
STEP: checking the PVC 01/29/23 04:20:30.797
STEP: validating provisioned PV 01/29/23 04:20:30.857
STEP: checking the PV 01/29/23 04:20:30.915
STEP: deploying the pod 01/29/23 04:20:30.915
STEP: checking that the pods command exits with no error 01/29/23 04:20:30.974
Jan 29 04:20:30.974: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-xv2w7" in namespace "azuredisk-2540" to be "Succeeded or Failed"
Jan 29 04:20:31.032: INFO: Pod "azuredisk-volume-tester-xv2w7": Phase="Pending", Reason="", readiness=false. Elapsed: 58.131711ms
Jan 29 04:20:33.091: INFO: Pod "azuredisk-volume-tester-xv2w7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117396775s
Jan 29 04:20:35.093: INFO: Pod "azuredisk-volume-tester-xv2w7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118712192s
Jan 29 04:20:37.092: INFO: Pod "azuredisk-volume-tester-xv2w7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117744174s
Jan 29 04:20:39.091: INFO: Pod "azuredisk-volume-tester-xv2w7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116917053s
Jan 29 04:20:41.091: INFO: Pod "azuredisk-volume-tester-xv2w7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.117206787s
Jan 29 04:20:43.091: INFO: Pod "azuredisk-volume-tester-xv2w7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.11689994s
Jan 29 04:20:45.091: INFO: Pod "azuredisk-volume-tester-xv2w7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.117090383s
Jan 29 04:20:47.092: INFO: Pod "azuredisk-volume-tester-xv2w7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.117911372s
Jan 29 04:20:49.091: INFO: Pod "azuredisk-volume-tester-xv2w7": Phase="Pending", Reason="", readiness=false. Elapsed: 18.11743366s
Jan 29 04:20:51.092: INFO: Pod "azuredisk-volume-tester-xv2w7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.117964118s
STEP: Saw pod success 01/29/23 04:20:51.092
Jan 29 04:20:51.092: INFO: Pod "azuredisk-volume-tester-xv2w7" satisfied condition "Succeeded or Failed"
Jan 29 04:20:51.092: INFO: deleting Pod "azuredisk-2540"/"azuredisk-volume-tester-xv2w7"
Jan 29 04:20:51.154: INFO: Pod azuredisk-volume-tester-xv2w7 has the following logs: hello world
STEP: Deleting pod azuredisk-volume-tester-xv2w7 in namespace azuredisk-2540 01/29/23 04:20:51.154
Jan 29 04:20:51.221: INFO: deleting PVC "azuredisk-2540"/"pvc-qf5gn"
Jan 29 04:20:51.221: INFO: Deleting PersistentVolumeClaim "pvc-qf5gn"
... skipping 30 lines ...
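The Waiting/Pending lines above show the e2e framework polling the pod phase roughly every 2 seconds until it reaches "Succeeded or Failed" or a 15-minute timeout. A minimal sketch of the same poll loop in shell; `check_phase` here is a stand-in for the real lookup (for example `kubectl get pod ... -o jsonpath='{.status.phase}'`), which is an assumption, not the framework's actual code:

```shell
# Poll a phase-returning command until it reports Succeeded or Failed,
# or until the timeout (in seconds) expires. Mirrors the e2e wait above.
poll_phase() {
  timeout=$1; shift
  deadline=$(( $(date +%s) + timeout ))
  while [ "$(date +%s)" -lt "$deadline" ]; do
    phase=$("$@")            # e.g. kubectl get pod POD -o jsonpath='{.status.phase}'
    case "$phase" in
      Succeeded|Failed) echo "$phase"; return 0 ;;
    esac
    sleep 2                  # the log shows ~2s between polls
  done
  echo "timeout"; return 1
}
```

With a recent kubectl, `kubectl wait` can express the same condition directly, e.g. `--for=jsonpath='{.status.phase}'=Succeeded --timeout=15m`.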
STEP: setting up the StorageClass 01/29/23 04:21:32.899
STEP: creating a StorageClass 01/29/23 04:21:32.899
STEP: setting up the PVC and PV 01/29/23 04:21:32.961
STEP: creating a PVC 01/29/23 04:21:32.961
STEP: setting up the pod 01/29/23 04:21:33.023
STEP: deploying the pod 01/29/23 04:21:33.023
STEP: checking that the pod's command exits with no error 01/29/23 04:21:33.083
Jan 29 04:21:33.083: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-9mf8x" in namespace "azuredisk-4728" to be "Succeeded or Failed"
Jan 29 04:21:33.141: INFO: Pod "azuredisk-volume-tester-9mf8x": Phase="Pending", Reason="", readiness=false. Elapsed: 58.354162ms
Jan 29 04:21:35.202: INFO: Pod "azuredisk-volume-tester-9mf8x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119070176s
Jan 29 04:21:37.202: INFO: Pod "azuredisk-volume-tester-9mf8x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119385831s
Jan 29 04:21:39.203: INFO: Pod "azuredisk-volume-tester-9mf8x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11989111s
Jan 29 04:21:41.201: INFO: Pod "azuredisk-volume-tester-9mf8x": Phase="Pending", Reason="", readiness=false. Elapsed: 8.118541183s
Jan 29 04:21:43.200: INFO: Pod "azuredisk-volume-tester-9mf8x": Phase="Pending", Reason="", readiness=false. Elapsed: 10.117688693s
... skipping 17 lines ...
Jan 29 04:22:19.201: INFO: Pod "azuredisk-volume-tester-9mf8x": Phase="Pending", Reason="", readiness=false. Elapsed: 46.118727141s
Jan 29 04:22:21.202: INFO: Pod "azuredisk-volume-tester-9mf8x": Phase="Pending", Reason="", readiness=false. Elapsed: 48.118825605s
Jan 29 04:22:23.201: INFO: Pod "azuredisk-volume-tester-9mf8x": Phase="Pending", Reason="", readiness=false. Elapsed: 50.118415009s
Jan 29 04:22:25.202: INFO: Pod "azuredisk-volume-tester-9mf8x": Phase="Pending", Reason="", readiness=false. Elapsed: 52.119508883s
Jan 29 04:22:27.200: INFO: Pod "azuredisk-volume-tester-9mf8x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 54.11740925s
STEP: Saw pod success 01/29/23 04:22:27.2
Jan 29 04:22:27.201: INFO: Pod "azuredisk-volume-tester-9mf8x" satisfied condition "Succeeded or Failed"
Jan 29 04:22:27.201: INFO: deleting Pod "azuredisk-4728"/"azuredisk-volume-tester-9mf8x"
Jan 29 04:22:27.287: INFO: Pod azuredisk-volume-tester-9mf8x has the following logs: hello world
STEP: Deleting pod azuredisk-volume-tester-9mf8x in namespace azuredisk-4728 01/29/23 04:22:27.287
STEP: validating provisioned PV 01/29/23 04:22:27.412
STEP: checking the PV 01/29/23 04:22:27.475
... skipping 33 lines ...
STEP: setting up the StorageClass 01/29/23 04:21:32.899
STEP: creating a StorageClass 01/29/23 04:21:32.899
STEP: setting up the PVC and PV 01/29/23 04:21:32.961
STEP: creating a PVC 01/29/23 04:21:32.961
STEP: setting up the pod 01/29/23 04:21:33.023
STEP: deploying the pod 01/29/23 04:21:33.023
STEP: checking that the pod's command exits with no error 01/29/23 04:21:33.083
Jan 29 04:21:33.083: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-9mf8x" in namespace "azuredisk-4728" to be "Succeeded or Failed"
Jan 29 04:21:33.141: INFO: Pod "azuredisk-volume-tester-9mf8x": Phase="Pending", Reason="", readiness=false. Elapsed: 58.354162ms
Jan 29 04:21:35.202: INFO: Pod "azuredisk-volume-tester-9mf8x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119070176s
Jan 29 04:21:37.202: INFO: Pod "azuredisk-volume-tester-9mf8x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119385831s
Jan 29 04:21:39.203: INFO: Pod "azuredisk-volume-tester-9mf8x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11989111s
Jan 29 04:21:41.201: INFO: Pod "azuredisk-volume-tester-9mf8x": Phase="Pending", Reason="", readiness=false. Elapsed: 8.118541183s
Jan 29 04:21:43.200: INFO: Pod "azuredisk-volume-tester-9mf8x": Phase="Pending", Reason="", readiness=false. Elapsed: 10.117688693s
... skipping 17 lines ...
Jan 29 04:22:19.201: INFO: Pod "azuredisk-volume-tester-9mf8x": Phase="Pending", Reason="", readiness=false. Elapsed: 46.118727141s
Jan 29 04:22:21.202: INFO: Pod "azuredisk-volume-tester-9mf8x": Phase="Pending", Reason="", readiness=false. Elapsed: 48.118825605s
Jan 29 04:22:23.201: INFO: Pod "azuredisk-volume-tester-9mf8x": Phase="Pending", Reason="", readiness=false. Elapsed: 50.118415009s
Jan 29 04:22:25.202: INFO: Pod "azuredisk-volume-tester-9mf8x": Phase="Pending", Reason="", readiness=false. Elapsed: 52.119508883s
Jan 29 04:22:27.200: INFO: Pod "azuredisk-volume-tester-9mf8x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 54.11740925s
STEP: Saw pod success 01/29/23 04:22:27.2
Jan 29 04:22:27.201: INFO: Pod "azuredisk-volume-tester-9mf8x" satisfied condition "Succeeded or Failed"
Jan 29 04:22:27.201: INFO: deleting Pod "azuredisk-4728"/"azuredisk-volume-tester-9mf8x"
Jan 29 04:22:27.287: INFO: Pod azuredisk-volume-tester-9mf8x has the following logs: hello world
STEP: Deleting pod azuredisk-volume-tester-9mf8x in namespace azuredisk-4728 01/29/23 04:22:27.287
STEP: validating provisioned PV 01/29/23 04:22:27.412
STEP: checking the PV 01/29/23 04:22:27.475
... skipping 34 lines ...
STEP: setting up the PVC and PV 01/29/23 04:23:09.227
STEP: creating a PVC 01/29/23 04:23:09.227
STEP: setting up the pod 01/29/23 04:23:09.288
STEP: deploying the pod 01/29/23 04:23:09.288
STEP: checking that the pod has 'FailedMount' event 01/29/23 04:23:09.348
Jan 29 04:23:31.467: INFO: deleting Pod "azuredisk-5466"/"azuredisk-volume-tester-gs75x"
Jan 29 04:23:31.528: INFO: Error getting logs for pod azuredisk-volume-tester-gs75x: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-gs75x)
STEP: Deleting pod azuredisk-volume-tester-gs75x in namespace azuredisk-5466 01/29/23 04:23:31.528
STEP: validating provisioned PV 01/29/23 04:23:31.648
STEP: checking the PV 01/29/23 04:23:31.706
Jan 29 04:23:31.706: INFO: deleting PVC "azuredisk-5466"/"pvc-jcl9k"
Jan 29 04:23:31.706: INFO: Deleting PersistentVolumeClaim "pvc-jcl9k"
STEP: waiting for claim's PV "pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f" to be deleted 01/29/23 04:23:31.766
... skipping 33 lines ...
... skipping 30 lines ...
STEP: setting up the StorageClass 01/29/23 04:24:18.461
STEP: creating a StorageClass 01/29/23 04:24:18.462
STEP: setting up the PVC and PV 01/29/23 04:24:18.522
STEP: creating a PVC 01/29/23 04:24:18.522
STEP: setting up the pod 01/29/23 04:24:18.589
STEP: deploying the pod 01/29/23 04:24:18.589
STEP: checking that the pod's command exits with no error 01/29/23 04:24:18.653
Jan 29 04:24:18.653: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-nlxlg" in namespace "azuredisk-2790" to be "Succeeded or Failed"
Jan 29 04:24:18.710: INFO: Pod "azuredisk-volume-tester-nlxlg": Phase="Pending", Reason="", readiness=false. Elapsed: 57.234367ms
Jan 29 04:24:20.769: INFO: Pod "azuredisk-volume-tester-nlxlg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115894817s
Jan 29 04:24:22.769: INFO: Pod "azuredisk-volume-tester-nlxlg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116212301s
Jan 29 04:24:24.767: INFO: Pod "azuredisk-volume-tester-nlxlg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.1146327s
Jan 29 04:24:26.774: INFO: Pod "azuredisk-volume-tester-nlxlg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.12152209s
Jan 29 04:24:28.768: INFO: Pod "azuredisk-volume-tester-nlxlg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.114948582s
... skipping 3 lines ...
Jan 29 04:24:36.770: INFO: Pod "azuredisk-volume-tester-nlxlg": Phase="Pending", Reason="", readiness=false. Elapsed: 18.117479713s
Jan 29 04:24:38.769: INFO: Pod "azuredisk-volume-tester-nlxlg": Phase="Pending", Reason="", readiness=false. Elapsed: 20.116655364s
Jan 29 04:24:40.768: INFO: Pod "azuredisk-volume-tester-nlxlg": Phase="Pending", Reason="", readiness=false. Elapsed: 22.115457702s
Jan 29 04:24:42.769: INFO: Pod "azuredisk-volume-tester-nlxlg": Phase="Pending", Reason="", readiness=false. Elapsed: 24.116455408s
Jan 29 04:24:44.769: INFO: Pod "azuredisk-volume-tester-nlxlg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.116418735s
STEP: Saw pod success 01/29/23 04:24:44.77
Jan 29 04:24:44.770: INFO: Pod "azuredisk-volume-tester-nlxlg" satisfied condition "Succeeded or Failed"
Jan 29 04:24:44.770: INFO: deleting Pod "azuredisk-2790"/"azuredisk-volume-tester-nlxlg"
Jan 29 04:24:44.838: INFO: Pod azuredisk-volume-tester-nlxlg has the following logs: e2e-test
STEP: Deleting pod azuredisk-volume-tester-nlxlg in namespace azuredisk-2790 01/29/23 04:24:44.838
STEP: validating provisioned PV 01/29/23 04:24:44.961
STEP: checking the PV 01/29/23 04:24:45.019
... skipping 33 lines ...
... skipping 37 lines ...
STEP: creating volume in external rg azuredisk-csi-driver-test-f4a79b08-9f8c-11ed-b28e-027493caca65 01/29/23 04:25:28.162
STEP: setting up the StorageClass 01/29/23 04:25:28.162
STEP: creating a StorageClass 01/29/23 04:25:28.163
STEP: setting up the PVC and PV 01/29/23 04:25:28.225
STEP: creating a PVC 01/29/23 04:25:28.225
STEP: deploying the pod 01/29/23 04:25:28.288
STEP: checking that the pod's command exits with no error 01/29/23 04:25:28.351
Jan 29 04:25:28.351: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-kgzsr" in namespace "azuredisk-5356" to be "Succeeded or Failed"
Jan 29 04:25:28.410: INFO: Pod "azuredisk-volume-tester-kgzsr": Phase="Pending", Reason="", readiness=false. Elapsed: 59.02992ms
Jan 29 04:25:30.469: INFO: Pod "azuredisk-volume-tester-kgzsr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11738148s
Jan 29 04:25:32.469: INFO: Pod "azuredisk-volume-tester-kgzsr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117264851s
Jan 29 04:25:34.471: INFO: Pod "azuredisk-volume-tester-kgzsr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119647102s
Jan 29 04:25:36.474: INFO: Pod "azuredisk-volume-tester-kgzsr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.122119446s
Jan 29 04:25:38.470: INFO: Pod "azuredisk-volume-tester-kgzsr": Phase="Pending", Reason="", readiness=false. Elapsed: 10.118074981s
... skipping 2 lines ...
Jan 29 04:25:44.469: INFO: Pod "azuredisk-volume-tester-kgzsr": Phase="Pending", Reason="", readiness=false. Elapsed: 16.117548053s
Jan 29 04:25:46.469: INFO: Pod "azuredisk-volume-tester-kgzsr": Phase="Pending", Reason="", readiness=false. Elapsed: 18.117408226s
Jan 29 04:25:48.472: INFO: Pod "azuredisk-volume-tester-kgzsr": Phase="Pending", Reason="", readiness=false. Elapsed: 20.120243186s
Jan 29 04:25:50.471: INFO: Pod "azuredisk-volume-tester-kgzsr": Phase="Pending", Reason="", readiness=false. Elapsed: 22.119476765s
Jan 29 04:25:52.471: INFO: Pod "azuredisk-volume-tester-kgzsr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.119717836s
STEP: Saw pod success 01/29/23 04:25:52.471
Jan 29 04:25:52.471: INFO: Pod "azuredisk-volume-tester-kgzsr" satisfied condition "Succeeded or Failed"
Jan 29 04:25:52.471: INFO: deleting Pod "azuredisk-5356"/"azuredisk-volume-tester-kgzsr"
Jan 29 04:25:52.533: INFO: Pod azuredisk-volume-tester-kgzsr has the following logs: hello world
STEP: Deleting pod azuredisk-volume-tester-kgzsr in namespace azuredisk-5356 01/29/23 04:25:52.533
STEP: validating provisioned PV 01/29/23 04:25:52.659
STEP: checking the PV 01/29/23 04:25:52.717
... skipping 43 lines ...
... skipping 50 lines ...
STEP: creating volume in external rg azuredisk-csi-driver-test-5d93755a-9f8d-11ed-b28e-027493caca65 01/29/23 04:28:23.361
STEP: setting up the StorageClass 01/29/23 04:28:23.362
STEP: creating a StorageClass 01/29/23 04:28:23.362
STEP: setting up the PVC and PV 01/29/23 04:28:23.423
STEP: creating a PVC 01/29/23 04:28:23.423
STEP: deploying the pod 01/29/23 04:28:23.483
STEP: checking that the pod's command exits with no error 01/29/23 04:28:23.544
Jan 29 04:28:23.544: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-pvvlt" in namespace "azuredisk-5194" to be "Succeeded or Failed"
Jan 29 04:28:23.603: INFO: Pod "azuredisk-volume-tester-pvvlt": Phase="Pending", Reason="", readiness=false. Elapsed: 58.900202ms
Jan 29 04:28:25.662: INFO: Pod "azuredisk-volume-tester-pvvlt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117377384s
Jan 29 04:28:27.662: INFO: Pod "azuredisk-volume-tester-pvvlt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117719866s
Jan 29 04:28:29.662: INFO: Pod "azuredisk-volume-tester-pvvlt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117417284s
Jan 29 04:28:31.662: INFO: Pod "azuredisk-volume-tester-pvvlt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117358503s
Jan 29 04:28:33.662: INFO: Pod "azuredisk-volume-tester-pvvlt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.117786542s
... skipping 10 lines ...
Jan 29 04:28:55.664: INFO: Pod "azuredisk-volume-tester-pvvlt": Phase="Pending", Reason="", readiness=false. Elapsed: 32.120065822s
Jan 29 04:28:57.664: INFO: Pod "azuredisk-volume-tester-pvvlt": Phase="Pending", Reason="", readiness=false. Elapsed: 34.119564714s
Jan 29 04:28:59.665: INFO: Pod "azuredisk-volume-tester-pvvlt": Phase="Pending", Reason="", readiness=false. Elapsed: 36.121140851s
Jan 29 04:29:01.665: INFO: Pod "azuredisk-volume-tester-pvvlt": Phase="Pending", Reason="", readiness=false. Elapsed: 38.120300025s
Jan 29 04:29:03.663: INFO: Pod "azuredisk-volume-tester-pvvlt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.11914319s
STEP: Saw pod success 01/29/23 04:29:03.663
Jan 29 04:29:03.664: INFO: Pod "azuredisk-volume-tester-pvvlt" satisfied condition "Succeeded or Failed"
Jan 29 04:29:03.664: INFO: deleting Pod "azuredisk-5194"/"azuredisk-volume-tester-pvvlt"
Jan 29 04:29:03.751: INFO: Pod azuredisk-volume-tester-pvvlt has the following logs: hello world hello world
STEP: Deleting pod azuredisk-volume-tester-pvvlt in namespace azuredisk-5194 01/29/23 04:29:03.751
STEP: validating provisioned PV 01/29/23 04:29:03.877
... skipping 63 lines ...
... skipping 53 lines ...
STEP: setting up the StorageClass 01/29/23 04:30:58.838
STEP: creating a StorageClass 01/29/23 04:30:58.838
STEP: setting up the PVC and PV 01/29/23 04:30:58.897
STEP: creating a PVC 01/29/23 04:30:58.897
STEP: setting up the pod 01/29/23 04:30:58.958
STEP: deploying the pod 01/29/23 04:30:58.958
STEP: checking that the pod's command exits with an error 01/29/23 04:30:59.02
Jan 29 04:30:59.020: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-z85rk" in namespace "azuredisk-1353" to be "Error status code"
Jan 29 04:30:59.079: INFO: Pod "azuredisk-volume-tester-z85rk": Phase="Pending", Reason="", readiness=false. Elapsed: 58.667495ms
Jan 29 04:31:01.138: INFO: Pod "azuredisk-volume-tester-z85rk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117607965s
Jan 29 04:31:03.137: INFO: Pod "azuredisk-volume-tester-z85rk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117019316s
Jan 29 04:31:05.139: INFO: Pod "azuredisk-volume-tester-z85rk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118676255s
Jan 29 04:31:07.137: INFO: Pod "azuredisk-volume-tester-z85rk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.1168675s
Jan 29 04:31:09.137: INFO: Pod "azuredisk-volume-tester-z85rk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.11704236s
... skipping 24 lines ...
Jan 29 04:31:59.138: INFO: Pod "azuredisk-volume-tester-z85rk": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.117711759s
Jan 29 04:32:01.159: INFO: Pod "azuredisk-volume-tester-z85rk": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.138703096s
Jan 29 04:32:03.138: INFO: Pod "azuredisk-volume-tester-z85rk": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.117617156s
Jan 29 04:32:05.138: INFO: Pod "azuredisk-volume-tester-z85rk": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.118100856s
Jan 29 04:32:07.138: INFO: Pod "azuredisk-volume-tester-z85rk": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.117960716s
Jan 29 04:32:09.137: INFO: Pod "azuredisk-volume-tester-z85rk": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.116460219s
Jan 29 04:32:11.137: INFO: Pod "azuredisk-volume-tester-z85rk": Phase="Failed", Reason="", readiness=false. Elapsed: 1m12.117099251s
STEP: Saw pod failure 01/29/23 04:32:11.137
Jan 29 04:32:11.137: INFO: Pod "azuredisk-volume-tester-z85rk" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message 01/29/23 04:32:11.137
Jan 29 04:32:11.248: INFO: deleting Pod "azuredisk-1353"/"azuredisk-volume-tester-z85rk"
Jan 29 04:32:11.314: INFO: Pod azuredisk-volume-tester-z85rk has the following logs: touch: /mnt/test-1/data: Read-only file system
STEP: Deleting pod azuredisk-volume-tester-z85rk in namespace azuredisk-1353 01/29/23 04:32:11.314
STEP: validating provisioned PV 01/29/23 04:32:11.437
... skipping 34 lines ...
... skipping 657 lines ...
[1mSTEP:[0m setting up the StorageClass [38;5;243m01/29/23 04:40:07.29[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 04:40:07.29[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 04:40:07.35[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 04:40:07.35[0m [1mSTEP:[0m setting up the pod [38;5;243m01/29/23 04:40:07.415[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 04:40:07.415[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 04:40:07.475[0m Jan 29 04:40:07.475: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-hfc4q" in namespace "azuredisk-59" to be "Succeeded or Failed" Jan 29 04:40:07.532: INFO: Pod "azuredisk-volume-tester-hfc4q": Phase="Pending", Reason="", readiness=false. Elapsed: 57.183759ms Jan 29 04:40:09.591: INFO: Pod "azuredisk-volume-tester-hfc4q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115700143s Jan 29 04:40:11.592: INFO: Pod "azuredisk-volume-tester-hfc4q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117121238s Jan 29 04:40:13.592: INFO: Pod "azuredisk-volume-tester-hfc4q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116673886s Jan 29 04:40:15.591: INFO: Pod "azuredisk-volume-tester-hfc4q": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11544493s Jan 29 04:40:17.591: INFO: Pod "azuredisk-volume-tester-hfc4q": Phase="Pending", Reason="", readiness=false. Elapsed: 10.115355101s ... skipping 2 lines ... Jan 29 04:40:23.592: INFO: Pod "azuredisk-volume-tester-hfc4q": Phase="Pending", Reason="", readiness=false. Elapsed: 16.116758932s Jan 29 04:40:25.592: INFO: Pod "azuredisk-volume-tester-hfc4q": Phase="Pending", Reason="", readiness=false. Elapsed: 18.116319316s Jan 29 04:40:27.593: INFO: Pod "azuredisk-volume-tester-hfc4q": Phase="Pending", Reason="", readiness=false. Elapsed: 20.117410178s Jan 29 04:40:29.591: INFO: Pod "azuredisk-volume-tester-hfc4q": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.115907691s Jan 29 04:40:31.597: INFO: Pod "azuredisk-volume-tester-hfc4q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.121998343s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 04:40:31.597[0m Jan 29 04:40:31.598: INFO: Pod "azuredisk-volume-tester-hfc4q" satisfied condition "Succeeded or Failed" [1mSTEP:[0m sleep 5s and then clone volume [38;5;243m01/29/23 04:40:31.598[0m [1mSTEP:[0m cloning existing volume [38;5;243m01/29/23 04:40:36.598[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 04:40:36.716[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 04:40:36.716[0m [1mSTEP:[0m setting up the pod [38;5;243m01/29/23 04:40:36.778[0m [1mSTEP:[0m deploying a second pod with cloned volume [38;5;243m01/29/23 04:40:36.778[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 04:40:36.839[0m Jan 29 04:40:36.839: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-vn6pl" in namespace "azuredisk-59" to be "Succeeded or Failed" Jan 29 04:40:36.896: INFO: Pod "azuredisk-volume-tester-vn6pl": Phase="Pending", Reason="", readiness=false. Elapsed: 57.346918ms Jan 29 04:40:38.955: INFO: Pod "azuredisk-volume-tester-vn6pl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115950885s Jan 29 04:40:40.955: INFO: Pod "azuredisk-volume-tester-vn6pl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116002993s Jan 29 04:40:42.955: INFO: Pod "azuredisk-volume-tester-vn6pl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116637621s Jan 29 04:40:44.955: INFO: Pod "azuredisk-volume-tester-vn6pl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116111096s Jan 29 04:40:46.955: INFO: Pod "azuredisk-volume-tester-vn6pl": Phase="Pending", Reason="", readiness=false. Elapsed: 10.11671216s ... skipping 10 lines ... Jan 29 04:41:08.957: INFO: Pod "azuredisk-volume-tester-vn6pl": Phase="Pending", Reason="", readiness=false. 
Elapsed: 32.118060692s Jan 29 04:41:10.956: INFO: Pod "azuredisk-volume-tester-vn6pl": Phase="Pending", Reason="", readiness=false. Elapsed: 34.117668678s Jan 29 04:41:12.956: INFO: Pod "azuredisk-volume-tester-vn6pl": Phase="Pending", Reason="", readiness=false. Elapsed: 36.11718104s Jan 29 04:41:14.955: INFO: Pod "azuredisk-volume-tester-vn6pl": Phase="Pending", Reason="", readiness=false. Elapsed: 38.11647403s Jan 29 04:41:16.956: INFO: Pod "azuredisk-volume-tester-vn6pl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.117162012s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 04:41:16.956[0m Jan 29 04:41:16.956: INFO: Pod "azuredisk-volume-tester-vn6pl" satisfied condition "Succeeded or Failed" Jan 29 04:41:16.956: INFO: deleting Pod "azuredisk-59"/"azuredisk-volume-tester-vn6pl" Jan 29 04:41:17.046: INFO: Pod azuredisk-volume-tester-vn6pl has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-vn6pl in namespace azuredisk-59 [38;5;243m01/29/23 04:41:17.046[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/29/23 04:41:17.167[0m [1mSTEP:[0m checking the PV [38;5;243m01/29/23 04:41:17.225[0m ... skipping 47 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/29/23 04:40:07.29[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/29/23 04:40:07.29[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/29/23 04:40:07.35[0m [1mSTEP:[0m creating a PVC [38;5;243m01/29/23 04:40:07.35[0m [1mSTEP:[0m setting up the pod [38;5;243m01/29/23 04:40:07.415[0m [1mSTEP:[0m deploying the pod [38;5;243m01/29/23 04:40:07.415[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/29/23 04:40:07.475[0m Jan 29 04:40:07.475: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-hfc4q" in namespace "azuredisk-59" to be "Succeeded or Failed" Jan 29 04:40:07.532: INFO: Pod "azuredisk-volume-tester-hfc4q": Phase="Pending", Reason="", readiness=false. 
Elapsed: 57.183759ms
Jan 29 04:40:09.591: INFO: Pod "azuredisk-volume-tester-hfc4q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115700143s
Jan 29 04:40:11.592: INFO: Pod "azuredisk-volume-tester-hfc4q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117121238s
Jan 29 04:40:13.592: INFO: Pod "azuredisk-volume-tester-hfc4q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116673886s
Jan 29 04:40:15.591: INFO: Pod "azuredisk-volume-tester-hfc4q": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11544493s
Jan 29 04:40:17.591: INFO: Pod "azuredisk-volume-tester-hfc4q": Phase="Pending", Reason="", readiness=false. Elapsed: 10.115355101s
... skipping 2 lines ...
Jan 29 04:40:23.592: INFO: Pod "azuredisk-volume-tester-hfc4q": Phase="Pending", Reason="", readiness=false. Elapsed: 16.116758932s
Jan 29 04:40:25.592: INFO: Pod "azuredisk-volume-tester-hfc4q": Phase="Pending", Reason="", readiness=false. Elapsed: 18.116319316s
Jan 29 04:40:27.593: INFO: Pod "azuredisk-volume-tester-hfc4q": Phase="Pending", Reason="", readiness=false. Elapsed: 20.117410178s
Jan 29 04:40:29.591: INFO: Pod "azuredisk-volume-tester-hfc4q": Phase="Running", Reason="", readiness=true. Elapsed: 22.115907691s
Jan 29 04:40:31.597: INFO: Pod "azuredisk-volume-tester-hfc4q": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 24.121998343s
STEP: Saw pod success 01/29/23 04:40:31.597
Jan 29 04:40:31.598: INFO: Pod "azuredisk-volume-tester-hfc4q" satisfied condition "Succeeded or Failed"
STEP: sleep 5s and then clone volume 01/29/23 04:40:31.598
STEP: cloning existing volume 01/29/23 04:40:36.598
STEP: setting up the PVC and PV 01/29/23 04:40:36.716
STEP: creating a PVC 01/29/23 04:40:36.716
STEP: setting up the pod 01/29/23 04:40:36.778
STEP: deploying a second pod with cloned volume 01/29/23 04:40:36.778
STEP: checking that the pod's command exits with no error 01/29/23 04:40:36.839
Jan 29 04:40:36.839: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-vn6pl" in namespace "azuredisk-59" to be "Succeeded or Failed"
Jan 29 04:40:36.896: INFO: Pod "azuredisk-volume-tester-vn6pl": Phase="Pending", Reason="", readiness=false. Elapsed: 57.346918ms
Jan 29 04:40:38.955: INFO: Pod "azuredisk-volume-tester-vn6pl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115950885s
Jan 29 04:40:40.955: INFO: Pod "azuredisk-volume-tester-vn6pl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116002993s
Jan 29 04:40:42.955: INFO: Pod "azuredisk-volume-tester-vn6pl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116637621s
Jan 29 04:40:44.955: INFO: Pod "azuredisk-volume-tester-vn6pl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116111096s
Jan 29 04:40:46.955: INFO: Pod "azuredisk-volume-tester-vn6pl": Phase="Pending", Reason="", readiness=false. Elapsed: 10.11671216s
... skipping 10 lines ...
Jan 29 04:41:08.957: INFO: Pod "azuredisk-volume-tester-vn6pl": Phase="Pending", Reason="", readiness=false. Elapsed: 32.118060692s
Jan 29 04:41:10.956: INFO: Pod "azuredisk-volume-tester-vn6pl": Phase="Pending", Reason="", readiness=false.
Elapsed: 34.117668678s
Jan 29 04:41:12.956: INFO: Pod "azuredisk-volume-tester-vn6pl": Phase="Pending", Reason="", readiness=false. Elapsed: 36.11718104s
Jan 29 04:41:14.955: INFO: Pod "azuredisk-volume-tester-vn6pl": Phase="Pending", Reason="", readiness=false. Elapsed: 38.11647403s
Jan 29 04:41:16.956: INFO: Pod "azuredisk-volume-tester-vn6pl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.117162012s
STEP: Saw pod success 01/29/23 04:41:16.956
Jan 29 04:41:16.956: INFO: Pod "azuredisk-volume-tester-vn6pl" satisfied condition "Succeeded or Failed"
Jan 29 04:41:16.956: INFO: deleting Pod "azuredisk-59"/"azuredisk-volume-tester-vn6pl"
Jan 29 04:41:17.046: INFO: Pod azuredisk-volume-tester-vn6pl has the following logs: hello world
STEP: Deleting pod azuredisk-volume-tester-vn6pl in namespace azuredisk-59 01/29/23 04:41:17.046
STEP: validating provisioned PV 01/29/23 04:41:17.167
STEP: checking the PV 01/29/23 04:41:17.225
... skipping 46 lines ...
STEP: setting up the StorageClass 01/29/23 04:42:09.453
STEP: creating a StorageClass 01/29/23 04:42:09.453
STEP: setting up the PVC and PV 01/29/23 04:42:09.516
STEP: creating a PVC 01/29/23 04:42:09.517
STEP: setting up the pod 01/29/23 04:42:09.577
STEP: deploying the pod 01/29/23 04:42:09.578
STEP: checking that the pod's command exits with no error 01/29/23 04:42:09.638
Jan 29 04:42:09.638: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-wsm78" in namespace "azuredisk-2546" to be "Succeeded or Failed"
Jan 29 04:42:09.695: INFO: Pod "azuredisk-volume-tester-wsm78": Phase="Pending", Reason="", readiness=false. Elapsed: 57.786504ms
Jan 29 04:42:11.754: INFO: Pod "azuredisk-volume-tester-wsm78": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.116633906s
Jan 29 04:42:13.755: INFO: Pod "azuredisk-volume-tester-wsm78": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117232729s
Jan 29 04:42:15.754: INFO: Pod "azuredisk-volume-tester-wsm78": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116214793s
Jan 29 04:42:17.755: INFO: Pod "azuredisk-volume-tester-wsm78": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117409253s
Jan 29 04:42:19.754: INFO: Pod "azuredisk-volume-tester-wsm78": Phase="Pending", Reason="", readiness=false. Elapsed: 10.115976583s
... skipping 2 lines ...
Jan 29 04:42:25.753: INFO: Pod "azuredisk-volume-tester-wsm78": Phase="Pending", Reason="", readiness=false. Elapsed: 16.115185212s
Jan 29 04:42:27.754: INFO: Pod "azuredisk-volume-tester-wsm78": Phase="Pending", Reason="", readiness=false. Elapsed: 18.116399879s
Jan 29 04:42:29.755: INFO: Pod "azuredisk-volume-tester-wsm78": Phase="Pending", Reason="", readiness=false. Elapsed: 20.117249911s
Jan 29 04:42:31.754: INFO: Pod "azuredisk-volume-tester-wsm78": Phase="Pending", Reason="", readiness=false. Elapsed: 22.116335095s
Jan 29 04:42:33.754: INFO: Pod "azuredisk-volume-tester-wsm78": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 24.116332183s
STEP: Saw pod success 01/29/23 04:42:33.754
Jan 29 04:42:33.754: INFO: Pod "azuredisk-volume-tester-wsm78" satisfied condition "Succeeded or Failed"
STEP: sleep 5s and then clone volume 01/29/23 04:42:33.754
STEP: cloning existing volume 01/29/23 04:42:38.755
STEP: setting up the PVC and PV 01/29/23 04:42:38.871
STEP: creating a PVC 01/29/23 04:42:38.871
STEP: setting up the pod 01/29/23 04:42:38.932
STEP: deploying a second pod with cloned volume 01/29/23 04:42:38.932
STEP: checking that the pod's command exits with no error 01/29/23 04:42:38.992
Jan 29 04:42:38.992: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-kw4gt" in namespace "azuredisk-2546" to be "Succeeded or Failed"
Jan 29 04:42:39.049: INFO: Pod "azuredisk-volume-tester-kw4gt": Phase="Pending", Reason="", readiness=false. Elapsed: 57.420034ms
Jan 29 04:42:41.109: INFO: Pod "azuredisk-volume-tester-kw4gt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117408499s
Jan 29 04:42:43.110: INFO: Pod "azuredisk-volume-tester-kw4gt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118045377s
Jan 29 04:42:45.108: INFO: Pod "azuredisk-volume-tester-kw4gt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116462366s
Jan 29 04:42:47.109: INFO: Pod "azuredisk-volume-tester-kw4gt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117115888s
Jan 29 04:42:49.110: INFO: Pod "azuredisk-volume-tester-kw4gt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.118165569s
Jan 29 04:42:51.110: INFO: Pod "azuredisk-volume-tester-kw4gt": Phase="Pending", Reason="", readiness=false. Elapsed: 12.117963242s
Jan 29 04:42:53.109: INFO: Pod "azuredisk-volume-tester-kw4gt": Phase="Pending", Reason="", readiness=false.
Elapsed: 14.117305845s
Jan 29 04:42:55.109: INFO: Pod "azuredisk-volume-tester-kw4gt": Phase="Pending", Reason="", readiness=false. Elapsed: 16.117372726s
Jan 29 04:42:57.108: INFO: Pod "azuredisk-volume-tester-kw4gt": Phase="Pending", Reason="", readiness=false. Elapsed: 18.116515816s
Jan 29 04:42:59.109: INFO: Pod "azuredisk-volume-tester-kw4gt": Phase="Pending", Reason="", readiness=false. Elapsed: 20.117177113s
Jan 29 04:43:01.113: INFO: Pod "azuredisk-volume-tester-kw4gt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.121588202s
STEP: Saw pod success 01/29/23 04:43:01.113
Jan 29 04:43:01.113: INFO: Pod "azuredisk-volume-tester-kw4gt" satisfied condition "Succeeded or Failed"
Jan 29 04:43:01.114: INFO: deleting Pod "azuredisk-2546"/"azuredisk-volume-tester-kw4gt"
Jan 29 04:43:01.178: INFO: Pod azuredisk-volume-tester-kw4gt has the following logs: 20.0G
STEP: Deleting pod azuredisk-volume-tester-kw4gt in namespace azuredisk-2546 01/29/23 04:43:01.178
STEP: validating provisioned PV 01/29/23 04:43:01.328
STEP: checking the PV 01/29/23 04:43:01.386
... skipping 47 lines ...
STEP: setting up the StorageClass 01/29/23 04:42:09.453
STEP: creating a StorageClass 01/29/23 04:42:09.453
STEP: setting up the PVC and PV 01/29/23 04:42:09.516
STEP: creating a PVC 01/29/23 04:42:09.517
STEP: setting up the pod 01/29/23 04:42:09.577
STEP: deploying the pod 01/29/23 04:42:09.578
STEP: checking that the pod's command exits with no error 01/29/23 04:42:09.638
Jan 29 04:42:09.638: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-wsm78" in namespace "azuredisk-2546" to be "Succeeded or Failed"
Jan 29 04:42:09.695: INFO: Pod "azuredisk-volume-tester-wsm78": Phase="Pending", Reason="", readiness=false.
Elapsed: 57.786504ms
Jan 29 04:42:11.754: INFO: Pod "azuredisk-volume-tester-wsm78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116633906s
Jan 29 04:42:13.755: INFO: Pod "azuredisk-volume-tester-wsm78": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117232729s
Jan 29 04:42:15.754: INFO: Pod "azuredisk-volume-tester-wsm78": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116214793s
Jan 29 04:42:17.755: INFO: Pod "azuredisk-volume-tester-wsm78": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117409253s
Jan 29 04:42:19.754: INFO: Pod "azuredisk-volume-tester-wsm78": Phase="Pending", Reason="", readiness=false. Elapsed: 10.115976583s
... skipping 2 lines ...
Jan 29 04:42:25.753: INFO: Pod "azuredisk-volume-tester-wsm78": Phase="Pending", Reason="", readiness=false. Elapsed: 16.115185212s
Jan 29 04:42:27.754: INFO: Pod "azuredisk-volume-tester-wsm78": Phase="Pending", Reason="", readiness=false. Elapsed: 18.116399879s
Jan 29 04:42:29.755: INFO: Pod "azuredisk-volume-tester-wsm78": Phase="Pending", Reason="", readiness=false. Elapsed: 20.117249911s
Jan 29 04:42:31.754: INFO: Pod "azuredisk-volume-tester-wsm78": Phase="Pending", Reason="", readiness=false. Elapsed: 22.116335095s
Jan 29 04:42:33.754: INFO: Pod "azuredisk-volume-tester-wsm78": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 24.116332183s
STEP: Saw pod success 01/29/23 04:42:33.754
Jan 29 04:42:33.754: INFO: Pod "azuredisk-volume-tester-wsm78" satisfied condition "Succeeded or Failed"
STEP: sleep 5s and then clone volume 01/29/23 04:42:33.754
STEP: cloning existing volume 01/29/23 04:42:38.755
STEP: setting up the PVC and PV 01/29/23 04:42:38.871
STEP: creating a PVC 01/29/23 04:42:38.871
STEP: setting up the pod 01/29/23 04:42:38.932
STEP: deploying a second pod with cloned volume 01/29/23 04:42:38.932
STEP: checking that the pod's command exits with no error 01/29/23 04:42:38.992
Jan 29 04:42:38.992: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-kw4gt" in namespace "azuredisk-2546" to be "Succeeded or Failed"
Jan 29 04:42:39.049: INFO: Pod "azuredisk-volume-tester-kw4gt": Phase="Pending", Reason="", readiness=false. Elapsed: 57.420034ms
Jan 29 04:42:41.109: INFO: Pod "azuredisk-volume-tester-kw4gt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117408499s
Jan 29 04:42:43.110: INFO: Pod "azuredisk-volume-tester-kw4gt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118045377s
Jan 29 04:42:45.108: INFO: Pod "azuredisk-volume-tester-kw4gt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116462366s
Jan 29 04:42:47.109: INFO: Pod "azuredisk-volume-tester-kw4gt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117115888s
Jan 29 04:42:49.110: INFO: Pod "azuredisk-volume-tester-kw4gt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.118165569s
Jan 29 04:42:51.110: INFO: Pod "azuredisk-volume-tester-kw4gt": Phase="Pending", Reason="", readiness=false. Elapsed: 12.117963242s
Jan 29 04:42:53.109: INFO: Pod "azuredisk-volume-tester-kw4gt": Phase="Pending", Reason="", readiness=false.
Elapsed: 14.117305845s
Jan 29 04:42:55.109: INFO: Pod "azuredisk-volume-tester-kw4gt": Phase="Pending", Reason="", readiness=false. Elapsed: 16.117372726s
Jan 29 04:42:57.108: INFO: Pod "azuredisk-volume-tester-kw4gt": Phase="Pending", Reason="", readiness=false. Elapsed: 18.116515816s
Jan 29 04:42:59.109: INFO: Pod "azuredisk-volume-tester-kw4gt": Phase="Pending", Reason="", readiness=false. Elapsed: 20.117177113s
Jan 29 04:43:01.113: INFO: Pod "azuredisk-volume-tester-kw4gt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.121588202s
STEP: Saw pod success 01/29/23 04:43:01.113
Jan 29 04:43:01.113: INFO: Pod "azuredisk-volume-tester-kw4gt" satisfied condition "Succeeded or Failed"
Jan 29 04:43:01.114: INFO: deleting Pod "azuredisk-2546"/"azuredisk-volume-tester-kw4gt"
Jan 29 04:43:01.178: INFO: Pod azuredisk-volume-tester-kw4gt has the following logs: 20.0G
STEP: Deleting pod azuredisk-volume-tester-kw4gt in namespace azuredisk-2546 01/29/23 04:43:01.178
STEP: validating provisioned PV 01/29/23 04:43:01.328
STEP: checking the PV 01/29/23 04:43:01.386
... skipping 56 lines ...
STEP: setting up the StorageClass 01/29/23 04:43:53.834
STEP: creating a StorageClass 01/29/23 04:43:53.834
STEP: setting up the PVC and PV 01/29/23 04:43:53.894
STEP: creating a PVC 01/29/23 04:43:53.894
STEP: setting up the pod 01/29/23 04:43:53.952
STEP: deploying the pod 01/29/23 04:43:53.952
STEP: checking that the pod's command exits with no error 01/29/23 04:43:54.025
Jan 29 04:43:54.025: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-l8rwg" in namespace "azuredisk-1598" to be "Succeeded or Failed"
Jan 29 04:43:54.082: INFO: Pod "azuredisk-volume-tester-l8rwg": Phase="Pending", Reason="", readiness=false.
Elapsed: 57.110059ms
Jan 29 04:43:56.142: INFO: Pod "azuredisk-volume-tester-l8rwg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116841599s
Jan 29 04:43:58.144: INFO: Pod "azuredisk-volume-tester-l8rwg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118250396s
Jan 29 04:44:00.143: INFO: Pod "azuredisk-volume-tester-l8rwg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117893483s
Jan 29 04:44:02.143: INFO: Pod "azuredisk-volume-tester-l8rwg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117385854s
Jan 29 04:44:04.141: INFO: Pod "azuredisk-volume-tester-l8rwg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.116099581s
... skipping 10 lines ...
Jan 29 04:44:26.143: INFO: Pod "azuredisk-volume-tester-l8rwg": Phase="Pending", Reason="", readiness=false. Elapsed: 32.117361759s
Jan 29 04:44:28.142: INFO: Pod "azuredisk-volume-tester-l8rwg": Phase="Pending", Reason="", readiness=false. Elapsed: 34.116609306s
Jan 29 04:44:30.140: INFO: Pod "azuredisk-volume-tester-l8rwg": Phase="Pending", Reason="", readiness=false. Elapsed: 36.115077292s
Jan 29 04:44:32.142: INFO: Pod "azuredisk-volume-tester-l8rwg": Phase="Pending", Reason="", readiness=false. Elapsed: 38.117137017s
Jan 29 04:44:34.142: INFO: Pod "azuredisk-volume-tester-l8rwg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.116290865s
STEP: Saw pod success 01/29/23 04:44:34.142
Jan 29 04:44:34.142: INFO: Pod "azuredisk-volume-tester-l8rwg" satisfied condition "Succeeded or Failed"
Jan 29 04:44:34.142: INFO: deleting Pod "azuredisk-1598"/"azuredisk-volume-tester-l8rwg"
Jan 29 04:44:34.203: INFO: Pod azuredisk-volume-tester-l8rwg has the following logs: hello world
hello world
hello world
STEP: Deleting pod azuredisk-volume-tester-l8rwg in namespace azuredisk-1598 01/29/23 04:44:34.203
... skipping 68 lines ...
STEP: setting up the StorageClass 01/29/23 04:43:53.834
STEP: creating a StorageClass 01/29/23 04:43:53.834
STEP: setting up the PVC and PV 01/29/23 04:43:53.894
STEP: creating a PVC 01/29/23 04:43:53.894
STEP: setting up the pod 01/29/23 04:43:53.952
STEP: deploying the pod 01/29/23 04:43:53.952
STEP: checking that the pod's command exits with no error 01/29/23 04:43:54.025
Jan 29 04:43:54.025: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-l8rwg" in namespace "azuredisk-1598" to be "Succeeded or Failed"
Jan 29 04:43:54.082: INFO: Pod "azuredisk-volume-tester-l8rwg": Phase="Pending", Reason="", readiness=false. Elapsed: 57.110059ms
Jan 29 04:43:56.142: INFO: Pod "azuredisk-volume-tester-l8rwg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116841599s
Jan 29 04:43:58.144: INFO: Pod "azuredisk-volume-tester-l8rwg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118250396s
Jan 29 04:44:00.143: INFO: Pod "azuredisk-volume-tester-l8rwg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117893483s
Jan 29 04:44:02.143: INFO: Pod "azuredisk-volume-tester-l8rwg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117385854s
Jan 29 04:44:04.141: INFO: Pod "azuredisk-volume-tester-l8rwg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.116099581s
... skipping 10 lines ...
Jan 29 04:44:26.143: INFO: Pod "azuredisk-volume-tester-l8rwg": Phase="Pending", Reason="", readiness=false. Elapsed: 32.117361759s
Jan 29 04:44:28.142: INFO: Pod "azuredisk-volume-tester-l8rwg": Phase="Pending", Reason="", readiness=false. Elapsed: 34.116609306s
Jan 29 04:44:30.140: INFO: Pod "azuredisk-volume-tester-l8rwg": Phase="Pending", Reason="", readiness=false. Elapsed: 36.115077292s
Jan 29 04:44:32.142: INFO: Pod "azuredisk-volume-tester-l8rwg": Phase="Pending", Reason="", readiness=false.
Elapsed: 38.117137017s
Jan 29 04:44:34.142: INFO: Pod "azuredisk-volume-tester-l8rwg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.116290865s
STEP: Saw pod success 01/29/23 04:44:34.142
Jan 29 04:44:34.142: INFO: Pod "azuredisk-volume-tester-l8rwg" satisfied condition "Succeeded or Failed"
Jan 29 04:44:34.142: INFO: deleting Pod "azuredisk-1598"/"azuredisk-volume-tester-l8rwg"
Jan 29 04:44:34.203: INFO: Pod azuredisk-volume-tester-l8rwg has the following logs: hello world
hello world
hello world
STEP: Deleting pod azuredisk-volume-tester-l8rwg in namespace azuredisk-1598 01/29/23 04:44:34.203
... skipping 62 lines ...
STEP: setting up the StorageClass 01/29/23 04:45:32.058
STEP: creating a StorageClass 01/29/23 04:45:32.059
STEP: setting up the PVC and PV 01/29/23 04:45:32.117
STEP: creating a PVC 01/29/23 04:45:32.117
STEP: setting up the pod 01/29/23 04:45:32.177
STEP: deploying the pod 01/29/23 04:45:32.177
STEP: checking that the pod's command exits with no error 01/29/23 04:45:32.238
Jan 29 04:45:32.238: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-sk97b" in namespace "azuredisk-3410" to be "Succeeded or Failed"
Jan 29 04:45:32.295: INFO: Pod "azuredisk-volume-tester-sk97b": Phase="Pending", Reason="", readiness=false. Elapsed: 57.237555ms
Jan 29 04:45:34.360: INFO: Pod "azuredisk-volume-tester-sk97b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122064844s
Jan 29 04:45:36.353: INFO: Pod "azuredisk-volume-tester-sk97b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114834952s
Jan 29 04:45:38.353: INFO: Pod "azuredisk-volume-tester-sk97b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115434853s
Jan 29 04:45:40.358: INFO: Pod "azuredisk-volume-tester-sk97b": Phase="Pending", Reason="", readiness=false.
Elapsed: 8.120690835s
Jan 29 04:45:42.354: INFO: Pod "azuredisk-volume-tester-sk97b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.115836573s
... skipping 10 lines ...
Jan 29 04:46:04.354: INFO: Pod "azuredisk-volume-tester-sk97b": Phase="Pending", Reason="", readiness=false. Elapsed: 32.116290338s
Jan 29 04:46:06.353: INFO: Pod "azuredisk-volume-tester-sk97b": Phase="Pending", Reason="", readiness=false. Elapsed: 34.115589202s
Jan 29 04:46:08.353: INFO: Pod "azuredisk-volume-tester-sk97b": Phase="Pending", Reason="", readiness=false. Elapsed: 36.115076225s
Jan 29 04:46:10.355: INFO: Pod "azuredisk-volume-tester-sk97b": Phase="Pending", Reason="", readiness=false. Elapsed: 38.116817197s
Jan 29 04:46:12.355: INFO: Pod "azuredisk-volume-tester-sk97b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.117479271s
STEP: Saw pod success 01/29/23 04:46:12.355
Jan 29 04:46:12.355: INFO: Pod "azuredisk-volume-tester-sk97b" satisfied condition "Succeeded or Failed"
Jan 29 04:46:12.355: INFO: deleting Pod "azuredisk-3410"/"azuredisk-volume-tester-sk97b"
Jan 29 04:46:12.453: INFO: Pod azuredisk-volume-tester-sk97b has the following logs: 100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 0.084489 seconds, 1.2GB/s
hello world
... skipping 59 lines ...
STEP: setting up the StorageClass 01/29/23 04:45:32.058
STEP: creating a StorageClass 01/29/23 04:45:32.059
STEP: setting up the PVC and PV 01/29/23 04:45:32.117
STEP: creating a PVC 01/29/23 04:45:32.117
STEP: setting up the pod 01/29/23 04:45:32.177
STEP: deploying the pod 01/29/23 04:45:32.177
STEP: checking that the pod's command exits with no error 01/29/23 04:45:32.238
Jan 29 04:45:32.238: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-sk97b" in namespace "azuredisk-3410" to be "Succeeded or Failed"
Jan 29 04:45:32.295: INFO: Pod "azuredisk-volume-tester-sk97b": Phase="Pending", Reason="", readiness=false. Elapsed: 57.237555ms
Jan 29 04:45:34.360: INFO: Pod "azuredisk-volume-tester-sk97b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122064844s
Jan 29 04:45:36.353: INFO: Pod "azuredisk-volume-tester-sk97b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114834952s
Jan 29 04:45:38.353: INFO: Pod "azuredisk-volume-tester-sk97b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115434853s
Jan 29 04:45:40.358: INFO: Pod "azuredisk-volume-tester-sk97b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120690835s
Jan 29 04:45:42.354: INFO: Pod "azuredisk-volume-tester-sk97b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.115836573s
... skipping 10 lines ...
Jan 29 04:46:04.354: INFO: Pod "azuredisk-volume-tester-sk97b": Phase="Pending", Reason="", readiness=false. Elapsed: 32.116290338s
Jan 29 04:46:06.353: INFO: Pod "azuredisk-volume-tester-sk97b": Phase="Pending", Reason="", readiness=false. Elapsed: 34.115589202s
Jan 29 04:46:08.353: INFO: Pod "azuredisk-volume-tester-sk97b": Phase="Pending", Reason="", readiness=false. Elapsed: 36.115076225s
Jan 29 04:46:10.355: INFO: Pod "azuredisk-volume-tester-sk97b": Phase="Pending", Reason="", readiness=false.
Elapsed: 38.116817197s
Jan 29 04:46:12.355: INFO: Pod "azuredisk-volume-tester-sk97b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.117479271s
STEP: Saw pod success 01/29/23 04:46:12.355
Jan 29 04:46:12.355: INFO: Pod "azuredisk-volume-tester-sk97b" satisfied condition "Succeeded or Failed"
Jan 29 04:46:12.355: INFO: deleting Pod "azuredisk-3410"/"azuredisk-volume-tester-sk97b"
Jan 29 04:46:12.453: INFO: Pod azuredisk-volume-tester-sk97b has the following logs: 100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 0.084489 seconds, 1.2GB/s
hello world
... skipping 52 lines ...
Jan 29 04:47:35.133: INFO: >>> kubeConfig: /root/tmp3263498711/kubeconfig/kubeconfig.westus2.json
STEP: setting up the StorageClass 01/29/23 04:47:35.135
STEP: creating a StorageClass 01/29/23 04:47:35.135
STEP: setting up the PVC and PV 01/29/23 04:47:35.194
STEP: creating a PVC 01/29/23 04:47:35.195
STEP: deploying the pod 01/29/23 04:47:35.255
STEP: checking that the pod's command exits with no error 01/29/23 04:47:35.315
Jan 29 04:47:35.315: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-gh6qm" in namespace "azuredisk-8582" to be "Succeeded or Failed"
Jan 29 04:47:35.373: INFO: Pod "azuredisk-volume-tester-gh6qm": Phase="Pending", Reason="", readiness=false. Elapsed: 57.700072ms
Jan 29 04:47:37.433: INFO: Pod "azuredisk-volume-tester-gh6qm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118277887s
Jan 29 04:47:39.433: INFO: Pod "azuredisk-volume-tester-gh6qm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117887913s
Jan 29 04:47:41.435: INFO: Pod "azuredisk-volume-tester-gh6qm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11962384s
Jan 29 04:47:43.432: INFO: Pod "azuredisk-volume-tester-gh6qm": Phase="Pending", Reason="", readiness=false.
Elapsed: 8.11673392s
Jan 29 04:47:45.434: INFO: Pod "azuredisk-volume-tester-gh6qm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.118917466s
... skipping 2 lines ...
Jan 29 04:47:51.432: INFO: Pod "azuredisk-volume-tester-gh6qm": Phase="Pending", Reason="", readiness=false. Elapsed: 16.116848991s
Jan 29 04:47:53.435: INFO: Pod "azuredisk-volume-tester-gh6qm": Phase="Pending", Reason="", readiness=false. Elapsed: 18.120244607s
Jan 29 04:47:55.433: INFO: Pod "azuredisk-volume-tester-gh6qm": Phase="Pending", Reason="", readiness=false. Elapsed: 20.118013264s
Jan 29 04:47:57.434: INFO: Pod "azuredisk-volume-tester-gh6qm": Phase="Pending", Reason="", readiness=false. Elapsed: 22.119216204s
Jan 29 04:47:59.433: INFO: Pod "azuredisk-volume-tester-gh6qm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.118001331s
STEP: Saw pod success 01/29/23 04:47:59.433
Jan 29 04:47:59.433: INFO: Pod "azuredisk-volume-tester-gh6qm" satisfied condition "Succeeded or Failed"
STEP: Checking Prow test resource group 01/29/23 04:47:59.433
2023/01/29 04:47:59 Running in Prow, converting AZURE_CREDENTIALS to AZURE_CREDENTIAL_FILE
2023/01/29 04:47:59 Reading credentials file /etc/azure-cred/credentials
STEP: Prow test resource group: kubetest-oomcbqvi 01/29/23 04:47:59.434
STEP: Creating external resource group: azuredisk-csi-driver-test-1aef3e75-9f90-11ed-b28e-027493caca65 01/29/23 04:47:59.434
STEP: creating volume snapshot class with external rg azuredisk-csi-driver-test-1aef3e75-9f90-11ed-b28e-027493caca65 01/29/23 04:48:00.743
... skipping 5 lines ...
STEP: setting up the StorageClass 01/29/23 04:48:15.928
STEP: creating a StorageClass 01/29/23 04:48:15.928
STEP: setting up the PVC and PV 01/29/23 04:48:15.988
STEP: creating a PVC 01/29/23 04:48:15.988
STEP: setting up the pod 01/29/23 04:48:16.059
STEP: deploying a pod with a volume restored from the snapshot 01/29/23 04:48:16.059
STEP: checking that the pod's command exits with no error 01/29/23 04:48:16.119
Jan 29 04:48:16.119: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-gkwcs" in namespace "azuredisk-8582" to be "Succeeded or Failed"
Jan 29 04:48:16.176: INFO: Pod "azuredisk-volume-tester-gkwcs": Phase="Pending", Reason="", readiness=false. Elapsed: 57.300992ms
Jan 29 04:48:18.235: INFO: Pod "azuredisk-volume-tester-gkwcs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116721034s
Jan 29 04:48:20.235: INFO: Pod "azuredisk-volume-tester-gkwcs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116185232s
Jan 29 04:48:22.234: INFO: Pod "azuredisk-volume-tester-gkwcs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115594077s
Jan 29 04:48:24.234: INFO: Pod "azuredisk-volume-tester-gkwcs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.115394829s
Jan 29 04:48:26.236: INFO: Pod "azuredisk-volume-tester-gkwcs": Phase="Pending", Reason="", readiness=false. Elapsed: 10.117040382s
... skipping 2 lines ...
Jan 29 04:48:32.235: INFO: Pod "azuredisk-volume-tester-gkwcs": Phase="Pending", Reason="", readiness=false. Elapsed: 16.115860516s
Jan 29 04:48:34.236: INFO: Pod "azuredisk-volume-tester-gkwcs": Phase="Pending", Reason="", readiness=false. Elapsed: 18.117051632s
Jan 29 04:48:36.236: INFO: Pod "azuredisk-volume-tester-gkwcs": Phase="Pending", Reason="", readiness=false.
Elapsed: 20.117097672s
Jan 29 04:48:38.235: INFO: Pod "azuredisk-volume-tester-gkwcs": Phase="Pending", Reason="", readiness=false. Elapsed: 22.115947143s
Jan 29 04:48:40.236: INFO: Pod "azuredisk-volume-tester-gkwcs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.117090258s
STEP: Saw pod success 01/29/23 04:48:40.236
Jan 29 04:48:40.236: INFO: Pod "azuredisk-volume-tester-gkwcs" satisfied condition "Succeeded or Failed"
Jan 29 04:48:40.236: INFO: deleting Pod "azuredisk-8582"/"azuredisk-volume-tester-gkwcs"
Jan 29 04:48:40.351: INFO: Pod azuredisk-volume-tester-gkwcs has the following logs: hello world
STEP: Deleting pod azuredisk-volume-tester-gkwcs in namespace azuredisk-8582 01/29/23 04:48:40.351
STEP: validating provisioned PV 01/29/23 04:48:40.471
STEP: checking the PV 01/29/23 04:48:40.529
... skipping 48 lines ...
Jan 29 04:47:35.133: INFO: >>> kubeConfig: /root/tmp3263498711/kubeconfig/kubeconfig.westus2.json
STEP: setting up the StorageClass 01/29/23 04:47:35.135
STEP: creating a StorageClass 01/29/23 04:47:35.135
STEP: setting up the PVC and PV 01/29/23 04:47:35.194
STEP: creating a PVC 01/29/23 04:47:35.195
STEP: deploying the pod 01/29/23 04:47:35.255
STEP: checking that the pod's command exits with no error 01/29/23 04:47:35.315
Jan 29 04:47:35.315: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-gh6qm" in namespace "azuredisk-8582" to be "Succeeded or Failed"
Jan 29 04:47:35.373: INFO: Pod "azuredisk-volume-tester-gh6qm": Phase="Pending", Reason="", readiness=false. Elapsed: 57.700072ms
Jan 29 04:47:37.433: INFO: Pod "azuredisk-volume-tester-gh6qm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118277887s
Jan 29 04:47:39.433: INFO: Pod "azuredisk-volume-tester-gh6qm": Phase="Pending", Reason="", readiness=false.
Elapsed: 4.117887913s Jan 29 04:47:41.435: INFO: Pod "azuredisk-volume-tester-gh6qm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11962384s Jan 29 04:47:43.432: INFO: Pod "azuredisk-volume-tester-gh6qm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11673392s Jan 29 04:47:45.434: INFO: Pod "azuredisk-volume-tester-gh6qm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.118917466s ... skipping 2 lines ... Jan 29 04:47:51.432: INFO: Pod "azuredisk-volume-tester-gh6qm": Phase="Pending", Reason="", readiness=false. Elapsed: 16.116848991s Jan 29 04:47:53.435: INFO: Pod "azuredisk-volume-tester-gh6qm": Phase="Pending", Reason="", readiness=false. Elapsed: 18.120244607s Jan 29 04:47:55.433: INFO: Pod "azuredisk-volume-tester-gh6qm": Phase="Pending", Reason="", readiness=false. Elapsed: 20.118013264s Jan 29 04:47:57.434: INFO: Pod "azuredisk-volume-tester-gh6qm": Phase="Pending", Reason="", readiness=false. Elapsed: 22.119216204s Jan 29 04:47:59.433: INFO: Pod "azuredisk-volume-tester-gh6qm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.118001331s [1mSTEP:[0m Saw pod success [38;5;243m01/29/23 04:47:59.433[0m Jan 29 04:47:59.433: INFO: Pod "azuredisk-volume-tester-gh6qm" satisfied condition "Succeeded or Failed" [1mSTEP:[0m Checking Prow test resource group [38;5;243m01/29/23 04:47:59.433[0m [1mSTEP:[0m Prow test resource group: kubetest-oomcbqvi [38;5;243m01/29/23 04:47:59.434[0m [1mSTEP:[0m Creating external resource group: azuredisk-csi-driver-test-1aef3e75-9f90-11ed-b28e-027493caca65 [38;5;243m01/29/23 04:47:59.434[0m [1mSTEP:[0m creating volume snapshot class with external rg azuredisk-csi-driver-test-1aef3e75-9f90-11ed-b28e-027493caca65 [38;5;243m01/29/23 04:48:00.743[0m [1mSTEP:[0m setting up the VolumeSnapshotClass [38;5;243m01/29/23 04:48:00.743[0m [1mSTEP:[0m creating a VolumeSnapshotClass [38;5;243m01/29/23 04:48:00.743[0m ... skipping 3 lines ... 
... skipping 47 lines ...
Jan 29 04:50:39.458: INFO: >>> kubeConfig: /root/tmp3263498711/kubeconfig/kubeconfig.westus2.json
STEP: setting up the StorageClass 01/29/23 04:50:39.46
STEP: creating a StorageClass 01/29/23 04:50:39.46
STEP: setting up the PVC and PV 01/29/23 04:50:39.52
STEP: creating a PVC 01/29/23 04:50:39.52
STEP: deploying the pod 01/29/23 04:50:39.581
STEP: checking that the pod's command exits with no error 01/29/23 04:50:39.643
Jan 29 04:50:39.643: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-57vpc" in namespace "azuredisk-7726" to be "Succeeded or Failed"
Jan 29 04:50:39.700: INFO: Pod "azuredisk-volume-tester-57vpc": Phase="Pending", Reason="", readiness=false. Elapsed: 57.685739ms
Jan 29 04:50:41.759: INFO: Pod "azuredisk-volume-tester-57vpc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116662535s
Jan 29 04:50:43.760: INFO: Pod "azuredisk-volume-tester-57vpc": Phase="Pending", Reason="", readiness=false.
Elapsed: 4.117029168s
Jan 29 04:50:45.763: INFO: Pod "azuredisk-volume-tester-57vpc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119990745s
Jan 29 04:50:47.764: INFO: Pod "azuredisk-volume-tester-57vpc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.121033626s
Jan 29 04:50:49.759: INFO: Pod "azuredisk-volume-tester-57vpc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.115946188s
... skipping 2 lines ...
Jan 29 04:50:55.760: INFO: Pod "azuredisk-volume-tester-57vpc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.117552933s
Jan 29 04:50:57.760: INFO: Pod "azuredisk-volume-tester-57vpc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.117048523s
Jan 29 04:50:59.761: INFO: Pod "azuredisk-volume-tester-57vpc": Phase="Pending", Reason="", readiness=false. Elapsed: 20.117879067s
Jan 29 04:51:01.760: INFO: Pod "azuredisk-volume-tester-57vpc": Phase="Pending", Reason="", readiness=false. Elapsed: 22.117534435s
Jan 29 04:51:03.760: INFO: Pod "azuredisk-volume-tester-57vpc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.117628424s
STEP: Saw pod success 01/29/23 04:51:03.76
Jan 29 04:51:03.761: INFO: Pod "azuredisk-volume-tester-57vpc" satisfied condition "Succeeded or Failed"
STEP: Checking Prow test resource group 01/29/23 04:51:03.761
2023/01/29 04:51:03 Running in Prow, converting AZURE_CREDENTIALS to AZURE_CREDENTIAL_FILE
2023/01/29 04:51:03 Reading credentials file /etc/azure-cred/credentials
STEP: Prow test resource group: kubetest-oomcbqvi 01/29/23 04:51:03.762
STEP: Creating external resource group: azuredisk-csi-driver-test-88cd67f4-9f90-11ed-b28e-027493caca65 01/29/23 04:51:03.762
STEP: creating volume snapshot class with external rg azuredisk-csi-driver-test-88cd67f4-9f90-11ed-b28e-027493caca65 01/29/23 04:51:04.6
STEP: setting up the VolumeSnapshotClass 01/29/23 04:51:04.6
STEP: creating a VolumeSnapshotClass 01/29/23 04:51:04.6
... skipping 13 lines ...
STEP: creating a StorageClass 01/29/23 04:51:23.964
STEP: setting up the PVC and PV 01/29/23 04:51:24.024
STEP: creating a PVC 01/29/23 04:51:24.024
STEP: setting up the pod 01/29/23 04:51:24.085
STEP: Set pod anti-affinity to make sure two pods are scheduled on different nodes 01/29/23 04:51:24.085
STEP: deploying a pod with a volume restored from the snapshot 01/29/23 04:51:24.086
STEP: checking that the pod's command exits with no error 01/29/23 04:51:24.146
Jan 29 04:51:24.146: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-mzpx2" in namespace "azuredisk-7726" to be "Succeeded or Failed"
Jan 29 04:51:24.203: INFO: Pod "azuredisk-volume-tester-mzpx2": Phase="Pending", Reason="", readiness=false. Elapsed: 57.434661ms
Jan 29 04:51:26.262: INFO: Pod "azuredisk-volume-tester-mzpx2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115828214s
Jan 29 04:51:28.264: INFO: Pod "azuredisk-volume-tester-mzpx2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118179418s
Jan 29 04:51:30.263: INFO: Pod "azuredisk-volume-tester-mzpx2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117518439s
Jan 29 04:51:32.262: INFO: Pod "azuredisk-volume-tester-mzpx2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116487508s
Jan 29 04:51:34.263: INFO: Pod "azuredisk-volume-tester-mzpx2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.117201641s
Jan 29 04:51:36.264: INFO: Pod "azuredisk-volume-tester-mzpx2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.118030854s
Jan 29 04:51:38.264: INFO: Pod "azuredisk-volume-tester-mzpx2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.118454763s
Jan 29 04:51:40.263: INFO: Pod "azuredisk-volume-tester-mzpx2": Phase="Pending", Reason="", readiness=false.
Elapsed: 16.116769008s Jan 29 04:51:42.266: INFO: Pod "azuredisk-volume-tester-mzpx2": Phase="Pending", Reason="", readiness=false. Elapsed: 18.120186743s Jan 29 04:51:44.261: INFO: Pod "azuredisk-volume-tester-mzpx2": Phase="Pending", Reason="", readiness=false. Elapsed: 20.115664383s Jan 29 04:51:46.262: INFO: Pod "azuredisk-volume-tester-mzpx2": Phase="Pending", Reason="", readiness=false. Elapsed: 22.116445128s Jan 29 04:51:48.263: INFO: Pod "azuredisk-volume-tester-mzpx2": Phase="Failed", Reason="", readiness=false. Elapsed: 24.117141565s Jan 29 04:51:48.263: INFO: Unexpected error: <*fmt.wrapError | 0xc000291760>: { msg: "error while waiting for pod azuredisk-7726/azuredisk-volume-tester-mzpx2 to be Succeeded or Failed: pod \"azuredisk-volume-tester-mzpx2\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 04:51:27 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 04:51:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 04:51:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 04:51:27 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.32 PodIP:10.248.0.46 PodIPs:[{IP:10.248.0.46}] StartTime:2023-01-29 04:51:27 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-29 04:51:46 +0000 UTC,FinishedAt:2023-01-29 04:51:46 +0000 
UTC,ContainerID:containerd://08a49db11f66821e8c7fcbe0cf2f9ed0b89a76c481fd76b1466f1b82c95e2e40,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 ContainerID:containerd://08a49db11f66821e8c7fcbe0cf2f9ed0b89a76c481fd76b1466f1b82c95e2e40 Started:0xc00051ae70}] QOSClass:BestEffort EphemeralContainerStatuses:[]}", err: <*errors.errorString | 0xc0004314c0>{ s: "pod \"azuredisk-volume-tester-mzpx2\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 04:51:27 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 04:51:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 04:51:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 04:51:27 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.32 PodIP:10.248.0.46 PodIPs:[{IP:10.248.0.46}] StartTime:2023-01-29 04:51:27 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-29 04:51:46 +0000 UTC,FinishedAt:2023-01-29 04:51:46 +0000 UTC,ContainerID:containerd://08a49db11f66821e8c7fcbe0cf2f9ed0b89a76c481fd76b1466f1b82c95e2e40,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 
ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 ContainerID:containerd://08a49db11f66821e8c7fcbe0cf2f9ed0b89a76c481fd76b1466f1b82c95e2e40 Started:0xc00051ae70}] QOSClass:BestEffort EphemeralContainerStatuses:[]}", }, } Jan 29 04:51:48.263: FAIL: error while waiting for pod azuredisk-7726/azuredisk-volume-tester-mzpx2 to be Succeeded or Failed: pod "azuredisk-volume-tester-mzpx2" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 04:51:27 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 04:51:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 04:51:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 04:51:27 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.32 PodIP:10.248.0.46 PodIPs:[{IP:10.248.0.46}] StartTime:2023-01-29 04:51:27 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-29 04:51:46 +0000 UTC,FinishedAt:2023-01-29 04:51:46 +0000 UTC,ContainerID:containerd://08a49db11f66821e8c7fcbe0cf2f9ed0b89a76c481fd76b1466f1b82c95e2e40,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
ContainerID:containerd://08a49db11f66821e8c7fcbe0cf2f9ed0b89a76c481fd76b1466f1b82c95e2e40 Started:0xc00051ae70}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
Full Stack Trace
sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*TestPod).WaitForSuccess(0x2253857?)
	/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:823 +0x5d
sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*DynamicallyProvisionedVolumeSnapshotTest).Run(0xc000c21d78, {0x270dda0, 0xc000c7b520}, {0x26f8fa0, 0xc000cfaa00}, 0xc000be4580?)
	/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/dynamically_provisioned_volume_snapshot_tester.go:142 +0x1358
... skipping 42 lines ...
Jan 29 04:53:56.411: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-7726 to be removed
Jan 29 04:53:56.467: INFO: Claim "azuredisk-7726" in namespace "pvc-wx9ff" doesn't exist in the system
Jan 29 04:53:56.467: INFO: deleting StorageClass azuredisk-7726-disk.csi.azure.com-dynamic-sc-wpsbh
STEP: dump namespace information after failure 01/29/23 04:53:56.527
STEP: Destroying namespace "azuredisk-7726" for this suite. 01/29/23 04:53:56.527
------------------------------
• [FAILED] [198.015 seconds]
Dynamic Provisioning
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:41
  [multi-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:48
    [It] should create a pod, write to its pv, take a volume snapshot, overwrite data in original pv, create another pod from the snapshot, and read unaltered original data from original pv[disk.csi.azure.com]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:747
... skipping 7 lines ...
<< End Captured GinkgoWriter Output
Jan 29 04:51:48.263: error while waiting for pod azuredisk-7726/azuredisk-volume-tester-mzpx2 to be Succeeded or Failed: pod "azuredisk-volume-tester-mzpx2" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 04:51:27 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 04:51:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 04:51:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 04:51:27 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.32 PodIP:10.248.0.46 PodIPs:[{IP:10.248.0.46}] StartTime:2023-01-29 04:51:27 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-29 04:51:46 +0000 UTC,FinishedAt:2023-01-29 04:51:46 +0000 UTC,ContainerID:containerd://08a49db11f66821e8c7fcbe0cf2f9ed0b89a76c481fd76b1466f1b82c95e2e40,}}
LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 ContainerID:containerd://08a49db11f66821e8c7fcbe0cf2f9ed0b89a76c481fd76b1466f1b82c95e2e40 Started:0xc00051ae70}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
In [It] at: /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:823
There were additional failures detected after the initial failure:
[PANICKED] Test Panicked
In [DeferCleanup (Each)] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
	/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:274 +0x5c
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0000303c0)
	/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:271 +0x179
... skipping 25 lines ...
STEP: creating a PVC 01/29/23 04:53:57.657
STEP: setting up the StorageClass 01/29/23 04:53:57.716
STEP: creating a StorageClass 01/29/23 04:53:57.716
STEP: setting up the PVC and PV 01/29/23 04:53:57.774
STEP: creating a PVC 01/29/23 04:53:57.774
STEP: deploying the pod 01/29/23 04:53:57.831
STEP: checking that the pod's command exits with no error 01/29/23 04:53:57.89
Jan 29 04:53:57.890: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-t7ctc" in namespace "azuredisk-3086" to be "Succeeded or Failed"
Jan 29 04:53:57.946: INFO: Pod "azuredisk-volume-tester-t7ctc": Phase="Pending", Reason="", readiness=false. Elapsed: 56.323577ms
Jan 29 04:54:00.004: INFO: Pod "azuredisk-volume-tester-t7ctc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113822997s
Jan 29 04:54:02.012: INFO: Pod "azuredisk-volume-tester-t7ctc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122478458s
Jan 29 04:54:04.005: INFO: Pod "azuredisk-volume-tester-t7ctc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115478042s
Jan 29 04:54:06.006: INFO: Pod "azuredisk-volume-tester-t7ctc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.115824803s
Jan 29 04:54:08.003: INFO: Pod "azuredisk-volume-tester-t7ctc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.113295288s
... skipping 9 lines ...
Jan 29 04:54:28.003: INFO: Pod "azuredisk-volume-tester-t7ctc": Phase="Pending", Reason="", readiness=false. Elapsed: 30.113223107s
Jan 29 04:54:30.012: INFO: Pod "azuredisk-volume-tester-t7ctc": Phase="Pending", Reason="", readiness=false. Elapsed: 32.122139765s
Jan 29 04:54:32.004: INFO: Pod "azuredisk-volume-tester-t7ctc": Phase="Pending", Reason="", readiness=false. Elapsed: 34.113968517s
Jan 29 04:54:34.005: INFO: Pod "azuredisk-volume-tester-t7ctc": Phase="Pending", Reason="", readiness=false. Elapsed: 36.114835379s
Jan 29 04:54:36.009: INFO: Pod "azuredisk-volume-tester-t7ctc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.118968036s
STEP: Saw pod success 01/29/23 04:54:36.009
Jan 29 04:54:36.009: INFO: Pod "azuredisk-volume-tester-t7ctc" satisfied condition "Succeeded or Failed"
Jan 29 04:54:36.009: INFO: deleting Pod "azuredisk-3086"/"azuredisk-volume-tester-t7ctc"
Jan 29 04:54:36.072: INFO: Pod azuredisk-volume-tester-t7ctc has the following logs: hello world
STEP: Deleting pod azuredisk-volume-tester-t7ctc in namespace azuredisk-3086 01/29/23 04:54:36.072
STEP: validating provisioned PV 01/29/23 04:54:36.2
STEP: checking the PV 01/29/23 04:54:36.256
... skipping 70 lines ...
STEP: creating a PVC 01/29/23 04:53:57.657
STEP: setting up the StorageClass 01/29/23 04:53:57.716
STEP: creating a StorageClass 01/29/23 04:53:57.716
STEP: setting up the PVC and PV 01/29/23 04:53:57.774
STEP: creating a PVC 01/29/23 04:53:57.774
STEP: deploying the pod 01/29/23 04:53:57.831
STEP: checking that the pod's command exits with no error 01/29/23 04:53:57.89
Jan 29 04:53:57.890: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-t7ctc" in namespace "azuredisk-3086" to be "Succeeded or Failed"
Jan 29 04:53:57.946: INFO: Pod "azuredisk-volume-tester-t7ctc": Phase="Pending", Reason="", readiness=false. Elapsed: 56.323577ms
Jan 29 04:54:00.004: INFO: Pod "azuredisk-volume-tester-t7ctc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113822997s
Jan 29 04:54:02.012: INFO: Pod "azuredisk-volume-tester-t7ctc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122478458s
Jan 29 04:54:04.005: INFO: Pod "azuredisk-volume-tester-t7ctc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115478042s
Jan 29 04:54:06.006: INFO: Pod "azuredisk-volume-tester-t7ctc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.115824803s
Jan 29 04:54:08.003: INFO: Pod "azuredisk-volume-tester-t7ctc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.113295288s
... skipping 9 lines ...
Jan 29 04:54:28.003: INFO: Pod "azuredisk-volume-tester-t7ctc": Phase="Pending", Reason="", readiness=false. Elapsed: 30.113223107s
Jan 29 04:54:30.012: INFO: Pod "azuredisk-volume-tester-t7ctc": Phase="Pending", Reason="", readiness=false. Elapsed: 32.122139765s
Jan 29 04:54:32.004: INFO: Pod "azuredisk-volume-tester-t7ctc": Phase="Pending", Reason="", readiness=false. Elapsed: 34.113968517s
Jan 29 04:54:34.005: INFO: Pod "azuredisk-volume-tester-t7ctc": Phase="Pending", Reason="", readiness=false. Elapsed: 36.114835379s
Jan 29 04:54:36.009: INFO: Pod "azuredisk-volume-tester-t7ctc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.118968036s
STEP: Saw pod success 01/29/23 04:54:36.009
Jan 29 04:54:36.009: INFO: Pod "azuredisk-volume-tester-t7ctc" satisfied condition "Succeeded or Failed"
Jan 29 04:54:36.009: INFO: deleting Pod "azuredisk-3086"/"azuredisk-volume-tester-t7ctc"
Jan 29 04:54:36.072: INFO: Pod azuredisk-volume-tester-t7ctc has the following logs: hello world
STEP: Deleting pod azuredisk-volume-tester-t7ctc in namespace azuredisk-3086 01/29/23 04:54:36.072
STEP: validating provisioned PV 01/29/23 04:54:36.2
STEP: checking the PV 01/29/23 04:54:36.256
... skipping 1012 lines ...
STEP: setting up the StorageClass 01/29/23 05:09:19.818
STEP: creating a StorageClass 01/29/23 05:09:19.818
STEP: setting up the PVC and PV 01/29/23 05:09:19.878
STEP: creating a PVC 01/29/23 05:09:19.879
STEP: setting up the pod 01/29/23 05:09:19.94
STEP: deploying the pod 01/29/23 05:09:19.94
STEP: checking that the pod's command exits with no error 01/29/23 05:09:20.002
Jan 29 05:09:20.002: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-crf9l" in namespace "azuredisk-1092" to be "Succeeded or Failed"
Jan 29 05:09:20.060: INFO: Pod "azuredisk-volume-tester-crf9l": Phase="Pending", Reason="", readiness=false. Elapsed: 57.800412ms
Jan 29 05:09:22.119: INFO: Pod "azuredisk-volume-tester-crf9l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116774457s
Jan 29 05:09:24.120: INFO: Pod "azuredisk-volume-tester-crf9l": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118311569s
Jan 29 05:09:26.118: INFO: Pod "azuredisk-volume-tester-crf9l": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116350059s
Jan 29 05:09:28.118: INFO: Pod "azuredisk-volume-tester-crf9l": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116375615s
Jan 29 05:09:30.120: INFO: Pod "azuredisk-volume-tester-crf9l": Phase="Pending", Reason="", readiness=false. Elapsed: 10.11767902s
... skipping 2 lines ...
Jan 29 05:09:36.119: INFO: Pod "azuredisk-volume-tester-crf9l": Phase="Pending", Reason="", readiness=false. Elapsed: 16.117462075s
Jan 29 05:09:38.120: INFO: Pod "azuredisk-volume-tester-crf9l": Phase="Pending", Reason="", readiness=false. Elapsed: 18.117716646s
Jan 29 05:09:40.119: INFO: Pod "azuredisk-volume-tester-crf9l": Phase="Pending", Reason="", readiness=false. Elapsed: 20.116593646s
Jan 29 05:09:42.120: INFO: Pod "azuredisk-volume-tester-crf9l": Phase="Pending", Reason="", readiness=false. Elapsed: 22.118241946s
Jan 29 05:09:44.119: INFO: Pod "azuredisk-volume-tester-crf9l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.116786415s
STEP: Saw pod success 01/29/23 05:09:44.119
Jan 29 05:09:44.119: INFO: Pod "azuredisk-volume-tester-crf9l" satisfied condition "Succeeded or Failed"
Jan 29 05:09:44.119: INFO: deleting Pod "azuredisk-1092"/"azuredisk-volume-tester-crf9l"
Jan 29 05:09:44.223: INFO: Pod azuredisk-volume-tester-crf9l has the following logs: hello world
STEP: Deleting pod azuredisk-volume-tester-crf9l in namespace azuredisk-1092 01/29/23 05:09:44.223
STEP: validating provisioned PV 01/29/23 05:09:44.343
STEP: checking the PV 01/29/23 05:09:44.401
... skipping 33 lines ...
STEP: setting up the StorageClass 01/29/23 05:09:19.818
STEP: creating a StorageClass 01/29/23 05:09:19.818
STEP: setting up the PVC and PV 01/29/23 05:09:19.878
STEP: creating a PVC 01/29/23 05:09:19.879
STEP: setting up the pod 01/29/23 05:09:19.94
STEP: deploying the pod 01/29/23 05:09:19.94
STEP: checking that the pod's command exits with no error 01/29/23 05:09:20.002
Jan 29 05:09:20.002: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-crf9l" in namespace "azuredisk-1092" to be "Succeeded or Failed"
Jan 29 05:09:20.060: INFO: Pod "azuredisk-volume-tester-crf9l": Phase="Pending", Reason="", readiness=false. Elapsed: 57.800412ms
Jan 29 05:09:22.119: INFO: Pod "azuredisk-volume-tester-crf9l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116774457s
Jan 29 05:09:24.120: INFO: Pod "azuredisk-volume-tester-crf9l": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118311569s
Jan 29 05:09:26.118: INFO: Pod "azuredisk-volume-tester-crf9l": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116350059s
Jan 29 05:09:28.118: INFO: Pod "azuredisk-volume-tester-crf9l": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116375615s
Jan 29 05:09:30.120: INFO: Pod "azuredisk-volume-tester-crf9l": Phase="Pending", Reason="", readiness=false. Elapsed: 10.11767902s
... skipping 2 lines ...
Jan 29 05:09:36.119: INFO: Pod "azuredisk-volume-tester-crf9l": Phase="Pending", Reason="", readiness=false. Elapsed: 16.117462075s
Jan 29 05:09:38.120: INFO: Pod "azuredisk-volume-tester-crf9l": Phase="Pending", Reason="", readiness=false. Elapsed: 18.117716646s
Jan 29 05:09:40.119: INFO: Pod "azuredisk-volume-tester-crf9l": Phase="Pending", Reason="", readiness=false. Elapsed: 20.116593646s
Jan 29 05:09:42.120: INFO: Pod "azuredisk-volume-tester-crf9l": Phase="Pending", Reason="", readiness=false. Elapsed: 22.118241946s
Jan 29 05:09:44.119: INFO: Pod "azuredisk-volume-tester-crf9l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.116786415s
STEP: Saw pod success 01/29/23 05:09:44.119
Jan 29 05:09:44.119: INFO: Pod "azuredisk-volume-tester-crf9l" satisfied condition "Succeeded or Failed"
Jan 29 05:09:44.119: INFO: deleting Pod "azuredisk-1092"/"azuredisk-volume-tester-crf9l"
Jan 29 05:09:44.223: INFO: Pod azuredisk-volume-tester-crf9l has the following logs: hello world
STEP: Deleting pod azuredisk-volume-tester-crf9l in namespace azuredisk-1092 01/29/23 05:09:44.223
STEP: validating provisioned PV 01/29/23 05:09:44.343
STEP: checking the PV 01/29/23 05:09:44.401
... skipping 93 lines ...
Platform: linux/amd64
Topology Key: topology.disk.csi.azure.com/zone
Streaming logs below:
I0129 04:19:04.189757 1 azuredisk.go:175] driver userAgent: disk.csi.azure.com/v1.27.0-db7daf80cf6d95173fec925514d6a1d9169180df e2e-test
I0129 04:19:04.190398 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider
I0129 04:19:04.221972 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found
I0129 04:19:04.221999 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider
I0129 04:19:04.222009 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json
I0129 04:19:04.222248 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully
I0129 04:19:04.223206 1 azure_auth.go:253] Using AzurePublicCloud environment
I0129 04:19:04.223275 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token
I0129 04:19:04.223308 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000
... skipping 25 lines ...
I0129 04:19:04.224055 1 azure_blobclient.go:67] Azure BlobClient using API version: 2021-09-01
I0129 04:19:04.224084 1 azure_vmasclient.go:70] Azure AvailabilitySetsClient (read ops) using rate limit config: QPS=6, bucket=20
I0129 04:19:04.224132 1 azure_vmasclient.go:73] Azure AvailabilitySetsClient (write ops) using rate limit config: QPS=100, bucket=1000
I0129 04:19:04.224269 1 azure.go:1007] attach/detach disk operation rate limit QPS: 6.000000, Bucket: 10
I0129 04:19:04.224340 1 azuredisk.go:192] disable UseInstanceMetadata for controller
I0129 04:19:04.224350 1 azuredisk.go:204] cloud: AzurePublicCloud, location: westus2, rg: kubetest-oomcbqvi, VMType: vmss, PrimaryScaleSetName: k8s-agentpool-18521412-vmss, PrimaryAvailabilitySetName: , DisableAvailabilitySetNodes: false
I0129 04:19:04.227821 1 mount_linux.go:287] 'umount /tmp/kubelet-detect-safe-umount841171153' failed with: exit status 32, output: umount: /tmp/kubelet-detect-safe-umount841171153: must be superuser to unmount.
I0129 04:19:04.227863 1 mount_linux.go:289] Detected umount with unsafe 'not mounted' behavior
I0129 04:19:04.227941 1 driver.go:81] Enabling controller service capability: CREATE_DELETE_VOLUME
I0129 04:19:04.227952 1 driver.go:81] Enabling controller service capability: PUBLISH_UNPUBLISH_VOLUME
I0129 04:19:04.227958 1 driver.go:81] Enabling controller service capability: CREATE_DELETE_SNAPSHOT
I0129 04:19:04.227965 1 driver.go:81] Enabling controller service capability: CLONE_VOLUME
I0129 04:19:04.227971 1 driver.go:81] Enabling controller service capability: EXPAND_VOLUME
... skipping 68 lines ...
I0129 04:19:15.613284 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 24989
I0129 04:19:15.713411 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 32358
I0129 04:19:15.717252 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-ef358045-6f64-4b72-b800-1548dbc1ed9b. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-ef358045-6f64-4b72-b800-1548dbc1ed9b to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 04:19:15.717467 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-ef358045-6f64-4b72-b800-1548dbc1ed9b to node k8s-agentpool-18521412-vmss000000
I0129 04:19:15.717529 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-ef358045-6f64-4b72-b800-1548dbc1ed9b lun 0 to node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-ef358045-6f64-4b72-b800-1548dbc1ed9b:%!s(*provider.AttachDiskOptions=&{None pvc-ef358045-6f64-4b72-b800-1548dbc1ed9b false 0})]
I0129 04:19:15.717683 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-ef358045-6f64-4b72-b800-1548dbc1ed9b:%!s(*provider.AttachDiskOptions=&{None pvc-ef358045-6f64-4b72-b800-1548dbc1ed9b false 0})])
I0129 04:19:16.700848 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-ef358045-6f64-4b72-b800-1548dbc1ed9b:%!s(*provider.AttachDiskOptions=&{None pvc-ef358045-6f64-4b72-b800-1548dbc1ed9b false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 04:19:26.809610 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 04:19:26.809654 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 04:19:26.809680 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-ef358045-6f64-4b72-b800-1548dbc1ed9b attached to node k8s-agentpool-18521412-vmss000000.
I0129 04:19:26.809697 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-ef358045-6f64-4b72-b800-1548dbc1ed9b to node k8s-agentpool-18521412-vmss000000 successfully
I0129 04:19:26.809749 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=11.282561971 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-ef358045-6f64-4b72-b800-1548dbc1ed9b" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 04:19:26.809766 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
... skipping 18 lines ...
I0129 04:20:21.077532 1 controllerserver.go:319] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-ef358045-6f64-4b72-b800-1548dbc1ed9b) returned with <nil>
I0129 04:20:21.077590 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=5.16814867 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-ef358045-6f64-4b72-b800-1548dbc1ed9b" result_code="succeeded"
I0129 04:20:21.077612 1 utils.go:84] GRPC response: {}
I0129 04:20:26.605110 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume
I0129 04:20:26.605430 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.disk.csi.azure.com/zone":"westus2-1","topology.kubernetes.io/zone":"westus2-1"}},{"segments":{"topology.disk.csi.azure.com/zone":"westus2-2","topology.kubernetes.io/zone":"westus2-2"}}],"requisite":[{"segments":{"topology.disk.csi.azure.com/zone":"westus2-1","topology.kubernetes.io/zone":"westus2-1"}},{"segments":{"topology.disk.csi.azure.com/zone":"westus2-2","topology.kubernetes.io/zone":"westus2-2"}}]},"capacity_range":{"required_bytes":10737418240},"name":"pvc-1770c1cd-e82c-489c-b138-c13e3cc72aa2","parameters":{"csi.storage.k8s.io/pv/name":"pvc-1770c1cd-e82c-489c-b138-c13e3cc72aa2","csi.storage.k8s.io/pvc/name":"pvc-qf5gn","csi.storage.k8s.io/pvc/namespace":"azuredisk-2540","enableAsyncAttach":"false","networkAccessPolicy":"DenyAll","skuName":"Standard_LRS","userAgent":"azuredisk-e2e-test"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]}
I0129 04:20:26.606771 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider
I0129 04:20:26.613082 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found
I0129 04:20:26.613110 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider
I0129 04:20:26.613119 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json
I0129 04:20:26.613418 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully
I0129 04:20:26.613936 1 azure_auth.go:253] Using AzurePublicCloud environment
I0129 04:20:26.614086 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token
I0129 04:20:26.614174 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000
... skipping 37 lines ...
I0129 04:20:31.039330 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-18521412-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-1770c1cd-e82c-489c-b138-c13e3cc72aa2","csi.storage.k8s.io/pvc/name":"pvc-qf5gn","csi.storage.k8s.io/pvc/namespace":"azuredisk-2540","enableAsyncAttach":"false","enableasyncattach":"false","networkAccessPolicy":"DenyAll","requestedsizegib":"10","skuName":"Standard_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com","userAgent":"azuredisk-e2e-test"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-1770c1cd-e82c-489c-b138-c13e3cc72aa2"}
I0129 04:20:31.059966 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1218
I0129 04:20:31.060456 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-1770c1cd-e82c-489c-b138-c13e3cc72aa2. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-1770c1cd-e82c-489c-b138-c13e3cc72aa2 to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 04:20:31.060492 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-1770c1cd-e82c-489c-b138-c13e3cc72aa2 to node k8s-agentpool-18521412-vmss000000
I0129 04:20:31.060558 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-1770c1cd-e82c-489c-b138-c13e3cc72aa2 lun 0 to node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-1770c1cd-e82c-489c-b138-c13e3cc72aa2:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-1770c1cd-e82c-489c-b138-c13e3cc72aa2 false 0})]
I0129 04:20:31.060635 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-1770c1cd-e82c-489c-b138-c13e3cc72aa2:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-1770c1cd-e82c-489c-b138-c13e3cc72aa2 false 0})])
I0129 04:20:31.189089 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-1770c1cd-e82c-489c-b138-c13e3cc72aa2:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-1770c1cd-e82c-489c-b138-c13e3cc72aa2 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 04:20:41.321091 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 04:20:41.321130 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 04:20:41.321153 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-1770c1cd-e82c-489c-b138-c13e3cc72aa2 attached to node k8s-agentpool-18521412-vmss000000.
I0129 04:20:41.321170 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-1770c1cd-e82c-489c-b138-c13e3cc72aa2 to node k8s-agentpool-18521412-vmss000000 successfully
I0129 04:20:41.321245 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.260786273 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-1770c1cd-e82c-489c-b138-c13e3cc72aa2" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 04:20:41.321296 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
... skipping 26 lines ...
I0129 04:21:27.449713 1 controllerserver.go:319] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-1770c1cd-e82c-489c-b138-c13e3cc72aa2) returned with <nil>
I0129 04:21:27.449754 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=5.160947452 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-1770c1cd-e82c-489c-b138-c13e3cc72aa2" result_code="succeeded"
I0129 04:21:27.449773 1 utils.go:84] GRPC response: {}
I0129 04:21:33.075445 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume
I0129 04:21:33.075685 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.disk.csi.azure.com/zone":"westus2-1","topology.kubernetes.io/zone":"westus2-1"}}],"requisite":[{"segments":{"topology.disk.csi.azure.com/zone":"westus2-1","topology.kubernetes.io/zone":"westus2-1"}}]},"capacity_range":{"required_bytes":1099511627776},"name":"pvc-c36b4fcd-05c3-4e31-81b7-35b61cdd27e1","parameters":{"csi.storage.k8s.io/pv/name":"pvc-c36b4fcd-05c3-4e31-81b7-35b61cdd27e1","csi.storage.k8s.io/pvc/name":"pvc-ncr6h","csi.storage.k8s.io/pvc/namespace":"azuredisk-4728","enableAsyncAttach":"false","enableBursting":"true","perfProfile":"Basic","skuName":"Premium_LRS","userAgent":"azuredisk-e2e-test"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]}
I0129 04:21:33.076475 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider
I0129 04:21:33.083956 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found
I0129 04:21:33.083978 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider
I0129 04:21:33.083987 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json
I0129 04:21:33.084030 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully
I0129 04:21:33.085341 1 azure_auth.go:253] Using AzurePublicCloud environment
I0129 04:21:33.085464 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token
I0129 04:21:33.085639 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000
... skipping 37 lines ...
I0129 04:21:36.139418 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-18521412-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-c36b4fcd-05c3-4e31-81b7-35b61cdd27e1","csi.storage.k8s.io/pvc/name":"pvc-ncr6h","csi.storage.k8s.io/pvc/namespace":"azuredisk-4728","enableAsyncAttach":"false","enableBursting":"true","enableasyncattach":"false","perfProfile":"Basic","requestedsizegib":"1024","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com","userAgent":"azuredisk-e2e-test"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-c36b4fcd-05c3-4e31-81b7-35b61cdd27e1"}
I0129 04:21:36.188294 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1338
I0129 04:21:36.188615 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-c36b4fcd-05c3-4e31-81b7-35b61cdd27e1. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-c36b4fcd-05c3-4e31-81b7-35b61cdd27e1 to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 04:21:36.188650 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-c36b4fcd-05c3-4e31-81b7-35b61cdd27e1 to node k8s-agentpool-18521412-vmss000000
I0129 04:21:36.188683 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-c36b4fcd-05c3-4e31-81b7-35b61cdd27e1 lun 0 to node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-c36b4fcd-05c3-4e31-81b7-35b61cdd27e1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c36b4fcd-05c3-4e31-81b7-35b61cdd27e1 false 0})]
I0129 04:21:36.188725 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-c36b4fcd-05c3-4e31-81b7-35b61cdd27e1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c36b4fcd-05c3-4e31-81b7-35b61cdd27e1 false 0})])
I0129 04:21:36.317450 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-c36b4fcd-05c3-4e31-81b7-35b61cdd27e1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c36b4fcd-05c3-4e31-81b7-35b61cdd27e1 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 04:21:46.416711 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 04:21:46.416771 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 04:21:46.416808 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-c36b4fcd-05c3-4e31-81b7-35b61cdd27e1 attached to node k8s-agentpool-18521412-vmss000000.
I0129 04:21:46.416823 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-c36b4fcd-05c3-4e31-81b7-35b61cdd27e1 to node k8s-agentpool-18521412-vmss000000 successfully
I0129 04:21:46.416868 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.228251769 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-c36b4fcd-05c3-4e31-81b7-35b61cdd27e1" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 04:21:46.416892 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
... skipping 53 lines ...
I0129 04:23:12.391857 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-18521412-vmss000000","volume_capability":{"AccessType":{"Mount":{"mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f","csi.storage.k8s.io/pvc/name":"pvc-jcl9k","csi.storage.k8s.io/pvc/namespace":"azuredisk-5466","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f"}
I0129 04:23:12.417401 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193
I0129 04:23:12.417850 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 04:23:12.417886 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f to node k8s-agentpool-18521412-vmss000000 I0129 04:23:12.417925 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f lun 0 to node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f false 0})] I0129 04:23:12.418087 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f false 0})]) I0129 04:23:12.578073 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 04:23:22.703501 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully I0129 04:23:22.703546 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, 
k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully I0129 04:23:22.703568 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f attached to node k8s-agentpool-18521412-vmss000000. I0129 04:23:22.703871 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f to node k8s-agentpool-18521412-vmss000000 successfully I0129 04:23:22.704327 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.286419605 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded" I0129 04:23:22.704356 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 53 lines ... 
I0129 04:24:21.699029 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-18521412-vmss000000","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-8dc7d3be-ea0d-4f3c-9001-4d27d48355a4","csi.storage.k8s.io/pvc/name":"pvc-lgvgs","csi.storage.k8s.io/pvc/namespace":"azuredisk-2790","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-8dc7d3be-ea0d-4f3c-9001-4d27d48355a4"}
I0129 04:24:21.720044 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193
I0129 04:24:21.720430 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-8dc7d3be-ea0d-4f3c-9001-4d27d48355a4. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-8dc7d3be-ea0d-4f3c-9001-4d27d48355a4 to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 04:24:21.720467 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-8dc7d3be-ea0d-4f3c-9001-4d27d48355a4 to node k8s-agentpool-18521412-vmss000000
I0129 04:24:21.720503 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-8dc7d3be-ea0d-4f3c-9001-4d27d48355a4 lun 0 to node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-8dc7d3be-ea0d-4f3c-9001-4d27d48355a4:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8dc7d3be-ea0d-4f3c-9001-4d27d48355a4 false 0})]
I0129 04:24:21.720548 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-8dc7d3be-ea0d-4f3c-9001-4d27d48355a4:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8dc7d3be-ea0d-4f3c-9001-4d27d48355a4 false 0})])
I0129 04:24:21.919768 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-8dc7d3be-ea0d-4f3c-9001-4d27d48355a4:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8dc7d3be-ea0d-4f3c-9001-4d27d48355a4 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 04:24:32.015151 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 04:24:32.015338 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 04:24:32.015481 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-8dc7d3be-ea0d-4f3c-9001-4d27d48355a4 attached to node k8s-agentpool-18521412-vmss000000.
I0129 04:24:32.015511 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-8dc7d3be-ea0d-4f3c-9001-4d27d48355a4 to node k8s-agentpool-18521412-vmss000000 successfully
I0129 04:24:32.015704 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.295201552 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-8dc7d3be-ea0d-4f3c-9001-4d27d48355a4" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 04:24:32.015808 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
... skipping 40 lines ...
I0129 04:25:31.408455 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-18521412-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-772f8ddf-9494-4933-84ab-3a9193bb329e","csi.storage.k8s.io/pvc/name":"pvc-t7z8s","csi.storage.k8s.io/pvc/namespace":"azuredisk-5356","requestedsizegib":"10","resourceGroup":"azuredisk-csi-driver-test-f4a79b08-9f8c-11ed-b28e-027493caca65","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-f4a79b08-9f8c-11ed-b28e-027493caca65/providers/Microsoft.Compute/disks/pvc-772f8ddf-9494-4933-84ab-3a9193bb329e"}
I0129 04:25:31.431226 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1238
I0129 04:25:31.431739 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-772f8ddf-9494-4933-84ab-3a9193bb329e. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-f4a79b08-9f8c-11ed-b28e-027493caca65/providers/Microsoft.Compute/disks/pvc-772f8ddf-9494-4933-84ab-3a9193bb329e to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 04:25:31.431773 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-f4a79b08-9f8c-11ed-b28e-027493caca65/providers/Microsoft.Compute/disks/pvc-772f8ddf-9494-4933-84ab-3a9193bb329e to node k8s-agentpool-18521412-vmss000000
I0129 04:25:31.431986 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-f4a79b08-9f8c-11ed-b28e-027493caca65/providers/Microsoft.Compute/disks/pvc-772f8ddf-9494-4933-84ab-3a9193bb329e lun 0 to node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-f4a79b08-9f8c-11ed-b28e-027493caca65/providers/microsoft.compute/disks/pvc-772f8ddf-9494-4933-84ab-3a9193bb329e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-772f8ddf-9494-4933-84ab-3a9193bb329e false 0})]
I0129 04:25:31.432053 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-f4a79b08-9f8c-11ed-b28e-027493caca65/providers/microsoft.compute/disks/pvc-772f8ddf-9494-4933-84ab-3a9193bb329e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-772f8ddf-9494-4933-84ab-3a9193bb329e false 0})])
I0129 04:25:31.558822 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-f4a79b08-9f8c-11ed-b28e-027493caca65/providers/microsoft.compute/disks/pvc-772f8ddf-9494-4933-84ab-3a9193bb329e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-772f8ddf-9494-4933-84ab-3a9193bb329e false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 04:25:41.673507 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 04:25:41.673580 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 04:25:41.673603 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-f4a79b08-9f8c-11ed-b28e-027493caca65/providers/Microsoft.Compute/disks/pvc-772f8ddf-9494-4933-84ab-3a9193bb329e attached to node k8s-agentpool-18521412-vmss000000.
I0129 04:25:41.673618 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-f4a79b08-9f8c-11ed-b28e-027493caca65/providers/Microsoft.Compute/disks/pvc-772f8ddf-9494-4933-84ab-3a9193bb329e to node k8s-agentpool-18521412-vmss000000 successfully
I0129 04:25:41.673675 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.241926399 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-f4a79b08-9f8c-11ed-b28e-027493caca65/providers/Microsoft.Compute/disks/pvc-772f8ddf-9494-4933-84ab-3a9193bb329e" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 04:25:41.673701 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
... skipping 16 lines ...
I0129 04:26:18.846603 1 azure_controller_common.go:398] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-f4a79b08-9f8c-11ed-b28e-027493caca65/providers/Microsoft.Compute/disks/pvc-772f8ddf-9494-4933-84ab-3a9193bb329e from node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-f4a79b08-9f8c-11ed-b28e-027493caca65/providers/microsoft.compute/disks/pvc-772f8ddf-9494-4933-84ab-3a9193bb329e:pvc-772f8ddf-9494-4933-84ab-3a9193bb329e]
E0129 04:26:18.846637 1 azure_controller_vmss.go:202] detach azure disk on node(k8s-agentpool-18521412-vmss000000): disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-f4a79b08-9f8c-11ed-b28e-027493caca65/providers/microsoft.compute/disks/pvc-772f8ddf-9494-4933-84ab-3a9193bb329e:pvc-772f8ddf-9494-4933-84ab-3a9193bb329e]) not found
I0129 04:26:18.846684 1 azure_controller_vmss.go:239] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-f4a79b08-9f8c-11ed-b28e-027493caca65/providers/microsoft.compute/disks/pvc-772f8ddf-9494-4933-84ab-3a9193bb329e:pvc-772f8ddf-9494-4933-84ab-3a9193bb329e])
I0129 04:26:23.783675 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume
I0129 04:26:23.783710 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-f4a79b08-9f8c-11ed-b28e-027493caca65/providers/Microsoft.Compute/disks/pvc-772f8ddf-9494-4933-84ab-3a9193bb329e"}
I0129 04:26:23.783788 1 controllerserver.go:317] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-f4a79b08-9f8c-11ed-b28e-027493caca65/providers/Microsoft.Compute/disks/pvc-772f8ddf-9494-4933-84ab-3a9193bb329e)
I0129 04:26:23.783811 1 controllerserver.go:319] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-f4a79b08-9f8c-11ed-b28e-027493caca65/providers/Microsoft.Compute/disks/pvc-772f8ddf-9494-4933-84ab-3a9193bb329e) returned with failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-f4a79b08-9f8c-11ed-b28e-027493caca65/providers/Microsoft.Compute/disks/pvc-772f8ddf-9494-4933-84ab-3a9193bb329e) since it's in attaching or detaching state
I0129 04:26:23.783873 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=4.1e-05 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-f4a79b08-9f8c-11ed-b28e-027493caca65/providers/Microsoft.Compute/disks/pvc-772f8ddf-9494-4933-84ab-3a9193bb329e" result_code="failed_csi_driver_controller_delete_volume"
E0129 04:26:23.783889 1 utils.go:82] GRPC error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-f4a79b08-9f8c-11ed-b28e-027493caca65/providers/Microsoft.Compute/disks/pvc-772f8ddf-9494-4933-84ab-3a9193bb329e) since it's in attaching or detaching state
I0129 04:26:24.084335 1 azure_controller_vmss.go:252] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-f4a79b08-9f8c-11ed-b28e-027493caca65/providers/microsoft.compute/disks/pvc-772f8ddf-9494-4933-84ab-3a9193bb329e:pvc-772f8ddf-9494-4933-84ab-3a9193bb329e]) returned with <nil>
I0129 04:26:24.084404 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 04:26:24.084658 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 04:26:24.084687 1 azure_controller_common.go:422] azureDisk - detach disk(pvc-772f8ddf-9494-4933-84ab-3a9193bb329e, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-f4a79b08-9f8c-11ed-b28e-027493caca65/providers/Microsoft.Compute/disks/pvc-772f8ddf-9494-4933-84ab-3a9193bb329e) succeeded
I0129 04:26:24.084808 1 controllerserver.go:480] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-f4a79b08-9f8c-11ed-b28e-027493caca65/providers/Microsoft.Compute/disks/pvc-772f8ddf-9494-4933-84ab-3a9193bb329e from node k8s-agentpool-18521412-vmss000000 successfully
I0129 04:26:24.084870 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=5.238322587 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-f4a79b08-9f8c-11ed-b28e-027493caca65/providers/Microsoft.Compute/disks/pvc-772f8ddf-9494-4933-84ab-3a9193bb329e" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
... skipping 35 lines ...
I0129 04:28:26.649363 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-5d010be5-9f8d-11ed-b28e-027493caca65/providers/Microsoft.Compute/disks/pvc-03544c43-7cd3-48f0-b05e-7e54759a0607 to node k8s-agentpool-18521412-vmss000000
I0129 04:28:26.649437 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-5d010be5-9f8d-11ed-b28e-027493caca65/providers/Microsoft.Compute/disks/pvc-03544c43-7cd3-48f0-b05e-7e54759a0607 lun 0 to node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-5d010be5-9f8d-11ed-b28e-027493caca65/providers/microsoft.compute/disks/pvc-03544c43-7cd3-48f0-b05e-7e54759a0607:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-03544c43-7cd3-48f0-b05e-7e54759a0607 false 0})]
I0129 04:28:26.649600 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-5d010be5-9f8d-11ed-b28e-027493caca65/providers/microsoft.compute/disks/pvc-03544c43-7cd3-48f0-b05e-7e54759a0607:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-03544c43-7cd3-48f0-b05e-7e54759a0607 false 0})])
I0129 04:28:26.707798 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1238
I0129 04:28:26.708174 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-60449ec0-56ed-4d48-9884-9b434c093525. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-5d93755a-9f8d-11ed-b28e-027493caca65/providers/Microsoft.Compute/disks/pvc-60449ec0-56ed-4d48-9884-9b434c093525 to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 04:28:26.708209 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-5d93755a-9f8d-11ed-b28e-027493caca65/providers/Microsoft.Compute/disks/pvc-60449ec0-56ed-4d48-9884-9b434c093525 to node k8s-agentpool-18521412-vmss000000
I0129 04:28:27.274821 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-5d010be5-9f8d-11ed-b28e-027493caca65/providers/microsoft.compute/disks/pvc-03544c43-7cd3-48f0-b05e-7e54759a0607:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-03544c43-7cd3-48f0-b05e-7e54759a0607 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 04:28:37.380732 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 04:28:37.380775 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 04:28:37.380836 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-5d010be5-9f8d-11ed-b28e-027493caca65/providers/Microsoft.Compute/disks/pvc-03544c43-7cd3-48f0-b05e-7e54759a0607 attached to node k8s-agentpool-18521412-vmss000000.
I0129 04:28:37.380866 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-5d010be5-9f8d-11ed-b28e-027493caca65/providers/Microsoft.Compute/disks/pvc-03544c43-7cd3-48f0-b05e-7e54759a0607 to node k8s-agentpool-18521412-vmss000000 successfully
I0129 04:28:37.380925 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.731585663 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-5d010be5-9f8d-11ed-b28e-027493caca65/providers/Microsoft.Compute/disks/pvc-03544c43-7cd3-48f0-b05e-7e54759a0607" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 04:28:37.380946 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
... skipping 4 lines ...
I0129 04:28:37.435314 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1466
I0129 04:28:37.435707 1 azure_controller_common.go:516] azureDisk - find disk: lun 0 name pvc-03544c43-7cd3-48f0-b05e-7e54759a0607 uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-5d010be5-9f8d-11ed-b28e-027493caca65/providers/Microsoft.Compute/disks/pvc-03544c43-7cd3-48f0-b05e-7e54759a0607
I0129 04:28:37.435732 1 controllerserver.go:383] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-5d010be5-9f8d-11ed-b28e-027493caca65/providers/Microsoft.Compute/disks/pvc-03544c43-7cd3-48f0-b05e-7e54759a0607 to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 04:28:37.435747 1 controllerserver.go:398] Attach operation is successful. volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-5d010be5-9f8d-11ed-b28e-027493caca65/providers/Microsoft.Compute/disks/pvc-03544c43-7cd3-48f0-b05e-7e54759a0607 is already attached to node k8s-agentpool-18521412-vmss000000 at lun 0.
I0129 04:28:37.435925 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=7.39e-05 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-5d010be5-9f8d-11ed-b28e-027493caca65/providers/Microsoft.Compute/disks/pvc-03544c43-7cd3-48f0-b05e-7e54759a0607" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 04:28:37.435957 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
I0129 04:28:37.497933 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-5d93755a-9f8d-11ed-b28e-027493caca65/providers/microsoft.compute/disks/pvc-60449ec0-56ed-4d48-9884-9b434c093525:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-60449ec0-56ed-4d48-9884-9b434c093525 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 04:28:47.574811 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 04:28:47.574889 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 04:28:47.574929 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-5d93755a-9f8d-11ed-b28e-027493caca65/providers/Microsoft.Compute/disks/pvc-60449ec0-56ed-4d48-9884-9b434c093525 attached to node k8s-agentpool-18521412-vmss000000.
I0129 04:28:47.574944 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-5d93755a-9f8d-11ed-b28e-027493caca65/providers/Microsoft.Compute/disks/pvc-60449ec0-56ed-4d48-9884-9b434c093525 to node k8s-agentpool-18521412-vmss000000 successfully
I0129 04:28:47.575005 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=20.866811576 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-5d93755a-9f8d-11ed-b28e-027493caca65/providers/Microsoft.Compute/disks/pvc-60449ec0-56ed-4d48-9884-9b434c093525" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 04:28:47.575024 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}}
... skipping 67 lines ...
I0129 04:31:02.091125 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-18521412-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-5fe821ec-1609-43b9-8411-68d429c21639","csi.storage.k8s.io/pvc/name":"pvc-bskjv","csi.storage.k8s.io/pvc/namespace":"azuredisk-1353","requestedsizegib":"10","skuName":"Premium_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-5fe821ec-1609-43b9-8411-68d429c21639"}
I0129 04:31:02.116437 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1207
I0129 04:31:02.116842 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-5fe821ec-1609-43b9-8411-68d429c21639. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-5fe821ec-1609-43b9-8411-68d429c21639 to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 04:31:02.116889 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-5fe821ec-1609-43b9-8411-68d429c21639 to node k8s-agentpool-18521412-vmss000000
I0129 04:31:02.116930 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-5fe821ec-1609-43b9-8411-68d429c21639 lun 0 to node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-5fe821ec-1609-43b9-8411-68d429c21639:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5fe821ec-1609-43b9-8411-68d429c21639 false 0})]
I0129 04:31:02.117008 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-5fe821ec-1609-43b9-8411-68d429c21639:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5fe821ec-1609-43b9-8411-68d429c21639 false 0})])
I0129 04:31:02.297362 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-5fe821ec-1609-43b9-8411-68d429c21639:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5fe821ec-1609-43b9-8411-68d429c21639 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 04:31:37.531064 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 04:31:37.531111 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 04:31:37.531136 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-5fe821ec-1609-43b9-8411-68d429c21639 attached to node k8s-agentpool-18521412-vmss000000.
I0129 04:31:37.531154 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-5fe821ec-1609-43b9-8411-68d429c21639 to node k8s-agentpool-18521412-vmss000000 successfully
I0129 04:31:37.531202 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=35.414371558 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-5fe821ec-1609-43b9-8411-68d429c21639" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 04:31:37.531226 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
... skipping 32 lines ...
I0129 04:32:56.404517 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-18521412-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62","csi.storage.k8s.io/pvc/name":"pvc-ld7fb","csi.storage.k8s.io/pvc/namespace":"azuredisk-2888","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62"}
I0129 04:32:56.424208 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1192
I0129 04:32:56.424492 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62 to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 04:32:56.424537 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62 to node k8s-agentpool-18521412-vmss000000
I0129 04:32:56.424572 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62 lun 0 to node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62 false 0})]
I0129 04:32:56.424616 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62 false 0})])
I0129 04:32:56.559171 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 04:33:06.662586 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 04:33:06.662660 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 04:33:06.662683 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62 attached to node k8s-agentpool-18521412-vmss000000.
I0129 04:33:06.662699 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62 to node k8s-agentpool-18521412-vmss000000 successfully
I0129 04:33:06.662780 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.238265837 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 04:33:06.662801 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
... skipping 11 lines ...
I0129 04:33:20.671923 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-18521412-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-19deb2cd-717b-4f6a-9573-b60d8c85a1d8","csi.storage.k8s.io/pvc/name":"pvc-8xms5","csi.storage.k8s.io/pvc/namespace":"azuredisk-2888","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-19deb2cd-717b-4f6a-9573-b60d8c85a1d8"}
I0129 04:33:20.729233 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193
I0129 04:33:20.729612 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-19deb2cd-717b-4f6a-9573-b60d8c85a1d8. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-19deb2cd-717b-4f6a-9573-b60d8c85a1d8 to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 04:33:20.729654 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-19deb2cd-717b-4f6a-9573-b60d8c85a1d8 to node k8s-agentpool-18521412-vmss000000
I0129 04:33:20.729696 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-19deb2cd-717b-4f6a-9573-b60d8c85a1d8 lun 1 to node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-19deb2cd-717b-4f6a-9573-b60d8c85a1d8:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-19deb2cd-717b-4f6a-9573-b60d8c85a1d8 false 1})]
I0129 04:33:20.729753 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-19deb2cd-717b-4f6a-9573-b60d8c85a1d8:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-19deb2cd-717b-4f6a-9573-b60d8c85a1d8 false 1})])
I0129 04:33:20.873483 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-19deb2cd-717b-4f6a-9573-b60d8c85a1d8:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-19deb2cd-717b-4f6a-9573-b60d8c85a1d8 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 04:33:30.949721 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 04:33:30.949764 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 04:33:30.949789 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-19deb2cd-717b-4f6a-9573-b60d8c85a1d8 attached to node k8s-agentpool-18521412-vmss000000.
I0129 04:33:30.949805 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-19deb2cd-717b-4f6a-9573-b60d8c85a1d8 to node k8s-agentpool-18521412-vmss000000 successfully
I0129 04:33:30.949850 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.220244213 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-19deb2cd-717b-4f6a-9573-b60d8c85a1d8" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 04:33:30.949878 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}}
... skipping 11 lines ...
I0129 04:33:42.940035 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-18521412-vmss000001","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-5ce04b47-2926-4263-979d-b24bf5f69bfa","csi.storage.k8s.io/pvc/name":"pvc-f8xxw","csi.storage.k8s.io/pvc/namespace":"azuredisk-2888","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-5ce04b47-2926-4263-979d-b24bf5f69bfa"}
I0129 04:33:42.996659 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1192
I0129 04:33:42.997681 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-5ce04b47-2926-4263-979d-b24bf5f69bfa. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-5ce04b47-2926-4263-979d-b24bf5f69bfa to node k8s-agentpool-18521412-vmss000001 (vmState Succeeded).
I0129 04:33:42.997726 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-5ce04b47-2926-4263-979d-b24bf5f69bfa to node k8s-agentpool-18521412-vmss000001
I0129 04:33:42.997951 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-5ce04b47-2926-4263-979d-b24bf5f69bfa lun 0 to node k8s-agentpool-18521412-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-5ce04b47-2926-4263-979d-b24bf5f69bfa:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5ce04b47-2926-4263-979d-b24bf5f69bfa false 0})]
I0129 04:33:42.998003 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-5ce04b47-2926-4263-979d-b24bf5f69bfa:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5ce04b47-2926-4263-979d-b24bf5f69bfa false 0})])
I0129 04:33:43.172057 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-5ce04b47-2926-4263-979d-b24bf5f69bfa:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5ce04b47-2926-4263-979d-b24bf5f69bfa false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 04:33:53.299421 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000001) successfully
I0129 04:33:53.299465 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000001) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 04:33:53.299484 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-5ce04b47-2926-4263-979d-b24bf5f69bfa attached to node k8s-agentpool-18521412-vmss000001.
I0129 04:33:53.299496 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-5ce04b47-2926-4263-979d-b24bf5f69bfa to node k8s-agentpool-18521412-vmss000001 successfully
I0129 04:33:53.299541 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.301860635 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-5ce04b47-2926-4263-979d-b24bf5f69bfa" node="k8s-agentpool-18521412-vmss000001" result_code="succeeded"
I0129 04:33:53.299556 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
... skipping 71 lines ...
I0129 04:37:09.090185 1 azure_controller_common.go:398] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62 from node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62:pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62]
E0129 04:37:09.090282 1 azure_controller_vmss.go:202] detach azure disk on node(k8s-agentpool-18521412-vmss000000): disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62:pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62]) not found
I0129 04:37:09.090373 1 azure_controller_vmss.go:239] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62:pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62])
I0129 04:37:13.168098 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume
I0129 04:37:13.168127 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62"}
I0129 04:37:13.168222 1 controllerserver.go:317] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62)
I0129 04:37:13.168239 1 controllerserver.go:319] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62) returned with failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62) since it's in attaching or detaching state
I0129 04:37:13.168292 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=3.43e-05 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62" result_code="failed_csi_driver_controller_delete_volume"
E0129 04:37:13.168308 1 utils.go:82] GRPC error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62) since it's in attaching or detaching state
I0129 04:37:14.317813 1 azure_controller_vmss.go:252] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62:pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62]) returned with <nil>
I0129 04:37:14.317884 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 04:37:14.317905 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 04:37:14.317921 1 azure_controller_common.go:422] azureDisk - detach disk(pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62) succeeded
I0129 04:37:14.317935 1 controllerserver.go:480] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62 from node k8s-agentpool-18521412-vmss000000 successfully
I0129 04:37:14.317982 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=5.22795203 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-2cd61ef5-1d3b-4969-aa2f-3fe2988e6a62" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
... skipping 21 lines ...
I0129 04:37:42.376370 1 azure_vmss_cache.go:327] refresh the cache of NonVmssUniformNodesCache in rg map[kubetest-oomcbqvi:{}]
I0129 04:37:42.395441 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 12
I0129 04:37:42.395529 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-09b1b8d6-265e-4386-aa6c-3f45471d8a3f. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-09b1b8d6-265e-4386-aa6c-3f45471d8a3f to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 04:37:42.395561 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-09b1b8d6-265e-4386-aa6c-3f45471d8a3f to node k8s-agentpool-18521412-vmss000000
I0129 04:37:42.395604 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-09b1b8d6-265e-4386-aa6c-3f45471d8a3f lun 0 to node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-09b1b8d6-265e-4386-aa6c-3f45471d8a3f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-09b1b8d6-265e-4386-aa6c-3f45471d8a3f false 0})]
I0129 04:37:42.395639 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-09b1b8d6-265e-4386-aa6c-3f45471d8a3f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-09b1b8d6-265e-4386-aa6c-3f45471d8a3f false 0})])
I0129 04:37:42.535699 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-09b1b8d6-265e-4386-aa6c-3f45471d8a3f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-09b1b8d6-265e-4386-aa6c-3f45471d8a3f false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 04:37:57.654195 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 04:37:57.654276 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 04:37:57.654298 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-09b1b8d6-265e-4386-aa6c-3f45471d8a3f attached to node k8s-agentpool-18521412-vmss000000.
I0129 04:37:57.654312 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-09b1b8d6-265e-4386-aa6c-3f45471d8a3f to node k8s-agentpool-18521412-vmss000000 successfully
I0129 04:37:57.654375 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=15.2779645 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-09b1b8d6-265e-4386-aa6c-3f45471d8a3f" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 04:37:57.654463 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
... skipping 58 lines ...
I0129 04:40:10.517606 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1217
I0129 04:40:10.558412 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 24989
I0129 04:40:10.562077 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-8f39471a-475b-4e83-ad33-79179a8adced. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-8f39471a-475b-4e83-ad33-79179a8adced to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 04:40:10.562114 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-8f39471a-475b-4e83-ad33-79179a8adced to node k8s-agentpool-18521412-vmss000000
I0129 04:40:10.562152 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-8f39471a-475b-4e83-ad33-79179a8adced lun 0 to node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-8f39471a-475b-4e83-ad33-79179a8adced:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8f39471a-475b-4e83-ad33-79179a8adced false 0})]
I0129 04:40:10.562206 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-8f39471a-475b-4e83-ad33-79179a8adced:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8f39471a-475b-4e83-ad33-79179a8adced false 0})])
I0129 04:40:10.700662 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-8f39471a-475b-4e83-ad33-79179a8adced:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8f39471a-475b-4e83-ad33-79179a8adced false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 04:40:20.840324 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 04:40:20.840371 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 04:40:20.840393 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-8f39471a-475b-4e83-ad33-79179a8adced attached to node k8s-agentpool-18521412-vmss000000.
I0129 04:40:20.840409 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-8f39471a-475b-4e83-ad33-79179a8adced to node k8s-agentpool-18521412-vmss000000 successfully
I0129 04:40:20.840454 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.322513467 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-8f39471a-475b-4e83-ad33-79179a8adced" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 04:40:20.840482 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
... skipping 24 lines ...
I0129 04:40:49.463277 1 azure_controller_common.go:422] azureDisk - detach disk(pvc-8f39471a-475b-4e83-ad33-79179a8adced, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-8f39471a-475b-4e83-ad33-79179a8adced) succeeded
I0129 04:40:49.463358 1 controllerserver.go:480] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-8f39471a-475b-4e83-ad33-79179a8adced from node k8s-agentpool-18521412-vmss000000 successfully
I0129 04:40:49.463511 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=15.362207953 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-8f39471a-475b-4e83-ad33-79179a8adced" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 04:40:49.463541 1 utils.go:84] GRPC response: {}
I0129 04:40:49.463805 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-a5990421-4301-4a81-bc44-f3855db2b1e5 lun 0 to node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-a5990421-4301-4a81-bc44-f3855db2b1e5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a5990421-4301-4a81-bc44-f3855db2b1e5 false 0})]
I0129 04:40:49.464010 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-a5990421-4301-4a81-bc44-f3855db2b1e5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a5990421-4301-4a81-bc44-f3855db2b1e5 false 0})])
I0129 04:40:49.621233 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-a5990421-4301-4a81-bc44-f3855db2b1e5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a5990421-4301-4a81-bc44-f3855db2b1e5 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 04:40:59.768904 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 04:40:59.768945 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 04:40:59.768971 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-a5990421-4301-4a81-bc44-f3855db2b1e5 attached to node k8s-agentpool-18521412-vmss000000.
I0129 04:40:59.768986 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-a5990421-4301-4a81-bc44-f3855db2b1e5 to node k8s-agentpool-18521412-vmss000000 successfully
I0129 04:40:59.769031 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=19.877045725 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-a5990421-4301-4a81-bc44-f3855db2b1e5" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 04:40:59.769054 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
... skipping 53 lines ...
I0129 04:42:12.687838 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-18521412-vmss000000","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-07b0a061-405a-4b03-b3d7-9625dc947846","csi.storage.k8s.io/pvc/name":"pvc-gkbjl","csi.storage.k8s.io/pvc/namespace":"azuredisk-2546","fsType":"xfs","networkAccessPolicy":"DenyAll","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-07b0a061-405a-4b03-b3d7-9625dc947846"}
I0129 04:42:12.707681 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1192
I0129 04:42:12.708012 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-07b0a061-405a-4b03-b3d7-9625dc947846. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-07b0a061-405a-4b03-b3d7-9625dc947846 to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 04:42:12.708042 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-07b0a061-405a-4b03-b3d7-9625dc947846 to node k8s-agentpool-18521412-vmss000000
I0129 04:42:12.708077 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-07b0a061-405a-4b03-b3d7-9625dc947846 lun 0 to node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-07b0a061-405a-4b03-b3d7-9625dc947846:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-07b0a061-405a-4b03-b3d7-9625dc947846 false 0})]
I0129 04:42:12.708113 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-07b0a061-405a-4b03-b3d7-9625dc947846:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-07b0a061-405a-4b03-b3d7-9625dc947846 false 0})])
I0129 04:42:12.978313 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-07b0a061-405a-4b03-b3d7-9625dc947846:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-07b0a061-405a-4b03-b3d7-9625dc947846 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 04:42:23.066899 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 04:42:23.066943 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 04:42:23.066970 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-07b0a061-405a-4b03-b3d7-9625dc947846 attached to node k8s-agentpool-18521412-vmss000000.
I0129 04:42:23.066988 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-07b0a061-405a-4b03-b3d7-9625dc947846 to node k8s-agentpool-18521412-vmss000000 successfully
I0129 04:42:23.067038 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.359010885 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-07b0a061-405a-4b03-b3d7-9625dc947846" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 04:42:23.067063 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
... skipping 20 lines ...
I0129 04:42:42.025685 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-18521412-vmss000000","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-f8ef8b6f-3152-47c3-b9ec-4e07f7f27f78","csi.storage.k8s.io/pvc/name":"pvc-vnj6h","csi.storage.k8s.io/pvc/namespace":"azuredisk-2546","fsType":"xfs","networkAccessPolicy":"DenyAll","requestedsizegib":"20","resizeRequired":"true","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-f8ef8b6f-3152-47c3-b9ec-4e07f7f27f78"}
I0129 04:42:42.044746 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1446
I0129 04:42:42.045231 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-f8ef8b6f-3152-47c3-b9ec-4e07f7f27f78. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-f8ef8b6f-3152-47c3-b9ec-4e07f7f27f78 to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 04:42:42.045270 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-f8ef8b6f-3152-47c3-b9ec-4e07f7f27f78 to node k8s-agentpool-18521412-vmss000000
I0129 04:42:42.045311 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-f8ef8b6f-3152-47c3-b9ec-4e07f7f27f78 lun 1 to node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-f8ef8b6f-3152-47c3-b9ec-4e07f7f27f78:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f8ef8b6f-3152-47c3-b9ec-4e07f7f27f78 false 1})]
I0129 04:42:42.045363 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-f8ef8b6f-3152-47c3-b9ec-4e07f7f27f78:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f8ef8b6f-3152-47c3-b9ec-4e07f7f27f78 false 1})])
I0129 04:42:42.259410 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-f8ef8b6f-3152-47c3-b9ec-4e07f7f27f78:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f8ef8b6f-3152-47c3-b9ec-4e07f7f27f78 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 04:42:44.246546 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume
I0129 04:42:44.246576 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-18521412-vmss000000","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-07b0a061-405a-4b03-b3d7-9625dc947846"}
I0129 04:42:44.246722 1 controllerserver.go:471] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-07b0a061-405a-4b03-b3d7-9625dc947846 from node k8s-agentpool-18521412-vmss000000
I0129 04:42:57.367243 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 04:42:57.367285 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 04:42:57.367323 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-f8ef8b6f-3152-47c3-b9ec-4e07f7f27f78 attached to node k8s-agentpool-18521412-vmss000000.
... skipping 104 lines ...
I0129 04:43:57.159970 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193
I0129 04:43:57.160387 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-f5c5b3e7-ceef-415e-8eaf-feda23daca09. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-f5c5b3e7-ceef-415e-8eaf-feda23daca09 to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 04:43:57.160420 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-f5c5b3e7-ceef-415e-8eaf-feda23daca09 to node k8s-agentpool-18521412-vmss000000
I0129 04:43:57.166550 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193
I0129 04:43:57.166957 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-6395e208-bde8-48e5-9e36-18be08df3bc5. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-6395e208-bde8-48e5-9e36-18be08df3bc5 to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 04:43:57.166989 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-6395e208-bde8-48e5-9e36-18be08df3bc5 to node k8s-agentpool-18521412-vmss000000
I0129 04:43:57.975364 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-09a2ef5f-47db-4948-8a11-1f2f0dd4918a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-09a2ef5f-47db-4948-8a11-1f2f0dd4918a false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 04:44:08.073959 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 04:44:08.074005 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 04:44:08.074045 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-09a2ef5f-47db-4948-8a11-1f2f0dd4918a attached to node k8s-agentpool-18521412-vmss000000.
I0129 04:44:08.074065 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-09a2ef5f-47db-4948-8a11-1f2f0dd4918a to node k8s-agentpool-18521412-vmss000000 successfully
I0129 04:44:08.074115 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.931855584000001 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-09a2ef5f-47db-4948-8a11-1f2f0dd4918a" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 04:44:08.074132 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
... skipping 4 lines ...
I0129 04:44:08.109688 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1421
I0129 04:44:08.110037 1 azure_controller_common.go:516] azureDisk - find disk: lun 0 name pvc-09a2ef5f-47db-4948-8a11-1f2f0dd4918a uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-09a2ef5f-47db-4948-8a11-1f2f0dd4918a
I0129 04:44:08.110063 1 controllerserver.go:383] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-09a2ef5f-47db-4948-8a11-1f2f0dd4918a to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 04:44:08.110080 1 controllerserver.go:398] Attach operation is successful. volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-09a2ef5f-47db-4948-8a11-1f2f0dd4918a is already attached to node k8s-agentpool-18521412-vmss000000 at lun 0.
I0129 04:44:08.110121 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=8.49e-05 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-09a2ef5f-47db-4948-8a11-1f2f0dd4918a" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 04:44:08.110139 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
I0129 04:44:08.233978 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-6395e208-bde8-48e5-9e36-18be08df3bc5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-6395e208-bde8-48e5-9e36-18be08df3bc5 false 2}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-f5c5b3e7-ceef-415e-8eaf-feda23daca09:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f5c5b3e7-ceef-415e-8eaf-feda23daca09 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 04:44:18.339916 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 04:44:18.339958 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 04:44:18.339995 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-f5c5b3e7-ceef-415e-8eaf-feda23daca09 attached to node k8s-agentpool-18521412-vmss000000.
I0129 04:44:18.340015 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-f5c5b3e7-ceef-415e-8eaf-feda23daca09 to node k8s-agentpool-18521412-vmss000000 successfully
I0129 04:44:18.340069 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=21.179665 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-f5c5b3e7-ceef-415e-8eaf-feda23daca09" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 04:44:18.340103 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-6395e208-bde8-48e5-9e36-18be08df3bc5 lun 2 to node k8s-agentpool-18521412-vmss000000, diskMap: map[]
... skipping 124 lines ...
I0129 04:45:35.345438 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-2b572812-f703-41bd-b32c-5fbdc0c22ef1. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-2b572812-f703-41bd-b32c-5fbdc0c22ef1 to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 04:45:35.345473 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-2b572812-f703-41bd-b32c-5fbdc0c22ef1 to node k8s-agentpool-18521412-vmss000000
I0129 04:45:35.345524 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-22ca2ae7-c113-4b9e-b4bf-ae3c8f352523. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-22ca2ae7-c113-4b9e-b4bf-ae3c8f352523 to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 04:45:35.345529 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-2b572812-f703-41bd-b32c-5fbdc0c22ef1 lun 0 to node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-2b572812-f703-41bd-b32c-5fbdc0c22ef1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2b572812-f703-41bd-b32c-5fbdc0c22ef1 false 0})]
I0129 04:45:35.345544 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-22ca2ae7-c113-4b9e-b4bf-ae3c8f352523 to node k8s-agentpool-18521412-vmss000000
I0129 04:45:35.345566 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-2b572812-f703-41bd-b32c-5fbdc0c22ef1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2b572812-f703-41bd-b32c-5fbdc0c22ef1 false 0})])
I0129 04:45:35.504325 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-2b572812-f703-41bd-b32c-5fbdc0c22ef1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2b572812-f703-41bd-b32c-5fbdc0c22ef1 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 04:45:45.603332 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 04:45:45.603371 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 04:45:45.603408 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-2b572812-f703-41bd-b32c-5fbdc0c22ef1 attached to node k8s-agentpool-18521412-vmss000000.
I0129 04:45:45.603424 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-2b572812-f703-41bd-b32c-5fbdc0c22ef1 to node k8s-agentpool-18521412-vmss000000 successfully
I0129 04:45:45.603468 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.258054707 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-2b572812-f703-41bd-b32c-5fbdc0c22ef1" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 04:45:45.603490 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-22ca2ae7-c113-4b9e-b4bf-ae3c8f352523 lun 1 to node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-22ca2ae7-c113-4b9e-b4bf-ae3c8f352523:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-22ca2ae7-c113-4b9e-b4bf-ae3c8f352523 false 1})]
I0129 04:45:45.603483 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
I0129 04:45:45.603528 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-22ca2ae7-c113-4b9e-b4bf-ae3c8f352523:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-22ca2ae7-c113-4b9e-b4bf-ae3c8f352523 false 1})])
I0129 04:45:45.738719 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-22ca2ae7-c113-4b9e-b4bf-ae3c8f352523:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-22ca2ae7-c113-4b9e-b4bf-ae3c8f352523 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 04:45:55.822751 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 04:45:55.822808 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 04:45:55.822850 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-22ca2ae7-c113-4b9e-b4bf-ae3c8f352523 attached to node k8s-agentpool-18521412-vmss000000.
I0129 04:45:55.822865 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-22ca2ae7-c113-4b9e-b4bf-ae3c8f352523 to node k8s-agentpool-18521412-vmss000000 successfully
I0129 04:45:55.822941 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=20.47737176 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-22ca2ae7-c113-4b9e-b4bf-ae3c8f352523" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 04:45:55.822980 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}}
... skipping 66 lines ...
I0129 04:47:38.335939 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-18521412-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-bd9fd04c-9a98-4e1b-9bc0-9f6f7bf7ce84","csi.storage.k8s.io/pvc/name":"pvc-zzrzq","csi.storage.k8s.io/pvc/namespace":"azuredisk-8582","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-bd9fd04c-9a98-4e1b-9bc0-9f6f7bf7ce84"}
I0129 04:47:38.365707 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193
I0129 04:47:38.366147 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-bd9fd04c-9a98-4e1b-9bc0-9f6f7bf7ce84. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-bd9fd04c-9a98-4e1b-9bc0-9f6f7bf7ce84 to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 04:47:38.366195 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-bd9fd04c-9a98-4e1b-9bc0-9f6f7bf7ce84 to node k8s-agentpool-18521412-vmss000000
I0129 04:47:38.366231 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-bd9fd04c-9a98-4e1b-9bc0-9f6f7bf7ce84 lun 0 to node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-bd9fd04c-9a98-4e1b-9bc0-9f6f7bf7ce84:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-bd9fd04c-9a98-4e1b-9bc0-9f6f7bf7ce84 false 0})]
I0129 04:47:38.366275 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-bd9fd04c-9a98-4e1b-9bc0-9f6f7bf7ce84:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-bd9fd04c-9a98-4e1b-9bc0-9f6f7bf7ce84 false 0})])
I0129 04:47:38.525709 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-bd9fd04c-9a98-4e1b-9bc0-9f6f7bf7ce84:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-bd9fd04c-9a98-4e1b-9bc0-9f6f7bf7ce84 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 04:47:48.624703 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 04:47:48.624740 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 04:47:48.624762 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-bd9fd04c-9a98-4e1b-9bc0-9f6f7bf7ce84 attached to node k8s-agentpool-18521412-vmss000000.
I0129 04:47:48.624777 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-bd9fd04c-9a98-4e1b-9bc0-9f6f7bf7ce84 to node k8s-agentpool-18521412-vmss000000 successfully
I0129 04:47:48.624823 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.258731172 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-bd9fd04c-9a98-4e1b-9bc0-9f6f7bf7ce84" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 04:47:48.624850 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
... skipping 34 lines ...
I0129 04:48:19.158740 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-18521412-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-857b14c7-1ea9-42de-816d-b637ac4d09f7","csi.storage.k8s.io/pvc/name":"pvc-vv9z4","csi.storage.k8s.io/pvc/namespace":"azuredisk-8582","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-857b14c7-1ea9-42de-816d-b637ac4d09f7"}
I0129 04:48:19.185927 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1501
I0129 04:48:19.186209 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-857b14c7-1ea9-42de-816d-b637ac4d09f7. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-857b14c7-1ea9-42de-816d-b637ac4d09f7 to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 04:48:19.186244 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-857b14c7-1ea9-42de-816d-b637ac4d09f7 to node k8s-agentpool-18521412-vmss000000
I0129 04:48:19.186292 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-857b14c7-1ea9-42de-816d-b637ac4d09f7 lun 0 to node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-857b14c7-1ea9-42de-816d-b637ac4d09f7:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-857b14c7-1ea9-42de-816d-b637ac4d09f7 false 0})]
I0129 04:48:19.186336 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-857b14c7-1ea9-42de-816d-b637ac4d09f7:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-857b14c7-1ea9-42de-816d-b637ac4d09f7 false 0})])
I0129 04:48:19.520238 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-857b14c7-1ea9-42de-816d-b637ac4d09f7:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-857b14c7-1ea9-42de-816d-b637ac4d09f7 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 04:48:29.642126 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 04:48:29.642167 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 04:48:29.642192 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-857b14c7-1ea9-42de-816d-b637ac4d09f7 attached to node k8s-agentpool-18521412-vmss000000.
I0129 04:48:29.642210 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-857b14c7-1ea9-42de-816d-b637ac4d09f7 to node k8s-agentpool-18521412-vmss000000 successfully
I0129 04:48:29.642505 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.456043117 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-857b14c7-1ea9-42de-816d-b637ac4d09f7" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 04:48:29.642539 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
... skipping 47 lines ...
I0129 04:50:42.739468 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1192
I0129 04:50:42.780547 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 24989
I0129 04:50:42.785425 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-c79e854f-39c2-4c24-a99d-ca5231c8ab73. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-c79e854f-39c2-4c24-a99d-ca5231c8ab73 to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 04:50:42.785467 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-c79e854f-39c2-4c24-a99d-ca5231c8ab73 to node k8s-agentpool-18521412-vmss000000
I0129 04:50:42.786973 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-c79e854f-39c2-4c24-a99d-ca5231c8ab73 lun 0 to node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-c79e854f-39c2-4c24-a99d-ca5231c8ab73:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c79e854f-39c2-4c24-a99d-ca5231c8ab73 false 0})]
I0129 04:50:42.788000 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-c79e854f-39c2-4c24-a99d-ca5231c8ab73:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c79e854f-39c2-4c24-a99d-ca5231c8ab73 false 0})])
I0129 04:50:43.080390 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-c79e854f-39c2-4c24-a99d-ca5231c8ab73:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c79e854f-39c2-4c24-a99d-ca5231c8ab73 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 04:50:53.265025 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 04:50:53.265068 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 04:50:53.265092 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-c79e854f-39c2-4c24-a99d-ca5231c8ab73 attached to node k8s-agentpool-18521412-vmss000000.
I0129 04:50:53.265109 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-c79e854f-39c2-4c24-a99d-ca5231c8ab73 to node k8s-agentpool-18521412-vmss000000 successfully
I0129 04:50:53.265156 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.525204181 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-c79e854f-39c2-4c24-a99d-ca5231c8ab73" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 04:50:53.265186 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
... skipping 21 lines ...
I0129 04:51:27.164058 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-18521412-vmss000001","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-797ca237-420b-4598-a36c-c3ec22ad724c","csi.storage.k8s.io/pvc/name":"pvc-bthdv","csi.storage.k8s.io/pvc/namespace":"azuredisk-7726","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-797ca237-420b-4598-a36c-c3ec22ad724c"}
I0129 04:51:27.187076 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1501
I0129 04:51:27.187543 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-797ca237-420b-4598-a36c-c3ec22ad724c. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-797ca237-420b-4598-a36c-c3ec22ad724c to node k8s-agentpool-18521412-vmss000001 (vmState Succeeded).
I0129 04:51:27.187611 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-797ca237-420b-4598-a36c-c3ec22ad724c to node k8s-agentpool-18521412-vmss000001
I0129 04:51:27.187649 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-797ca237-420b-4598-a36c-c3ec22ad724c lun 0 to node k8s-agentpool-18521412-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-797ca237-420b-4598-a36c-c3ec22ad724c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-797ca237-420b-4598-a36c-c3ec22ad724c false 0})]
I0129 04:51:27.187866 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-797ca237-420b-4598-a36c-c3ec22ad724c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-797ca237-420b-4598-a36c-c3ec22ad724c false 0})])
I0129 04:51:27.361088 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-797ca237-420b-4598-a36c-c3ec22ad724c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-797ca237-420b-4598-a36c-c3ec22ad724c false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 04:51:42.528312 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000001) successfully
I0129 04:51:42.528357 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000001) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 04:51:42.528382 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-797ca237-420b-4598-a36c-c3ec22ad724c attached to node k8s-agentpool-18521412-vmss000001.
I0129 04:51:42.528399 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-797ca237-420b-4598-a36c-c3ec22ad724c to node k8s-agentpool-18521412-vmss000001 successfully
I0129 04:51:42.528447 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=15.340905659 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-797ca237-420b-4598-a36c-c3ec22ad724c" node="k8s-agentpool-18521412-vmss000001" result_code="succeeded"
I0129 04:51:42.528475 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
... skipping 114 lines ...
I0129 04:54:01.071969 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-468baa25-d5c8-498c-bb62-64012798d12a lun 0 to node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-468baa25-d5c8-498c-bb62-64012798d12a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-468baa25-d5c8-498c-bb62-64012798d12a false 0})]
I0129 04:54:01.072024 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-468baa25-d5c8-498c-bb62-64012798d12a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-468baa25-d5c8-498c-bb62-64012798d12a false 0})])
I0129 04:54:01.072355 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-8743e6c6-56fd-4eff-9ee5-2064814e80d0. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-8743e6c6-56fd-4eff-9ee5-2064814e80d0 to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 04:54:01.072382 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-8743e6c6-56fd-4eff-9ee5-2064814e80d0 to node k8s-agentpool-18521412-vmss000000
I0129 04:54:01.072424 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-8d81eeef-5ae2-4371-8b20-96f61425ae40. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-8d81eeef-5ae2-4371-8b20-96f61425ae40 to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 04:54:01.072444 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-8d81eeef-5ae2-4371-8b20-96f61425ae40 to node k8s-agentpool-18521412-vmss000000
I0129 04:54:01.680007 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-468baa25-d5c8-498c-bb62-64012798d12a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-468baa25-d5c8-498c-bb62-64012798d12a false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 04:54:11.756862 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 04:54:11.757090 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 04:54:11.757146 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-468baa25-d5c8-498c-bb62-64012798d12a attached to node k8s-agentpool-18521412-vmss000000.
I0129 04:54:11.757170 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-468baa25-d5c8-498c-bb62-64012798d12a to node k8s-agentpool-18521412-vmss000000 successfully
I0129 04:54:11.757378 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.708104086 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-468baa25-d5c8-498c-bb62-64012798d12a" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 04:54:11.757408 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
... skipping 4 lines ...
I0129 04:54:11.787684 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1464
I0129 04:54:11.788108 1 azure_controller_common.go:516] azureDisk - find disk: lun 0 name pvc-468baa25-d5c8-498c-bb62-64012798d12a uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-468baa25-d5c8-498c-bb62-64012798d12a
I0129 04:54:11.788137 1 controllerserver.go:383] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-468baa25-d5c8-498c-bb62-64012798d12a to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 04:54:11.788157 1 controllerserver.go:398] Attach operation is successful. volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-468baa25-d5c8-498c-bb62-64012798d12a is already attached to node k8s-agentpool-18521412-vmss000000 at lun 0.
I0129 04:54:11.788206 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=9.53e-05 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-468baa25-d5c8-498c-bb62-64012798d12a" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 04:54:11.788251 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
I0129 04:54:12.039519 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-8743e6c6-56fd-4eff-9ee5-2064814e80d0:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8743e6c6-56fd-4eff-9ee5-2064814e80d0 false 1}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-8d81eeef-5ae2-4371-8b20-96f61425ae40:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8d81eeef-5ae2-4371-8b20-96f61425ae40 false 2})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 04:54:22.142159 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 04:54:22.142198 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 04:54:22.142235 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-8743e6c6-56fd-4eff-9ee5-2064814e80d0 attached to node k8s-agentpool-18521412-vmss000000.
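The entries above show the attach path being retried and returning immediately ("GetDiskLun returned: <nil>… is already attached … at lun 0", latency 9.53e-05 s versus ~10 s for a real attach): ControllerPublishVolume first looks up the disk's LUN on the VM and only issues an Azure attach call if it is not found. A hypothetical minimal sketch of that idempotency check (names and signature are illustrative, not the driver's actual API):

```go
package main

import "fmt"

// attachDisk sketches the idempotent attach flow visible in the log:
// if the disk already has a LUN on the node, reuse it and return
// without calling the cloud provider again. The real logic lives in
// controllerserver.go / azure_controller_common.go and also handles
// errors, LUN allocation, and batching, which are omitted here.
func attachDisk(attached map[string]int32, diskURI string, nextLUN int32) int32 {
	if lun, ok := attached[diskURI]; ok {
		return lun // "already attached ... at lun N": retry is a no-op
	}
	attached[diskURI] = nextLUN // otherwise perform the (slow) attach
	return nextLUN
}

func main() {
	disks := map[string]int32{}
	fmt.Println(attachDisk(disks, "pvc-468baa25", 0)) // → 0 (first attach)
	fmt.Println(attachDisk(disks, "pvc-468baa25", 1)) // → 0 (retry reuses LUN 0)
}
```

This is why the retried gRPC call in the log still succeeds with the same `{"publish_context":{"LUN":"0"}}` response.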
I0129 04:54:22.142255 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-8743e6c6-56fd-4eff-9ee5-2064814e80d0 to node k8s-agentpool-18521412-vmss000000 successfully
I0129 04:54:22.142304 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=21.088036726 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-8743e6c6-56fd-4eff-9ee5-2064814e80d0" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 04:54:22.142328 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}}
... skipping 87 lines ...
I0129 04:56:12.488498 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-18521412-vmss000000","volume_capability":{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-5235df69-f4f7-41ad-9537-83e66389c280","csi.storage.k8s.io/pvc/name":"pvc-azuredisk-volume-tester-5nrqq-0","csi.storage.k8s.io/pvc/namespace":"azuredisk-1387","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-5235df69-f4f7-41ad-9537-83e66389c280"}
I0129 04:56:12.526770 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1248
I0129 04:56:12.527304 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-5235df69-f4f7-41ad-9537-83e66389c280. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-5235df69-f4f7-41ad-9537-83e66389c280 to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 04:56:12.527339 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-5235df69-f4f7-41ad-9537-83e66389c280 to node k8s-agentpool-18521412-vmss000000
I0129 04:56:12.527402 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-5235df69-f4f7-41ad-9537-83e66389c280 lun 0 to node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-5235df69-f4f7-41ad-9537-83e66389c280:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5235df69-f4f7-41ad-9537-83e66389c280 false 0})]
I0129 04:56:12.527499 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-5235df69-f4f7-41ad-9537-83e66389c280:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5235df69-f4f7-41ad-9537-83e66389c280 false 0})])
I0129 04:56:12.686792 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-5235df69-f4f7-41ad-9537-83e66389c280:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5235df69-f4f7-41ad-9537-83e66389c280 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 04:56:22.776452 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 04:56:22.776491 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 04:56:22.776516 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-5235df69-f4f7-41ad-9537-83e66389c280 attached to node k8s-agentpool-18521412-vmss000000.
I0129 04:56:22.776532 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-5235df69-f4f7-41ad-9537-83e66389c280 to node k8s-agentpool-18521412-vmss000000 successfully
I0129 04:56:22.776577 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.249294467 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-5235df69-f4f7-41ad-9537-83e66389c280" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 04:56:22.776634 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
... skipping 99 lines ...
I0129 04:59:05.093143 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-18521412-vmss000000","volume_capability":{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-5235df69-f4f7-41ad-9537-83e66389c280","csi.storage.k8s.io/pvc/name":"pvc-azuredisk-volume-tester-5nrqq-0","csi.storage.k8s.io/pvc/namespace":"azuredisk-1387","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-5235df69-f4f7-41ad-9537-83e66389c280"}
I0129 04:59:05.113109 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1248
I0129 04:59:05.113655 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-5235df69-f4f7-41ad-9537-83e66389c280. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-5235df69-f4f7-41ad-9537-83e66389c280 to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 04:59:05.113696 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-5235df69-f4f7-41ad-9537-83e66389c280 to node k8s-agentpool-18521412-vmss000000
I0129 04:59:05.113736 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-5235df69-f4f7-41ad-9537-83e66389c280 lun 0 to node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-5235df69-f4f7-41ad-9537-83e66389c280:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5235df69-f4f7-41ad-9537-83e66389c280 false 0})]
I0129 04:59:05.113914 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-5235df69-f4f7-41ad-9537-83e66389c280:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5235df69-f4f7-41ad-9537-83e66389c280 false 0})])
I0129 04:59:05.273053 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-5235df69-f4f7-41ad-9537-83e66389c280:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5235df69-f4f7-41ad-9537-83e66389c280 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 04:59:15.384933 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 04:59:15.384995 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 04:59:15.385019 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-5235df69-f4f7-41ad-9537-83e66389c280 attached to node k8s-agentpool-18521412-vmss000000.
I0129 04:59:15.385035 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-5235df69-f4f7-41ad-9537-83e66389c280 to node k8s-agentpool-18521412-vmss000000 successfully
I0129 04:59:15.385340 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.271429362 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-5235df69-f4f7-41ad-9537-83e66389c280" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 04:59:15.385366 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
... skipping 19 lines ...
I0129 04:59:41.566405 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-18521412-vmss000000","volume_capability":{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-0ba22816-2372-455c-9117-e9a0477f759d","csi.storage.k8s.io/pvc/name":"pvc-r697p","csi.storage.k8s.io/pvc/namespace":"azuredisk-4801","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com","tags":"disk=test"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-0ba22816-2372-455c-9117-e9a0477f759d"}
I0129 04:59:41.592238 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1214
I0129 04:59:41.592574 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-0ba22816-2372-455c-9117-e9a0477f759d. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-0ba22816-2372-455c-9117-e9a0477f759d to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 04:59:41.592611 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-0ba22816-2372-455c-9117-e9a0477f759d to node k8s-agentpool-18521412-vmss000000
I0129 04:59:41.592651 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-0ba22816-2372-455c-9117-e9a0477f759d lun 1 to node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-0ba22816-2372-455c-9117-e9a0477f759d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0ba22816-2372-455c-9117-e9a0477f759d false 1})]
I0129 04:59:41.592714 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-0ba22816-2372-455c-9117-e9a0477f759d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0ba22816-2372-455c-9117-e9a0477f759d false 1})])
I0129 04:59:41.749175 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-0ba22816-2372-455c-9117-e9a0477f759d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0ba22816-2372-455c-9117-e9a0477f759d false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 04:59:56.878360 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 04:59:56.878408 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 04:59:56.878461 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-0ba22816-2372-455c-9117-e9a0477f759d attached to node k8s-agentpool-18521412-vmss000000.
I0129 04:59:56.878482 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-0ba22816-2372-455c-9117-e9a0477f759d to node k8s-agentpool-18521412-vmss000000 successfully
I0129 04:59:56.878529 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=15.285958731000001 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-0ba22816-2372-455c-9117-e9a0477f759d" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 04:59:56.878547 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}}
... skipping 62 lines ...
I0129 05:01:27.711046 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-18521412-vmss000000","volume_capability":{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-73d2ae7f-ca53-454c-9287-7f115052c629","csi.storage.k8s.io/pvc/name":"pvc-schpg","csi.storage.k8s.io/pvc/namespace":"azuredisk-8154","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-73d2ae7f-ca53-454c-9287-7f115052c629"}
I0129 05:01:27.730398 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193
I0129 05:01:27.730900 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-73d2ae7f-ca53-454c-9287-7f115052c629. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-73d2ae7f-ca53-454c-9287-7f115052c629 to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 05:01:27.730934 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-73d2ae7f-ca53-454c-9287-7f115052c629 to node k8s-agentpool-18521412-vmss000000
I0129 05:01:27.731028 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-73d2ae7f-ca53-454c-9287-7f115052c629 lun 0 to node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-73d2ae7f-ca53-454c-9287-7f115052c629:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-73d2ae7f-ca53-454c-9287-7f115052c629 false 0})]
I0129 05:01:27.731193 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-73d2ae7f-ca53-454c-9287-7f115052c629:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-73d2ae7f-ca53-454c-9287-7f115052c629 false 0})])
I0129 05:01:27.874031 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-73d2ae7f-ca53-454c-9287-7f115052c629:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-73d2ae7f-ca53-454c-9287-7f115052c629 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 05:01:37.990177 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 05:01:37.990223 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 05:01:37.990262 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-73d2ae7f-ca53-454c-9287-7f115052c629 attached to node k8s-agentpool-18521412-vmss000000.
I0129 05:01:37.990276 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-73d2ae7f-ca53-454c-9287-7f115052c629 to node k8s-agentpool-18521412-vmss000000 successfully
I0129 05:01:37.990337 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.259433809 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-73d2ae7f-ca53-454c-9287-7f115052c629" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 05:01:37.990369 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
... skipping 31 lines ...
I0129 05:03:04.759957 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-18521412-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-865fd9a4-e2dd-46dd-9ce8-f9d8cc02c20a","csi.storage.k8s.io/pvc/name":"pvc-azuredisk-volume-tester-8ks8r-0","csi.storage.k8s.io/pvc/namespace":"azuredisk-1166","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-865fd9a4-e2dd-46dd-9ce8-f9d8cc02c20a"}
I0129 05:03:04.799821 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1248
I0129 05:03:04.800251 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-865fd9a4-e2dd-46dd-9ce8-f9d8cc02c20a. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-865fd9a4-e2dd-46dd-9ce8-f9d8cc02c20a to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 05:03:04.800287 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-865fd9a4-e2dd-46dd-9ce8-f9d8cc02c20a to node k8s-agentpool-18521412-vmss000000
I0129 05:03:04.800325 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-865fd9a4-e2dd-46dd-9ce8-f9d8cc02c20a lun 0 to node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-865fd9a4-e2dd-46dd-9ce8-f9d8cc02c20a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-865fd9a4-e2dd-46dd-9ce8-f9d8cc02c20a false 0})]
I0129 05:03:04.800426 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-865fd9a4-e2dd-46dd-9ce8-f9d8cc02c20a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-865fd9a4-e2dd-46dd-9ce8-f9d8cc02c20a false 0})])
I0129 05:03:04.954266 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-865fd9a4-e2dd-46dd-9ce8-f9d8cc02c20a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-865fd9a4-e2dd-46dd-9ce8-f9d8cc02c20a false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 05:03:20.068463 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 05:03:20.068542 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 05:03:20.068566 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-865fd9a4-e2dd-46dd-9ce8-f9d8cc02c20a attached to node k8s-agentpool-18521412-vmss000000.
I0129 05:03:20.068779 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-865fd9a4-e2dd-46dd-9ce8-f9d8cc02c20a to node k8s-agentpool-18521412-vmss000000 successfully
I0129 05:03:20.068836 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=15.26858604 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-865fd9a4-e2dd-46dd-9ce8-f9d8cc02c20a" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 05:03:20.068864 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
... skipping 27 lines ...
I0129 05:04:36.069958 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-18521412-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-865fd9a4-e2dd-46dd-9ce8-f9d8cc02c20a","csi.storage.k8s.io/pvc/name":"pvc-azuredisk-volume-tester-8ks8r-0","csi.storage.k8s.io/pvc/namespace":"azuredisk-1166","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-865fd9a4-e2dd-46dd-9ce8-f9d8cc02c20a"}
I0129 05:04:36.091641 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1248
I0129 05:04:36.092195 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-865fd9a4-e2dd-46dd-9ce8-f9d8cc02c20a. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-865fd9a4-e2dd-46dd-9ce8-f9d8cc02c20a to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 05:04:36.092239 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-865fd9a4-e2dd-46dd-9ce8-f9d8cc02c20a to node k8s-agentpool-18521412-vmss000000
I0129 05:04:36.092335 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-865fd9a4-e2dd-46dd-9ce8-f9d8cc02c20a lun 0 to node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-865fd9a4-e2dd-46dd-9ce8-f9d8cc02c20a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-865fd9a4-e2dd-46dd-9ce8-f9d8cc02c20a false 0})]
I0129 05:04:36.092509 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-865fd9a4-e2dd-46dd-9ce8-f9d8cc02c20a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-865fd9a4-e2dd-46dd-9ce8-f9d8cc02c20a false 0})])
I0129 05:04:36.268532 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-865fd9a4-e2dd-46dd-9ce8-f9d8cc02c20a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-865fd9a4-e2dd-46dd-9ce8-f9d8cc02c20a false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 05:04:46.344642 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 05:04:46.344682 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 05:04:46.344706 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-865fd9a4-e2dd-46dd-9ce8-f9d8cc02c20a attached to node k8s-agentpool-18521412-vmss000000.
I0129 05:04:46.344723 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-865fd9a4-e2dd-46dd-9ce8-f9d8cc02c20a to node k8s-agentpool-18521412-vmss000000 successfully
I0129 05:04:46.344767 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.252589725 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-865fd9a4-e2dd-46dd-9ce8-f9d8cc02c20a" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 05:04:46.344793 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
... skipping 11 lines ...
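Each successful operation above closes with an "Observed Request Latency" entry (the two ControllerPublishVolume calls for pvc-865fd9a4 took 15.3s and 10.3s). When triaging a run like this, a small helper that pulls those numbers out of the klog lines can be useful — a sketch that assumes only the `latency_seconds=... request="..."` layout visible in the log, nothing about the driver itself:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// latencyRe matches the metrics fields as azure_metrics.go:115 prints
// them: a float (possibly in scientific notation, e.g. 3.5e-05)
// followed by the quoted request name.
var latencyRe = regexp.MustCompile(`latency_seconds=([0-9.e+-]+) request="([^"]+)"`)

// parseLatency extracts the request name and latency from one log
// line; ok is false when the line carries no latency metric.
func parseLatency(line string) (request string, seconds float64, ok bool) {
	m := latencyRe.FindStringSubmatch(line)
	if m == nil {
		return "", 0, false
	}
	s, err := strconv.ParseFloat(m[1], 64)
	if err != nil {
		return "", 0, false
	}
	return m[2], s, true
}

func main() {
	line := `I0129 05:04:46.344767 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.252589725 request="azuredisk_csi_driver_controller_publish_volume" result_code="succeeded"`
	if req, sec, ok := parseLatency(line); ok {
		fmt.Printf("%s took %.1fs\n", req, sec)
	}
}
```

Fed the attach entries above, this makes the ~10-15s VMSS attach latency easy to tabulate across the run.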
I0129 05:05:02.887209 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-18521412-vmss000000","volume_capability":{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-bbaddd8c-0473-46f1-b778-38d394471620","csi.storage.k8s.io/pvc/name":"pvc-hmgst","csi.storage.k8s.io/pvc/namespace":"azuredisk-783","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620"}
I0129 05:05:02.912378 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1192
I0129 05:05:02.912701 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-bbaddd8c-0473-46f1-b778-38d394471620. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620 to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 05:05:02.912811 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620 to node k8s-agentpool-18521412-vmss000000
I0129 05:05:02.912934 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620 lun 1 to node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-bbaddd8c-0473-46f1-b778-38d394471620 false 1})]
I0129 05:05:02.913041 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-bbaddd8c-0473-46f1-b778-38d394471620 false 1})])
I0129 05:05:03.164771 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-bbaddd8c-0473-46f1-b778-38d394471620 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 05:05:13.247348 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 05:05:13.247408 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 05:05:13.247448 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620 attached to node k8s-agentpool-18521412-vmss000000.
I0129 05:05:13.247465 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620 to node k8s-agentpool-18521412-vmss000000 successfully
I0129 05:05:13.247511 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.334813693 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 05:05:13.247527 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}}
... skipping 35 lines ...
I0129 05:06:16.486593 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-18521412-vmss000001","volume_capability":{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-bbaddd8c-0473-46f1-b778-38d394471620","csi.storage.k8s.io/pvc/name":"pvc-hmgst","csi.storage.k8s.io/pvc/namespace":"azuredisk-783","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620"}
I0129 05:06:16.506859 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1192
I0129 05:06:16.507196 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-bbaddd8c-0473-46f1-b778-38d394471620. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620 to node k8s-agentpool-18521412-vmss000001 (vmState Succeeded).
I0129 05:06:16.507229 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620 to node k8s-agentpool-18521412-vmss000001
I0129 05:06:16.507265 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620 lun 0 to node k8s-agentpool-18521412-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-bbaddd8c-0473-46f1-b778-38d394471620 false 0})]
I0129 05:06:16.507319 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-bbaddd8c-0473-46f1-b778-38d394471620 false 0})])
I0129 05:06:16.692293 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-bbaddd8c-0473-46f1-b778-38d394471620 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 05:06:26.777088 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000001) successfully
I0129 05:06:26.777132 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000001) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 05:06:26.777160 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620 attached to node k8s-agentpool-18521412-vmss000001.
I0129 05:06:26.777176 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620 to node k8s-agentpool-18521412-vmss000001 successfully
I0129 05:06:26.777223 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.270023108 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620" node="k8s-agentpool-18521412-vmss000001" result_code="succeeded"
I0129 05:06:26.777240 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
... skipping 16 lines ...
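The LUN assignments above follow a lowest-free-slot pattern: pvc-bbaddd8c lands on lun 1 on vmss000000, where lun 0 is already occupied by an earlier disk, but on lun 0 on vmss000001, which has no disks attached. A minimal sketch of that selection logic — illustrative only, with hypothetical names, not the driver's actual implementation:

```go
package main

import "fmt"

// nextFreeLUN returns the lowest LUN index not present in used, or an
// error when the node's LUN table is full. Azure VM sizes cap the
// number of data disks, so maxLUNs is a per-VM-size limit.
func nextFreeLUN(used map[int32]bool, maxLUNs int32) (int32, error) {
	for lun := int32(0); lun < maxLUNs; lun++ {
		if !used[lun] {
			return lun, nil
		}
	}
	return -1, fmt.Errorf("all %d LUNs are in use", maxLUNs)
}

func main() {
	// vmss000000 already holds a disk at LUN 0, so the next disk
	// lands on LUN 1, matching the pvc-bbaddd8c attach above.
	lun, _ := nextFreeLUN(map[int32]bool{0: true}, 64)
	fmt.Println(lun) // 1

	// vmss000001 is empty, so the same disk gets LUN 0 there.
	lun, _ = nextFreeLUN(map[int32]bool{}, 64)
	fmt.Println(lun) // 0
}
```

This is also why the "GetDiskLun returned: cannot find Lun" entries are expected: the controller first checks whether the disk already has a LUN on the target VM, and only falls back to picking a free slot when it does not.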
I0129 05:07:21.097998 1 azure_controller_common.go:398] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620 from node k8s-agentpool-18521412-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620:pvc-bbaddd8c-0473-46f1-b778-38d394471620]
E0129 05:07:21.098140 1 azure_controller_vmss.go:202] detach azure disk on node(k8s-agentpool-18521412-vmss000001): disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620:pvc-bbaddd8c-0473-46f1-b778-38d394471620]) not found
I0129 05:07:21.098247 1 azure_controller_vmss.go:239] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000001) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620:pvc-bbaddd8c-0473-46f1-b778-38d394471620])
I0129 05:07:21.528915 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume
I0129 05:07:21.528941 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620"}
I0129 05:07:21.529064 1 controllerserver.go:317] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620)
I0129 05:07:21.529082 1 controllerserver.go:319] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620) returned with failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620) since it's in attaching or detaching state
I0129 05:07:21.529138 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=3.5e-05 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620" result_code="failed_csi_driver_controller_delete_volume"
E0129 05:07:21.529174 1 utils.go:82] GRPC error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620) since it's in attaching or detaching state
I0129 05:07:26.291042 1 azure_controller_vmss.go:252] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000001) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620:pvc-bbaddd8c-0473-46f1-b778-38d394471620]) returned with <nil>
I0129 05:07:26.291113 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000001) successfully
I0129 05:07:26.291133 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000001) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 05:07:26.291146 1 azure_controller_common.go:422] azureDisk - detach disk(pvc-bbaddd8c-0473-46f1-b778-38d394471620, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620) succeeded
I0129 05:07:26.291160 1 controllerserver.go:480] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620 from node k8s-agentpool-18521412-vmss000001 successfully
I0129 05:07:26.291200 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=5.193382153 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-bbaddd8c-0473-46f1-b778-38d394471620" node="k8s-agentpool-18521412-vmss000001" result_code="succeeded"
... skipping 20 lines ...
I0129 05:07:46.106301 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-18521412-vmss000000","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":5}},"volume_context":{"cachingmode":"None","csi.storage.k8s.io/pv/name":"pvc-936fbbee-671a-4301-81cd-dfd40113d8da","csi.storage.k8s.io/pvc/name":"pvc-66wtk","csi.storage.k8s.io/pvc/namespace":"azuredisk-7920","maxshares":"2","requestedsizegib":"10","skuname":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-936fbbee-671a-4301-81cd-dfd40113d8da"}
I0129 05:07:46.126994 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1214
I0129 05:07:46.127335 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-936fbbee-671a-4301-81cd-dfd40113d8da. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-936fbbee-671a-4301-81cd-dfd40113d8da to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 05:07:46.127371 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-936fbbee-671a-4301-81cd-dfd40113d8da to node k8s-agentpool-18521412-vmss000000
I0129 05:07:46.127410 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-936fbbee-671a-4301-81cd-dfd40113d8da lun 0 to node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-936fbbee-671a-4301-81cd-dfd40113d8da:%!s(*provider.AttachDiskOptions=&{None pvc-936fbbee-671a-4301-81cd-dfd40113d8da false 0})]
I0129 05:07:46.127460 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-936fbbee-671a-4301-81cd-dfd40113d8da:%!s(*provider.AttachDiskOptions=&{None pvc-936fbbee-671a-4301-81cd-dfd40113d8da false 0})])
I0129 05:07:46.397741 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-936fbbee-671a-4301-81cd-dfd40113d8da:%!s(*provider.AttachDiskOptions=&{None pvc-936fbbee-671a-4301-81cd-dfd40113d8da false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 05:07:50.015077 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I0129 05:07:50.015107 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-18521412-vmss000001","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":5}},"volume_context":{"cachingmode":"None","csi.storage.k8s.io/pv/name":"pvc-936fbbee-671a-4301-81cd-dfd40113d8da","csi.storage.k8s.io/pvc/name":"pvc-66wtk","csi.storage.k8s.io/pvc/namespace":"azuredisk-7920","maxshares":"2","requestedsizegib":"10","skuname":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-936fbbee-671a-4301-81cd-dfd40113d8da"}
I0129 05:07:50.036458 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1692
I0129 05:07:50.036963 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-936fbbee-671a-4301-81cd-dfd40113d8da. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-936fbbee-671a-4301-81cd-dfd40113d8da to node k8s-agentpool-18521412-vmss000001 (vmState Succeeded).
I0129 05:07:50.037007 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-936fbbee-671a-4301-81cd-dfd40113d8da to node k8s-agentpool-18521412-vmss000001
I0129 05:07:50.037053 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-936fbbee-671a-4301-81cd-dfd40113d8da lun 0 to node k8s-agentpool-18521412-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-936fbbee-671a-4301-81cd-dfd40113d8da:%!s(*provider.AttachDiskOptions=&{None pvc-936fbbee-671a-4301-81cd-dfd40113d8da false 0})]
I0129 05:07:50.037166 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-936fbbee-671a-4301-81cd-dfd40113d8da:%!s(*provider.AttachDiskOptions=&{None pvc-936fbbee-671a-4301-81cd-dfd40113d8da false 0})])
I0129 05:07:50.247144 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-936fbbee-671a-4301-81cd-dfd40113d8da:%!s(*provider.AttachDiskOptions=&{None pvc-936fbbee-671a-4301-81cd-dfd40113d8da false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 05:07:56.594428 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 05:07:56.594591 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 05:07:56.594653 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-936fbbee-671a-4301-81cd-dfd40113d8da attached to node k8s-agentpool-18521412-vmss000000.
I0129 05:07:56.594715 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-936fbbee-671a-4301-81cd-dfd40113d8da to node k8s-agentpool-18521412-vmss000000 successfully
I0129 05:07:56.594798 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.467480545 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-936fbbee-671a-4301-81cd-dfd40113d8da" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 05:07:56.594859 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
... skipping 81 lines ...
I0129 05:09:23.080139 1 azure_vmss_cache.go:327] refresh the cache of NonVmssUniformNodesCache in rg map[kubetest-oomcbqvi:{}]
I0129 05:09:23.110261 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 12
I0129 05:09:23.110385 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-e63a468a-9007-4235-97f1-f59c50bea127. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-e63a468a-9007-4235-97f1-f59c50bea127 to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded).
I0129 05:09:23.110418 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-e63a468a-9007-4235-97f1-f59c50bea127 to node k8s-agentpool-18521412-vmss000000
I0129 05:09:23.110493 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-e63a468a-9007-4235-97f1-f59c50bea127 lun 0 to node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-e63a468a-9007-4235-97f1-f59c50bea127:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e63a468a-9007-4235-97f1-f59c50bea127 false 0})]
I0129 05:09:23.110548 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-e63a468a-9007-4235-97f1-f59c50bea127:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e63a468a-9007-4235-97f1-f59c50bea127 false 0})])
I0129 05:09:23.320786 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-e63a468a-9007-4235-97f1-f59c50bea127:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e63a468a-9007-4235-97f1-f59c50bea127 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING)
I0129 05:09:33.415307 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully
I0129 05:09:33.415356 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully
I0129 05:09:33.415379 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-e63a468a-9007-4235-97f1-f59c50bea127 attached to node k8s-agentpool-18521412-vmss000000.
I0129 05:09:33.415396 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-e63a468a-9007-4235-97f1-f59c50bea127 to node k8s-agentpool-18521412-vmss000000 successfully
I0129 05:09:33.415703 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.335285186 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-e63a468a-9007-4235-97f1-f59c50bea127" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded"
I0129 05:09:33.415750 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}}
... skipping 31 lines ...
I0129 05:10:30.481886 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-18521412-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-22f8fced-a6c6-48b8-9b37-e42f29b323ab","csi.storage.k8s.io/pvc/name":"pvc-azuredisk","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-22f8fced-a6c6-48b8-9b37-e42f29b323ab"} I0129 05:10:30.538396 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1219 I0129 05:10:30.539098 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-22f8fced-a6c6-48b8-9b37-e42f29b323ab. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-22f8fced-a6c6-48b8-9b37-e42f29b323ab to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded). 
I0129 05:10:30.539146 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-22f8fced-a6c6-48b8-9b37-e42f29b323ab to node k8s-agentpool-18521412-vmss000000 I0129 05:10:30.539209 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-22f8fced-a6c6-48b8-9b37-e42f29b323ab lun 0 to node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-22f8fced-a6c6-48b8-9b37-e42f29b323ab:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-22f8fced-a6c6-48b8-9b37-e42f29b323ab false 0})] I0129 05:10:30.539443 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-22f8fced-a6c6-48b8-9b37-e42f29b323ab:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-22f8fced-a6c6-48b8-9b37-e42f29b323ab false 0})]) I0129 05:10:30.731514 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-22f8fced-a6c6-48b8-9b37-e42f29b323ab:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-22f8fced-a6c6-48b8-9b37-e42f29b323ab false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 05:10:40.854195 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully I0129 05:10:40.854234 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, 
k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully I0129 05:10:40.854273 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-22f8fced-a6c6-48b8-9b37-e42f29b323ab attached to node k8s-agentpool-18521412-vmss000000. I0129 05:10:40.854306 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-22f8fced-a6c6-48b8-9b37-e42f29b323ab to node k8s-agentpool-18521412-vmss000000 successfully I0129 05:10:40.854351 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.315260004 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-22f8fced-a6c6-48b8-9b37-e42f29b323ab" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded" I0129 05:10:40.854369 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 19 lines ... I0129 05:10:56.993606 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1248 I0129 05:10:57.043395 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 24989 I0129 05:10:57.046106 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-1e6f5a9c-52de-430e-a5cf-f5fecab1ff1a. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-1e6f5a9c-52de-430e-a5cf-f5fecab1ff1a to node k8s-agentpool-18521412-vmss000001 (vmState Succeeded). 
I0129 05:10:57.046143 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-1e6f5a9c-52de-430e-a5cf-f5fecab1ff1a to node k8s-agentpool-18521412-vmss000001 I0129 05:10:57.046185 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-1e6f5a9c-52de-430e-a5cf-f5fecab1ff1a lun 0 to node k8s-agentpool-18521412-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-1e6f5a9c-52de-430e-a5cf-f5fecab1ff1a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-1e6f5a9c-52de-430e-a5cf-f5fecab1ff1a false 0})] I0129 05:10:57.046232 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-1e6f5a9c-52de-430e-a5cf-f5fecab1ff1a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-1e6f5a9c-52de-430e-a5cf-f5fecab1ff1a false 0})]) I0129 05:10:57.238210 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-1e6f5a9c-52de-430e-a5cf-f5fecab1ff1a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-1e6f5a9c-52de-430e-a5cf-f5fecab1ff1a false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 05:11:07.363361 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000001) successfully I0129 05:11:07.363400 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, 
k8s-agentpool-18521412-vmss000001) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully I0129 05:11:07.363426 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-1e6f5a9c-52de-430e-a5cf-f5fecab1ff1a attached to node k8s-agentpool-18521412-vmss000001. I0129 05:11:07.363443 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-1e6f5a9c-52de-430e-a5cf-f5fecab1ff1a to node k8s-agentpool-18521412-vmss000001 successfully I0129 05:11:07.363488 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.369312845 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-1e6f5a9c-52de-430e-a5cf-f5fecab1ff1a" node="k8s-agentpool-18521412-vmss000001" result_code="succeeded" I0129 05:11:07.363511 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 10 lines ... 
I0129 05:11:24.335220 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-18521412-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-55813ad6-5305-4faf-94ee-af7c378b8330","csi.storage.k8s.io/pvc/name":"persistent-storage-statefulset-azuredisk-nonroot-0","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-55813ad6-5305-4faf-94ee-af7c378b8330"} I0129 05:11:24.356343 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1256 I0129 05:11:24.356806 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-55813ad6-5305-4faf-94ee-af7c378b8330. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-55813ad6-5305-4faf-94ee-af7c378b8330 to node k8s-agentpool-18521412-vmss000000 (vmState Succeeded). 
I0129 05:11:24.356838 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-55813ad6-5305-4faf-94ee-af7c378b8330 to node k8s-agentpool-18521412-vmss000000 I0129 05:11:24.356917 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-55813ad6-5305-4faf-94ee-af7c378b8330 lun 1 to node k8s-agentpool-18521412-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-55813ad6-5305-4faf-94ee-af7c378b8330:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-55813ad6-5305-4faf-94ee-af7c378b8330 false 1})] I0129 05:11:24.357019 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-55813ad6-5305-4faf-94ee-af7c378b8330:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-55813ad6-5305-4faf-94ee-af7c378b8330 false 1})]) I0129 05:11:24.550293 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oomcbqvi): vm(k8s-agentpool-18521412-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oomcbqvi/providers/microsoft.compute/disks/pvc-55813ad6-5305-4faf-94ee-af7c378b8330:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-55813ad6-5305-4faf-94ee-af7c378b8330 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0129 05:11:39.710939 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oomcbqvi, k8s-agentpool-18521412-vmss, k8s-agentpool-18521412-vmss000000) successfully I0129 05:11:39.711016 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-18521412-vmss, kubetest-oomcbqvi, 
k8s-agentpool-18521412-vmss000000) for cacheKey(kubetest-oomcbqvi/k8s-agentpool-18521412-vmss) updated successfully I0129 05:11:39.711038 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-55813ad6-5305-4faf-94ee-af7c378b8330 attached to node k8s-agentpool-18521412-vmss000000. I0129 05:11:39.711054 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-55813ad6-5305-4faf-94ee-af7c378b8330 to node k8s-agentpool-18521412-vmss000000 successfully I0129 05:11:39.711104 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=15.354296633 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oomcbqvi" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-55813ad6-5305-4faf-94ee-af7c378b8330" node="k8s-agentpool-18521412-vmss000000" result_code="succeeded" I0129 05:11:39.711121 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} ... skipping 12 lines ... 
Platform: linux/amd64 Topology Key: topology.disk.csi.azure.com/zone Streaming logs below: I0129 04:19:07.720284 1 azuredisk.go:175] driver userAgent: disk.csi.azure.com/v1.27.0-db7daf80cf6d95173fec925514d6a1d9169180df e2e-test I0129 04:19:07.721047 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider I0129 04:19:07.755592 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0129 04:19:07.755636 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider I0129 04:19:07.755647 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0129 04:19:07.755696 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully I0129 04:19:07.757319 1 azure_auth.go:253] Using AzurePublicCloud environment I0129 04:19:07.757391 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token I0129 04:19:07.757425 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 ... skipping 25 lines ... 
I0129 04:19:07.758096 1 azure_blobclient.go:67] Azure BlobClient using API version: 2021-09-01 I0129 04:19:07.758122 1 azure_vmasclient.go:70] Azure AvailabilitySetsClient (read ops) using rate limit config: QPS=6, bucket=20 I0129 04:19:07.758130 1 azure_vmasclient.go:73] Azure AvailabilitySetsClient (write ops) using rate limit config: QPS=100, bucket=1000 I0129 04:19:07.758215 1 azure.go:1007] attach/detach disk operation rate limit QPS: 6.000000, Bucket: 10 I0129 04:19:07.758268 1 azuredisk.go:192] disable UseInstanceMetadata for controller I0129 04:19:07.758278 1 azuredisk.go:204] cloud: AzurePublicCloud, location: westus2, rg: kubetest-oomcbqvi, VMType: vmss, PrimaryScaleSetName: k8s-agentpool-18521412-vmss, PrimaryAvailabilitySetName: , DisableAvailabilitySetNodes: false I0129 04:19:07.762095 1 mount_linux.go:287] 'umount /tmp/kubelet-detect-safe-umount1985267730' failed with: exit status 32, output: umount: /tmp/kubelet-detect-safe-umount1985267730: must be superuser to unmount. I0129 04:19:07.762129 1 mount_linux.go:289] Detected umount with unsafe 'not mounted' behavior I0129 04:19:07.762206 1 driver.go:81] Enabling controller service capability: CREATE_DELETE_VOLUME I0129 04:19:07.762217 1 driver.go:81] Enabling controller service capability: PUBLISH_UNPUBLISH_VOLUME I0129 04:19:07.762223 1 driver.go:81] Enabling controller service capability: CREATE_DELETE_SNAPSHOT I0129 04:19:07.762229 1 driver.go:81] Enabling controller service capability: CLONE_VOLUME I0129 04:19:07.762237 1 driver.go:81] Enabling controller service capability: EXPAND_VOLUME ... skipping 62 lines ... 
Platform: linux/amd64 Topology Key: topology.disk.csi.azure.com/zone Streaming logs below: I0129 04:19:01.357892 1 azuredisk.go:175] driver userAgent: disk.csi.azure.com/v1.27.0-db7daf80cf6d95173fec925514d6a1d9169180df e2e-test I0129 04:19:01.358699 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider I0129 04:19:01.389432 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0129 04:19:01.389457 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider I0129 04:19:01.389466 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0129 04:19:01.389490 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully I0129 04:19:01.390571 1 azure_auth.go:253] Using AzurePublicCloud environment I0129 04:19:01.390653 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token I0129 04:19:01.390695 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 ... skipping 188 lines ... 
I0129 04:23:29.989027 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0129 04:23:30.005640 1 mount_linux.go:570] Output: "" I0129 04:23:30.005692 1 mount_linux.go:529] Disk "/dev/disk/azure/scsi1/lun0" appears to be unformatted, attempting to format as type: "ext4" with options: [-F -m0 /dev/disk/azure/scsi1/lun0] I0129 04:23:30.497355 1 mount_linux.go:539] Disk successfully formatted (mkfs): ext4 - /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount I0129 04:23:30.497393 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount I0129 04:23:30.497421 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount) E0129 04:23:30.517002 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. 
E0129 04:23:30.517108 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. I0129 04:23:31.075441 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0129 04:23:31.075474 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f","csi.storage.k8s.io/pvc/name":"pvc-jcl9k","csi.storage.k8s.io/pvc/namespace":"azuredisk-5466","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f"} I0129 04:23:32.866025 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0129 04:23:32.866091 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. 
perfProfile none accountType StandardSSD_ZRS I0129 04:23:32.867163 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount with mount options([invalid mount options]) I0129 04:23:32.867199 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0129 04:23:32.880255 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=ext4\n" I0129 04:23:32.880537 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0129 04:23:32.902551 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount I0129 04:23:32.902642 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount) E0129 04:23:32.921366 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. 
E0129 04:23:32.921417 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. I0129 04:23:34.033458 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0129 04:23:34.033488 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f","csi.storage.k8s.io/pvc/name":"pvc-jcl9k","csi.storage.k8s.io/pvc/namespace":"azuredisk-5466","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f"} I0129 04:23:36.007239 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0129 04:23:36.007283 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. 
perfProfile none accountType StandardSSD_ZRS I0129 04:23:36.007817 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount with mount options([invalid mount options]) I0129 04:23:36.007849 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0129 04:23:36.020325 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=ext4\n" I0129 04:23:36.020367 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0129 04:23:36.035915 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount I0129 04:23:36.035974 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount) E0129 04:23:36.054570 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. 
E0129 04:23:36.054630 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. I0129 04:23:38.168700 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0129 04:23:38.168729 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f","csi.storage.k8s.io/pvc/name":"pvc-jcl9k","csi.storage.k8s.io/pvc/namespace":"azuredisk-5466","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f"} I0129 04:23:39.955741 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0129 04:23:39.955777 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. 
perfProfile none accountType StandardSSD_ZRS I0129 04:23:39.957006 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount with mount options([invalid mount options]) I0129 04:23:39.958244 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0129 04:23:39.974275 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=ext4\n" I0129 04:23:39.974310 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0129 04:23:39.990478 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount I0129 04:23:39.990714 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount) E0129 04:23:40.007643 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. 
E0129 04:23:40.007701 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ceb353b7-8171-4fad-b1db-d1c91deee44f/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. I0129 04:24:37.481553 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0129 04:24:37.481582 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-8dc7d3be-ea0d-4f3c-9001-4d27d48355a4","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-8dc7d3be-ea0d-4f3c-9001-4d27d48355a4","csi.storage.k8s.io/pvc/name":"pvc-lgvgs","csi.storage.k8s.io/pvc/namespace":"azuredisk-2790","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-8dc7d3be-ea0d-4f3c-9001-4d27d48355a4"} I0129 04:24:39.248464 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0129 04:24:39.248511 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. 
perfProfile none accountType StandardSSD_ZRS I0129 04:24:39.248526 1 utils.go:84] GRPC response: {} I0129 04:24:39.256637 1 utils.go:77] GRPC call: /csi.v1.Node/NodePublishVolume ... skipping 16 lines ... I0129 04:24:45.492060 1 utils.go:84] GRPC response: {} I0129 04:24:45.539872 1 utils.go:77] GRPC call: /csi.v1.Node/NodeUnstageVolume I0129 04:24:45.539926 1 utils.go:78] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-8dc7d3be-ea0d-4f3c-9001-4d27d48355a4","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-8dc7d3be-ea0d-4f3c-9001-4d27d48355a4"} I0129 04:24:45.540059 1 nodeserver.go:201] NodeUnstageVolume: unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-8dc7d3be-ea0d-4f3c-9001-4d27d48355a4 I0129 04:24:45.540086 1 mount_helper_common.go:93] unmounting "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-8dc7d3be-ea0d-4f3c-9001-4d27d48355a4" (corruptedMount: false, mounterCanSkipMountPointChecks: true) I0129 04:24:45.540127 1 mount_linux.go:362] Unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-8dc7d3be-ea0d-4f3c-9001-4d27d48355a4 I0129 04:24:45.541973 1 mount_linux.go:375] ignoring 'not mounted' error for /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-8dc7d3be-ea0d-4f3c-9001-4d27d48355a4 I0129 04:24:45.541984 1 mount_helper_common.go:150] Warning: deleting path "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-8dc7d3be-ea0d-4f3c-9001-4d27d48355a4" I0129 04:24:45.542086 1 nodeserver.go:206] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-8dc7d3be-ea0d-4f3c-9001-4d27d48355a4 successfully I0129 04:24:45.542107 1 utils.go:84] GRPC response: {} I0129 04:25:47.247741 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0129 04:25:47.247771 1 utils.go:78] GRPC 
request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-772f8ddf-9494-4933-84ab-3a9193bb329e/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-772f8ddf-9494-4933-84ab-3a9193bb329e","csi.storage.k8s.io/pvc/name":"pvc-t7z8s","csi.storage.k8s.io/pvc/namespace":"azuredisk-5356","requestedsizegib":"10","resourceGroup":"azuredisk-csi-driver-test-f4a79b08-9f8c-11ed-b28e-027493caca65","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-f4a79b08-9f8c-11ed-b28e-027493caca65/providers/Microsoft.Compute/disks/pvc-772f8ddf-9494-4933-84ab-3a9193bb329e"} I0129 04:25:49.075095 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ ... skipping 648 lines ... 
I0129 04:46:13.336068 1 utils.go:84] GRPC response: {} I0129 04:46:13.355459 1 utils.go:77] GRPC call: /csi.v1.Node/NodeUnstageVolume I0129 04:46:13.355502 1 utils.go:78] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-22ca2ae7-c113-4b9e-b4bf-ae3c8f352523","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-22ca2ae7-c113-4b9e-b4bf-ae3c8f352523"} I0129 04:46:13.355591 1 nodeserver.go:201] NodeUnstageVolume: unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-22ca2ae7-c113-4b9e-b4bf-ae3c8f352523 I0129 04:46:13.355615 1 mount_helper_common.go:93] unmounting "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-22ca2ae7-c113-4b9e-b4bf-ae3c8f352523" (corruptedMount: false, mounterCanSkipMountPointChecks: true) I0129 04:46:13.355643 1 mount_linux.go:362] Unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-22ca2ae7-c113-4b9e-b4bf-ae3c8f352523 I0129 04:46:13.356841 1 mount_linux.go:375] ignoring 'not mounted' error for /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-22ca2ae7-c113-4b9e-b4bf-ae3c8f352523 I0129 04:46:13.356861 1 mount_helper_common.go:150] Warning: deleting path "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-22ca2ae7-c113-4b9e-b4bf-ae3c8f352523" I0129 04:46:13.356959 1 nodeserver.go:206] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-22ca2ae7-c113-4b9e-b4bf-ae3c8f352523 successfully I0129 04:46:13.356973 1 utils.go:84] GRPC response: {} I0129 04:47:54.185622 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0129 04:47:54.185654 1 utils.go:78] GRPC request: 
{"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-bd9fd04c-9a98-4e1b-9bc0-9f6f7bf7ce84/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-bd9fd04c-9a98-4e1b-9bc0-9f6f7bf7ce84","csi.storage.k8s.io/pvc/name":"pvc-zzrzq","csi.storage.k8s.io/pvc/namespace":"azuredisk-8582","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-bd9fd04c-9a98-4e1b-9bc0-9f6f7bf7ce84"} I0129 04:47:55.974338 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ ... skipping 579 lines ... I0129 05:08:44.024471 1 utils.go:84] GRPC response: {} I0129 05:08:44.077885 1 utils.go:77] GRPC call: /csi.v1.Node/NodeUnstageVolume I0129 05:08:44.077911 1 utils.go:78] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-936fbbee-671a-4301-81cd-dfd40113d8da","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-936fbbee-671a-4301-81cd-dfd40113d8da"} I0129 05:08:44.077990 1 nodeserver.go:201] NodeUnstageVolume: unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-936fbbee-671a-4301-81cd-dfd40113d8da I0129 05:08:44.078020 1 mount_helper_common.go:93] unmounting "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-936fbbee-671a-4301-81cd-dfd40113d8da" (corruptedMount: false, mounterCanSkipMountPointChecks: true) I0129 05:08:44.078034 1 mount_linux.go:362] Unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-936fbbee-671a-4301-81cd-dfd40113d8da I0129 05:08:44.079996 1 mount_linux.go:375] 
ignoring 'not mounted' error for /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-936fbbee-671a-4301-81cd-dfd40113d8da I0129 05:08:44.080028 1 mount_helper_common.go:150] Warning: deleting path "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-936fbbee-671a-4301-81cd-dfd40113d8da" I0129 05:08:44.080116 1 nodeserver.go:206] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-936fbbee-671a-4301-81cd-dfd40113d8da successfully I0129 05:08:44.080129 1 utils.go:84] GRPC response: {} I0129 05:09:38.959945 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0129 05:09:38.960003 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e63a468a-9007-4235-97f1-f59c50bea127/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-e63a468a-9007-4235-97f1-f59c50bea127","csi.storage.k8s.io/pvc/name":"pvc-96f58","csi.storage.k8s.io/pvc/namespace":"azuredisk-1092","device-setting/device/queue_depth":"17","device-setting/queue/max_sectors_kb":"211","device-setting/queue/nr_requests":"44","device-setting/queue/read_ahead_kb":"256","device-setting/queue/rotational":"0","device-setting/queue/scheduler":"none","device-setting/queue/wbt_lat_usec":"0","perfProfile":"advanced","requestedsizegib":"10","skuname":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-e63a468a-9007-4235-97f1-f59c50bea127"} I0129 05:09:40.780637 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ ... skipping 100 lines ... 
Platform: linux/amd64 Topology Key: topology.disk.csi.azure.com/zone Streaming logs below: I0129 04:19:01.127071 1 azuredisk.go:175] driver userAgent: disk.csi.azure.com/v1.27.0-db7daf80cf6d95173fec925514d6a1d9169180df e2e-test I0129 04:19:01.127714 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider I0129 04:19:01.176316 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0129 04:19:01.176351 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider I0129 04:19:01.176363 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0129 04:19:01.176427 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully I0129 04:19:01.177429 1 azure_auth.go:253] Using AzurePublicCloud environment I0129 04:19:01.177495 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token I0129 04:19:01.177522 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 ... skipping 201 lines ... 
I0129 05:08:44.364668 1 utils.go:84] GRPC response: {} I0129 05:08:44.416350 1 utils.go:77] GRPC call: /csi.v1.Node/NodeUnstageVolume I0129 05:08:44.416377 1 utils.go:78] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-936fbbee-671a-4301-81cd-dfd40113d8da","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-936fbbee-671a-4301-81cd-dfd40113d8da"} I0129 05:08:44.416527 1 nodeserver.go:201] NodeUnstageVolume: unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-936fbbee-671a-4301-81cd-dfd40113d8da I0129 05:08:44.416568 1 mount_helper_common.go:93] unmounting "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-936fbbee-671a-4301-81cd-dfd40113d8da" (corruptedMount: false, mounterCanSkipMountPointChecks: true) I0129 05:08:44.416593 1 mount_linux.go:362] Unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-936fbbee-671a-4301-81cd-dfd40113d8da I0129 05:08:44.418246 1 mount_linux.go:375] ignoring 'not mounted' error for /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-936fbbee-671a-4301-81cd-dfd40113d8da I0129 05:08:44.418267 1 mount_helper_common.go:150] Warning: deleting path "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-936fbbee-671a-4301-81cd-dfd40113d8da" I0129 05:08:44.418356 1 nodeserver.go:206] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-936fbbee-671a-4301-81cd-dfd40113d8da successfully I0129 05:08:44.418381 1 utils.go:84] GRPC response: {} I0129 05:11:12.764372 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0129 05:11:12.764426 1 utils.go:78] GRPC request: 
{"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-1e6f5a9c-52de-430e-a5cf-f5fecab1ff1a/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-1e6f5a9c-52de-430e-a5cf-f5fecab1ff1a","csi.storage.k8s.io/pvc/name":"persistent-storage-statefulset-azuredisk-0","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674965945002-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oomcbqvi/providers/Microsoft.Compute/disks/pvc-1e6f5a9c-52de-430e-a5cf-f5fecab1ff1a"} I0129 05:11:14.590398 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ ... skipping 33 lines ... Platform: linux/amd64 Topology Key: topology.disk.csi.azure.com/zone Streaming logs below: I0129 04:18:55.610320 1 azuredisk.go:175] driver userAgent: disk.csi.azure.com/v1.27.0-db7daf80cf6d95173fec925514d6a1d9169180df e2e-test I0129 04:18:55.611183 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider I0129 04:18:55.665647 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0129 04:18:55.665672 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider I0129 04:18:55.665683 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0129 04:18:55.665715 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully I0129 04:18:55.666744 1 azure_auth.go:253] Using AzurePublicCloud environment I0129 04:18:55.666881 1 azure_auth.go:138] azure: using 
client_id+client_secret to retrieve access token I0129 04:18:55.666931 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 ... skipping 629 lines ... cloudprovider_azure_op_duration_seconds_bucket{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="kubetest-oomcbqvi",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e",le="300"} 57 cloudprovider_azure_op_duration_seconds_bucket{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="kubetest-oomcbqvi",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e",le="600"} 57 cloudprovider_azure_op_duration_seconds_bucket{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="kubetest-oomcbqvi",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e",le="1200"} 57 cloudprovider_azure_op_duration_seconds_bucket{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="kubetest-oomcbqvi",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e",le="+Inf"} 57 cloudprovider_azure_op_duration_seconds_sum{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="kubetest-oomcbqvi",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e"} 842.9046642569999 cloudprovider_azure_op_duration_seconds_count{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="kubetest-oomcbqvi",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e"} 57 # HELP cloudprovider_azure_op_failure_count [ALPHA] Number of failed Azure service operations # TYPE cloudprovider_azure_op_failure_count counter cloudprovider_azure_op_failure_count{request="azuredisk_csi_driver_controller_delete_volume",resource_group="kubetest-oomcbqvi",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e"} 3 # HELP 
disabled_metric_total [ALPHA] The count of disabled metrics. # TYPE disabled_metric_total counter disabled_metric_total 0 # HELP go_cgo_go_to_c_calls_calls_total Count of calls made from Go to C by the current process. ... skipping 67 lines ... # HELP go_gc_heap_objects_objects Number of objects, live or unswept, occupying heap memory. # TYPE go_gc_heap_objects_objects gauge go_gc_heap_objects_objects 40313 # HELP go_gc_heap_tiny_allocs_objects_total Count of small allocations that are packed together into blocks. These allocations are counted separately from other allocations because each individual allocation is not tracked by the runtime, only their block. Each block is already accounted for in allocs-by-size and frees-by-size. # TYPE go_gc_heap_tiny_allocs_objects_total counter go_gc_heap_tiny_allocs_objects_total 49479 # HELP go_gc_limiter_last_enabled_gc_cycle GC cycle the last time the GC CPU limiter was enabled. This metric is useful for diagnosing the root cause of an out-of-memory error, because the limiter trades memory for CPU time when the GC's CPU time gets too high. This is most likely to occur with use of SetMemoryLimit. The first GC cycle is cycle 1, so a value of 0 indicates that it was never enabled. # TYPE go_gc_limiter_last_enabled_gc_cycle gauge go_gc_limiter_last_enabled_gc_cycle 0 # HELP go_gc_pauses_seconds Distribution individual GC-related stop-the-world pause latencies. # TYPE go_gc_pauses_seconds histogram go_gc_pauses_seconds_bucket{le="9.999999999999999e-10"} 0 go_gc_pauses_seconds_bucket{le="9.999999999999999e-09"} 0 ... skipping 259 lines ... # HELP go_gc_heap_objects_objects Number of objects, live or unswept, occupying heap memory. # TYPE go_gc_heap_objects_objects gauge go_gc_heap_objects_objects 16790 # HELP go_gc_heap_tiny_allocs_objects_total Count of small allocations that are packed together into blocks. 
These allocations are counted separately from other allocations because each individual allocation is not tracked by the runtime, only their block. Each block is already accounted for in allocs-by-size and frees-by-size. # TYPE go_gc_heap_tiny_allocs_objects_total counter go_gc_heap_tiny_allocs_objects_total 3655 # HELP go_gc_limiter_last_enabled_gc_cycle GC cycle the last time the GC CPU limiter was enabled. This metric is useful for diagnosing the root cause of an out-of-memory error, because the limiter trades memory for CPU time when the GC's CPU time gets too high. This is most likely to occur with use of SetMemoryLimit. The first GC cycle is cycle 1, so a value of 0 indicates that it was never enabled. # TYPE go_gc_limiter_last_enabled_gc_cycle gauge go_gc_limiter_last_enabled_gc_cycle 0 # HELP go_gc_pauses_seconds Distribution individual GC-related stop-the-world pause latencies. # TYPE go_gc_pauses_seconds histogram go_gc_pauses_seconds_bucket{le="9.999999999999999e-10"} 0 go_gc_pauses_seconds_bucket{le="9.999999999999999e-09"} 0 ... skipping 272 lines ... 
[AfterSuite] /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:165
------------------------------

Summarizing 1 Failure:
[FAIL] Dynamic Provisioning [multi-az] [It] should create a pod, write to its pv, take a volume snapshot, overwrite data in original pv, create another pod from the snapshot, and read unaltered original data from original pv[disk.csi.azure.com]
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:823

Ran 26 of 66 Specs in 3560.904 seconds
FAIL! -- 25 Passed | 1 Failed | 0 Pending | 40 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Support for custom reporters has been removed in V2. Please read the documentation linked to below for Ginkgo's new behavior and for a migration path:
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#removed-custom-reporters
To silence deprecations that can be silenced set the following environment variable:
ACK_GINKGO_DEPRECATIONS=2.4.0

--- FAIL: TestE2E (3560.91s)
FAIL
FAIL	sigs.k8s.io/azuredisk-csi-driver/test/e2e	3560.981s
FAIL
make: *** [Makefile:261: e2e-test] Error 1
2023/01/29 05:12:37 process.go:155: Step 'make e2e-test' finished in 1h1m1.280620821s
2023/01/29 05:12:37 aksengine_helpers.go:425: downloading /root/tmp3263498711/log-dump.sh from https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump.sh
2023/01/29 05:12:37 util.go:70: curl https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump.sh
2023/01/29 05:12:37 process.go:153: Running: chmod +x /root/tmp3263498711/log-dump.sh
2023/01/29 05:12:37 process.go:155: Step 'chmod +x /root/tmp3263498711/log-dump.sh' finished in 1.737479ms
2023/01/29 05:12:37 aksengine_helpers.go:425: downloading /root/tmp3263498711/log-dump-daemonset.yaml from https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump-daemonset.yaml
... skipping 75 lines ...
ssh key file /root/.ssh/id_rsa does not exist. Exiting.
2023/01/29 05:13:16 process.go:155: Step 'bash -c /root/tmp3263498711/win-ci-logs-collector.sh kubetest-oomcbqvi.westus2.cloudapp.azure.com /root/tmp3263498711 /root/.ssh/id_rsa' finished in 3.804154ms
2023/01/29 05:13:16 aksengine.go:1141: Deleting resource group: kubetest-oomcbqvi.
2023/01/29 05:19:21 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2023/01/29 05:19:21 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
2023/01/29 05:19:21 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 254.005877ms
2023/01/29 05:19:21 main.go:328: Something went wrong: encountered 1 errors: [error during make e2e-test: exit status 2]
+ EXIT_VALUE=1
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
9dbc02658cf3
... skipping 4 lines ...