PR | andyzhangx: fix: panic when allow-empty-cloud-config is set
Result | FAILURE
Tests | 1 failed / 13 succeeded
Started |
Elapsed | 1h25m
Revision | 8dec2f5bbdde24ad0069f0302fd7c533b467f077
Refs | 1699
job-version | v1.27.0-alpha.0.1196+724497cda467b7 |
kubetest-version | v20230117-50d6df3625 |
revision | v1.27.0-alpha.0.1196+724497cda467b7 |
error during make e2e-test: exit status 2
from junit_runner.xml
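The suite-level "exit status 2" above is surfaced from junit_runner.xml. To pin down which of the 14 cases failed, a small parser over the usual JUnit layout is enough; this is a minimal sketch that assumes the common <testsuite>/<testcase>/<failure> structure and file location, not the exact schema of this artifact.

package main

import (
	"encoding/xml"
	"fmt"
	"os"
)

// Assumed (standard) JUnit layout: <testsuite><testcase name=...><failure>...</failure></testcase></testsuite>.
type testSuite struct {
	XMLName   xml.Name   `xml:"testsuite"`
	TestCases []testCase `xml:"testcase"`
}

type testCase struct {
	Name    string   `xml:"name,attr"`
	Failure *failure `xml:"failure"`
}

type failure struct {
	Message string `xml:"message,attr"`
	Text    string `xml:",chardata"`
}

func main() {
	data, err := os.ReadFile("junit_runner.xml") // path is an assumption; point it at the downloaded artifact
	if err != nil {
		panic(err)
	}
	var suite testSuite
	if err := xml.Unmarshal(data, &suite); err != nil {
		panic(err)
	}
	for _, tc := range suite.TestCases {
		if tc.Failure != nil {
			fmt.Printf("FAILED: %s\n%s\n%s\n", tc.Name, tc.Failure.Message, tc.Failure.Text)
		}
	}
}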
kubetest Check APIReachability
kubetest Deferred TearDown
kubetest DumpClusterLogs
kubetest GetDeployer
kubetest IsUp
kubetest Prepare
kubetest TearDown
kubetest TearDown Previous
kubetest Timeout
kubetest Up
kubetest kubectl version
kubetest list nodes
kubetest test setup
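The kubetest phases listed above (Check APIReachability, kubectl version, list nodes) reduce to a couple of client-go calls against the freshly created cluster. A minimal sketch, assuming kubeconfig-based credentials; the kubeconfig path is an assumption.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the kubeconfig the run uses (path is an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// "Check APIReachability" / "kubectl version": ask the API server for its version.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("server version:", v.GitVersion)

	// "list nodes": confirm the cluster is up and the nodes have registered.
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Println("node:", n.Name)
	}
}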
... skipping 107 lines ... 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 11345 100 11345 0 0 184k 0 --:--:-- --:--:-- --:--:-- 187k Downloading https://get.helm.sh/helm-v3.11.0-linux-amd64.tar.gz Verifying checksum... Done. Preparing to install helm into /usr/local/bin helm installed into /usr/local/bin/helm docker pull k8sprow.azurecr.io/azuredisk-csi:v1.27.0-40b4dae4d1048ba3257f4c772609c4e0a0744e0f || make container-all push-manifest Error response from daemon: manifest for k8sprow.azurecr.io/azuredisk-csi:v1.27.0-40b4dae4d1048ba3257f4c772609c4e0a0744e0f not found: manifest unknown: manifest tagged by "v1.27.0-40b4dae4d1048ba3257f4c772609c4e0a0744e0f" is not found make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver' CGO_ENABLED=0 GOOS=windows go build -a -ldflags "-X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.driverVersion=v1.27.0-40b4dae4d1048ba3257f4c772609c4e0a0744e0f -X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.gitCommit=40b4dae4d1048ba3257f4c772609c4e0a0744e0f -X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.buildDate=2023-01-23T02:35:38Z -extldflags "-static"" -mod vendor -o _output/amd64/azurediskplugin.exe ./pkg/azurediskplugin docker buildx rm container-builder || true ERROR: no builder "container-builder" found docker buildx create --use --name=container-builder container-builder # enable qemu for arm64 build # https://github.com/docker/buildx/issues/464#issuecomment-741507760 docker run --privileged --rm tonistiigi/binfmt --uninstall qemu-aarch64 Unable to find image 'tonistiigi/binfmt:latest' locally ... skipping 1156 lines ... #6 resolve gcr.io/k8s-staging-e2e-test-images/windows-servercore-cache:1.0-linux-amd64-20H2@sha256:b4a46ee7b1814659bb4e869935a070461d7ec60892f0d7f7ed2d78f6bda266c2 0.0s done #6 DONE 0.1s #7 [stage-1 1/3] FROM mcr.microsoft.com/windows/nanoserver:20H2@sha256:70ad3c3f156b1002a6a642d3c3b769264f9ca166f57eab62051f59c0dbe20a0f #7 resolve mcr.microsoft.com/windows/nanoserver:20H2@sha256:70ad3c3f156b1002a6a642d3c3b769264f9ca166f57eab62051f59c0dbe20a0f 0.0s done #7 sha256:b6f04fddd2b7612e474ed804bf99542bc2936fc2f7d4205a9594086216f7894a 0B / 106.29MB 0.0s #7 0.057 error: failed to copy: httpReadSeeker: failed open: failed to do request: Get "https://centralus.data.mcr.microsoft.com/795a02ce1e3547eb8d4b8a7d06af7541-39qwoxxdjo//docker/registry/v2/blobs/sha256/b6/b6f04fddd2b7612e474ed804bf99542bc2936fc2f7d4205a9594086216f7894a/data?se=2023-01-23T03%3A03%3A33Z&sig=Ke4lxzLyngN7mNsj%2FgpNBxspZeO5sY2GNk8EKF9nDgY%3D&sp=r&spr=https&sr=b&sv=2016-05-31®id=795a02ce1e3547eb8d4b8a7d06af7541": read tcp 172.17.0.2:59778->204.79.197.219:443: read: connection reset by peer #7 0.057 retrying in 1s #7 ... #6 [core 1/1] FROM gcr.io/k8s-staging-e2e-test-images/windows-servercore-cache:1.0-linux-amd64-20H2@sha256:b4a46ee7b1814659bb4e869935a070461d7ec60892f0d7f7ed2d78f6bda266c2 #6 sha256:3b362d91b6b3adfedd5670c449cce5b07d16d65e7466922c8f70dbe4f89ad44f 48.98kB / 48.98kB 0.0s done #6 sha256:63971291fbf63f02c946782d20f285f97cd16ac03940f0f0ad8e87aedf0c3de9 62.76kB / 62.76kB 0.1s done ... skipping 601 lines ... type: string type: object oneOf: - required: ["persistentVolumeClaimName"] - required: ["volumeSnapshotContentName"] volumeSnapshotClassName: description: 'VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default Volume SnapshotClasses: one default per CSI Driver. 
If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exist for a given CSI Driver and more than one have been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field.' type: string required: - source type: object status: description: status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object. ... skipping 2 lines ... description: 'boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.' type: string creationTime: description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown. format: date-time type: string error: description: error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers(i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurrs during the snapshot creation. Upon success, this error field will be cleared. properties: message: description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.' type: string time: description: time is the timestamp when the error was encountered. format: date-time type: string type: object readyToUse: description: readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. type: boolean restoreSize: type: string description: restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. 
For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ x-kubernetes-int-or-string: true type: object required: - spec type: object ... skipping 60 lines ... type: string volumeSnapshotContentName: description: volumeSnapshotContentName specifies the name of a pre-existing VolumeSnapshotContent object representing an existing volume snapshot. This field should be set if the snapshot already exists and only needs a representation in Kubernetes. This field is immutable. type: string type: object volumeSnapshotClassName: description: 'VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default Volume SnapshotClasses: one default per CSI Driver. If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exist for a given CSI Driver and more than one have been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field.' type: string required: - source type: object status: description: status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object. ... skipping 2 lines ... description: 'boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.' type: string creationTime: description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown. format: date-time type: string error: description: error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers(i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. 
The snapshot controller will keep retrying when an error occurrs during the snapshot creation. Upon success, this error field will be cleared. properties: message: description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.' type: string time: description: time is the timestamp when the error was encountered. format: date-time type: string type: object readyToUse: description: readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. type: boolean restoreSize: type: string description: restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ x-kubernetes-int-or-string: true type: object required: - spec type: object ... skipping 254 lines ... description: status represents the current information of a snapshot. properties: creationTime: description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. On Unix, the command `date +%s%N` returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC. format: int64 type: integer error: description: error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared. properties: message: description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.' type: string time: description: time is the timestamp when the error was encountered. format: date-time type: string type: object readyToUse: description: readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. 
For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. type: boolean restoreSize: description: restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. format: int64 minimum: 0 type: integer snapshotHandle: description: snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress. type: string type: object required: - spec type: object served: true ... skipping 108 lines ... description: status represents the current information of a snapshot. properties: creationTime: description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. On Unix, the command `date +%s%N` returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC. format: int64 type: integer error: description: error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared. properties: message: description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.' type: string time: description: time is the timestamp when the error was encountered. format: date-time type: string type: object readyToUse: description: readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. type: boolean restoreSize: description: restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. 
For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. format: int64 minimum: 0 type: integer snapshotHandle: description: snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress. type: string type: object required: - spec type: object served: true ... skipping 865 lines ... image: "mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.6.0" args: - "-csi-address=$(ADDRESS)" - "-v=2" - "-leader-election" - "--leader-election-namespace=kube-system" - '-handle-volume-inuse-error=false' - '-feature-gates=RecoverVolumeExpansionFailure=true' - "-timeout=240s" env: - name: ADDRESS value: /csi/csi.sock volumeMounts: ... skipping 216 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/23/23 02:47:28.557[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/23/23 02:47:28.558[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 02:47:28.618[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 02:47:28.618[0m [1mSTEP:[0m setting up the pod [38;5;243m01/23/23 02:47:28.676[0m [1mSTEP:[0m deploying the pod [38;5;243m01/23/23 02:47:28.676[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 02:47:28.732[0m Jan 23 02:47:28.732: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-ppl27" in namespace "azuredisk-8081" to be "Succeeded or Failed" Jan 23 02:47:28.785: INFO: Pod "azuredisk-volume-tester-ppl27": Phase="Pending", Reason="", readiness=false. Elapsed: 53.704267ms Jan 23 02:47:30.842: INFO: Pod "azuredisk-volume-tester-ppl27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109771024s Jan 23 02:47:32.840: INFO: Pod "azuredisk-volume-tester-ppl27": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10796474s Jan 23 02:47:34.841: INFO: Pod "azuredisk-volume-tester-ppl27": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108792094s Jan 23 02:47:36.841: INFO: Pod "azuredisk-volume-tester-ppl27": Phase="Pending", Reason="", readiness=false. Elapsed: 8.109174899s Jan 23 02:47:38.841: INFO: Pod "azuredisk-volume-tester-ppl27": Phase="Pending", Reason="", readiness=false. Elapsed: 10.108927819s ... skipping 7 lines ... Jan 23 02:47:54.842: INFO: Pod "azuredisk-volume-tester-ppl27": Phase="Pending", Reason="", readiness=false. Elapsed: 26.110572575s Jan 23 02:47:56.842: INFO: Pod "azuredisk-volume-tester-ppl27": Phase="Pending", Reason="", readiness=false. Elapsed: 28.109882713s Jan 23 02:47:58.841: INFO: Pod "azuredisk-volume-tester-ppl27": Phase="Pending", Reason="", readiness=false. Elapsed: 30.108810721s Jan 23 02:48:00.840: INFO: Pod "azuredisk-volume-tester-ppl27": Phase="Pending", Reason="", readiness=false. Elapsed: 32.108215664s Jan 23 02:48:02.842: INFO: Pod "azuredisk-volume-tester-ppl27": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 34.109892724s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 02:48:02.842[0m Jan 23 02:48:02.842: INFO: Pod "azuredisk-volume-tester-ppl27" satisfied condition "Succeeded or Failed" Jan 23 02:48:02.842: INFO: deleting Pod "azuredisk-8081"/"azuredisk-volume-tester-ppl27" Jan 23 02:48:02.927: INFO: Pod azuredisk-volume-tester-ppl27 has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-ppl27 in namespace azuredisk-8081 [38;5;243m01/23/23 02:48:02.927[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/23/23 02:48:03.044[0m [1mSTEP:[0m checking the PV [38;5;243m01/23/23 02:48:03.098[0m ... skipping 44 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/23/23 02:47:28.557[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/23/23 02:47:28.558[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 02:47:28.618[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 02:47:28.618[0m [1mSTEP:[0m setting up the pod [38;5;243m01/23/23 02:47:28.676[0m [1mSTEP:[0m deploying the pod [38;5;243m01/23/23 02:47:28.676[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 02:47:28.732[0m Jan 23 02:47:28.732: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-ppl27" in namespace "azuredisk-8081" to be "Succeeded or Failed" Jan 23 02:47:28.785: INFO: Pod "azuredisk-volume-tester-ppl27": Phase="Pending", Reason="", readiness=false. Elapsed: 53.704267ms Jan 23 02:47:30.842: INFO: Pod "azuredisk-volume-tester-ppl27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109771024s Jan 23 02:47:32.840: INFO: Pod "azuredisk-volume-tester-ppl27": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10796474s Jan 23 02:47:34.841: INFO: Pod "azuredisk-volume-tester-ppl27": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108792094s Jan 23 02:47:36.841: INFO: Pod "azuredisk-volume-tester-ppl27": Phase="Pending", Reason="", readiness=false. Elapsed: 8.109174899s Jan 23 02:47:38.841: INFO: Pod "azuredisk-volume-tester-ppl27": Phase="Pending", Reason="", readiness=false. Elapsed: 10.108927819s ... skipping 7 lines ... Jan 23 02:47:54.842: INFO: Pod "azuredisk-volume-tester-ppl27": Phase="Pending", Reason="", readiness=false. Elapsed: 26.110572575s Jan 23 02:47:56.842: INFO: Pod "azuredisk-volume-tester-ppl27": Phase="Pending", Reason="", readiness=false. Elapsed: 28.109882713s Jan 23 02:47:58.841: INFO: Pod "azuredisk-volume-tester-ppl27": Phase="Pending", Reason="", readiness=false. Elapsed: 30.108810721s Jan 23 02:48:00.840: INFO: Pod "azuredisk-volume-tester-ppl27": Phase="Pending", Reason="", readiness=false. Elapsed: 32.108215664s Jan 23 02:48:02.842: INFO: Pod "azuredisk-volume-tester-ppl27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.109892724s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 02:48:02.842[0m Jan 23 02:48:02.842: INFO: Pod "azuredisk-volume-tester-ppl27" satisfied condition "Succeeded or Failed" Jan 23 02:48:02.842: INFO: deleting Pod "azuredisk-8081"/"azuredisk-volume-tester-ppl27" Jan 23 02:48:02.927: INFO: Pod azuredisk-volume-tester-ppl27 has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-ppl27 in namespace azuredisk-8081 [38;5;243m01/23/23 02:48:02.927[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/23/23 02:48:03.044[0m [1mSTEP:[0m checking the PV [38;5;243m01/23/23 02:48:03.098[0m ... skipping 39 lines ... Jan 23 02:48:46.898: INFO: PersistentVolumeClaim pvc-sk45b found but phase is Pending instead of Bound. 
Jan 23 02:48:48.953: INFO: PersistentVolumeClaim pvc-sk45b found and phase=Bound (4.164467819s) [1mSTEP:[0m checking the PVC [38;5;243m01/23/23 02:48:48.953[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/23/23 02:48:49.013[0m [1mSTEP:[0m checking the PV [38;5;243m01/23/23 02:48:49.066[0m [1mSTEP:[0m deploying the pod [38;5;243m01/23/23 02:48:49.067[0m [1mSTEP:[0m checking that the pods command exits with no error [38;5;243m01/23/23 02:48:49.122[0m Jan 23 02:48:49.122: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-x2wzt" in namespace "azuredisk-2540" to be "Succeeded or Failed" Jan 23 02:48:49.176: INFO: Pod "azuredisk-volume-tester-x2wzt": Phase="Pending", Reason="", readiness=false. Elapsed: 53.33445ms Jan 23 02:48:51.232: INFO: Pod "azuredisk-volume-tester-x2wzt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109562308s Jan 23 02:48:53.232: INFO: Pod "azuredisk-volume-tester-x2wzt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109883351s Jan 23 02:48:55.231: INFO: Pod "azuredisk-volume-tester-x2wzt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108922249s Jan 23 02:48:57.231: INFO: Pod "azuredisk-volume-tester-x2wzt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.109014415s Jan 23 02:48:59.236: INFO: Pod "azuredisk-volume-tester-x2wzt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.113319292s Jan 23 02:49:01.231: INFO: Pod "azuredisk-volume-tester-x2wzt": Phase="Pending", Reason="", readiness=false. Elapsed: 12.108596972s Jan 23 02:49:03.231: INFO: Pod "azuredisk-volume-tester-x2wzt": Phase="Pending", Reason="", readiness=false. Elapsed: 14.108913961s Jan 23 02:49:05.231: INFO: Pod "azuredisk-volume-tester-x2wzt": Phase="Pending", Reason="", readiness=false. Elapsed: 16.108687007s Jan 23 02:49:07.232: INFO: Pod "azuredisk-volume-tester-x2wzt": Phase="Pending", Reason="", readiness=false. Elapsed: 18.109572072s Jan 23 02:49:09.231: INFO: Pod "azuredisk-volume-tester-x2wzt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.109219183s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 02:49:09.232[0m Jan 23 02:49:09.232: INFO: Pod "azuredisk-volume-tester-x2wzt" satisfied condition "Succeeded or Failed" Jan 23 02:49:09.232: INFO: deleting Pod "azuredisk-2540"/"azuredisk-volume-tester-x2wzt" Jan 23 02:49:09.290: INFO: Pod azuredisk-volume-tester-x2wzt has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-x2wzt in namespace azuredisk-2540 [38;5;243m01/23/23 02:49:09.29[0m Jan 23 02:49:09.352: INFO: deleting PVC "azuredisk-2540"/"pvc-sk45b" Jan 23 02:49:09.352: INFO: Deleting PersistentVolumeClaim "pvc-sk45b" ... skipping 38 lines ... Jan 23 02:48:46.898: INFO: PersistentVolumeClaim pvc-sk45b found but phase is Pending instead of Bound. Jan 23 02:48:48.953: INFO: PersistentVolumeClaim pvc-sk45b found and phase=Bound (4.164467819s) [1mSTEP:[0m checking the PVC [38;5;243m01/23/23 02:48:48.953[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/23/23 02:48:49.013[0m [1mSTEP:[0m checking the PV [38;5;243m01/23/23 02:48:49.066[0m [1mSTEP:[0m deploying the pod [38;5;243m01/23/23 02:48:49.067[0m [1mSTEP:[0m checking that the pods command exits with no error [38;5;243m01/23/23 02:48:49.122[0m Jan 23 02:48:49.122: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-x2wzt" in namespace "azuredisk-2540" to be "Succeeded or Failed" Jan 23 02:48:49.176: INFO: Pod "azuredisk-volume-tester-x2wzt": Phase="Pending", Reason="", readiness=false. 
Elapsed: 53.33445ms Jan 23 02:48:51.232: INFO: Pod "azuredisk-volume-tester-x2wzt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109562308s Jan 23 02:48:53.232: INFO: Pod "azuredisk-volume-tester-x2wzt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109883351s Jan 23 02:48:55.231: INFO: Pod "azuredisk-volume-tester-x2wzt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108922249s Jan 23 02:48:57.231: INFO: Pod "azuredisk-volume-tester-x2wzt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.109014415s Jan 23 02:48:59.236: INFO: Pod "azuredisk-volume-tester-x2wzt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.113319292s Jan 23 02:49:01.231: INFO: Pod "azuredisk-volume-tester-x2wzt": Phase="Pending", Reason="", readiness=false. Elapsed: 12.108596972s Jan 23 02:49:03.231: INFO: Pod "azuredisk-volume-tester-x2wzt": Phase="Pending", Reason="", readiness=false. Elapsed: 14.108913961s Jan 23 02:49:05.231: INFO: Pod "azuredisk-volume-tester-x2wzt": Phase="Pending", Reason="", readiness=false. Elapsed: 16.108687007s Jan 23 02:49:07.232: INFO: Pod "azuredisk-volume-tester-x2wzt": Phase="Pending", Reason="", readiness=false. Elapsed: 18.109572072s Jan 23 02:49:09.231: INFO: Pod "azuredisk-volume-tester-x2wzt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.109219183s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 02:49:09.232[0m Jan 23 02:49:09.232: INFO: Pod "azuredisk-volume-tester-x2wzt" satisfied condition "Succeeded or Failed" Jan 23 02:49:09.232: INFO: deleting Pod "azuredisk-2540"/"azuredisk-volume-tester-x2wzt" Jan 23 02:49:09.290: INFO: Pod azuredisk-volume-tester-x2wzt has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-x2wzt in namespace azuredisk-2540 [38;5;243m01/23/23 02:49:09.29[0m Jan 23 02:49:09.352: INFO: deleting PVC "azuredisk-2540"/"pvc-sk45b" Jan 23 02:49:09.352: INFO: Deleting PersistentVolumeClaim "pvc-sk45b" ... skipping 30 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/23/23 02:49:50.942[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/23/23 02:49:50.942[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 02:49:51.001[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 02:49:51.001[0m [1mSTEP:[0m setting up the pod [38;5;243m01/23/23 02:49:51.059[0m [1mSTEP:[0m deploying the pod [38;5;243m01/23/23 02:49:51.06[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 02:49:51.116[0m Jan 23 02:49:51.116: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-z5zdz" in namespace "azuredisk-4728" to be "Succeeded or Failed" Jan 23 02:49:51.170: INFO: Pod "azuredisk-volume-tester-z5zdz": Phase="Pending", Reason="", readiness=false. Elapsed: 53.946642ms Jan 23 02:49:53.229: INFO: Pod "azuredisk-volume-tester-z5zdz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112823689s Jan 23 02:49:55.227: INFO: Pod "azuredisk-volume-tester-z5zdz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111299431s Jan 23 02:49:57.225: INFO: Pod "azuredisk-volume-tester-z5zdz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10864102s Jan 23 02:49:59.226: INFO: Pod "azuredisk-volume-tester-z5zdz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.109365803s Jan 23 02:50:01.226: INFO: Pod "azuredisk-volume-tester-z5zdz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.109421577s ... skipping 13 lines ... Jan 23 02:50:29.232: INFO: Pod "azuredisk-volume-tester-z5zdz": Phase="Pending", Reason="", readiness=false. 
Elapsed: 38.115805272s Jan 23 02:50:31.226: INFO: Pod "azuredisk-volume-tester-z5zdz": Phase="Pending", Reason="", readiness=false. Elapsed: 40.11000903s Jan 23 02:50:33.226: INFO: Pod "azuredisk-volume-tester-z5zdz": Phase="Pending", Reason="", readiness=false. Elapsed: 42.1097903s Jan 23 02:50:35.227: INFO: Pod "azuredisk-volume-tester-z5zdz": Phase="Pending", Reason="", readiness=false. Elapsed: 44.110558861s Jan 23 02:50:37.226: INFO: Pod "azuredisk-volume-tester-z5zdz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 46.110082286s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 02:50:37.226[0m Jan 23 02:50:37.227: INFO: Pod "azuredisk-volume-tester-z5zdz" satisfied condition "Succeeded or Failed" Jan 23 02:50:37.227: INFO: deleting Pod "azuredisk-4728"/"azuredisk-volume-tester-z5zdz" Jan 23 02:50:37.283: INFO: Pod azuredisk-volume-tester-z5zdz has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-z5zdz in namespace azuredisk-4728 [38;5;243m01/23/23 02:50:37.283[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/23/23 02:50:37.402[0m [1mSTEP:[0m checking the PV [38;5;243m01/23/23 02:50:37.457[0m ... skipping 37 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/23/23 02:49:50.942[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/23/23 02:49:50.942[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 02:49:51.001[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 02:49:51.001[0m [1mSTEP:[0m setting up the pod [38;5;243m01/23/23 02:49:51.059[0m [1mSTEP:[0m deploying the pod [38;5;243m01/23/23 02:49:51.06[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 02:49:51.116[0m Jan 23 02:49:51.116: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-z5zdz" in namespace "azuredisk-4728" to be "Succeeded or Failed" Jan 23 02:49:51.170: INFO: Pod "azuredisk-volume-tester-z5zdz": Phase="Pending", Reason="", readiness=false. Elapsed: 53.946642ms Jan 23 02:49:53.229: INFO: Pod "azuredisk-volume-tester-z5zdz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112823689s Jan 23 02:49:55.227: INFO: Pod "azuredisk-volume-tester-z5zdz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111299431s Jan 23 02:49:57.225: INFO: Pod "azuredisk-volume-tester-z5zdz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10864102s Jan 23 02:49:59.226: INFO: Pod "azuredisk-volume-tester-z5zdz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.109365803s Jan 23 02:50:01.226: INFO: Pod "azuredisk-volume-tester-z5zdz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.109421577s ... skipping 13 lines ... Jan 23 02:50:29.232: INFO: Pod "azuredisk-volume-tester-z5zdz": Phase="Pending", Reason="", readiness=false. Elapsed: 38.115805272s Jan 23 02:50:31.226: INFO: Pod "azuredisk-volume-tester-z5zdz": Phase="Pending", Reason="", readiness=false. Elapsed: 40.11000903s Jan 23 02:50:33.226: INFO: Pod "azuredisk-volume-tester-z5zdz": Phase="Pending", Reason="", readiness=false. Elapsed: 42.1097903s Jan 23 02:50:35.227: INFO: Pod "azuredisk-volume-tester-z5zdz": Phase="Pending", Reason="", readiness=false. Elapsed: 44.110558861s Jan 23 02:50:37.226: INFO: Pod "azuredisk-volume-tester-z5zdz": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 46.110082286s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 02:50:37.226[0m Jan 23 02:50:37.227: INFO: Pod "azuredisk-volume-tester-z5zdz" satisfied condition "Succeeded or Failed" Jan 23 02:50:37.227: INFO: deleting Pod "azuredisk-4728"/"azuredisk-volume-tester-z5zdz" Jan 23 02:50:37.283: INFO: Pod azuredisk-volume-tester-z5zdz has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-z5zdz in namespace azuredisk-4728 [38;5;243m01/23/23 02:50:37.283[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/23/23 02:50:37.402[0m [1mSTEP:[0m checking the PV [38;5;243m01/23/23 02:50:37.457[0m ... skipping 38 lines ... [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 02:51:39.493[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 02:51:39.493[0m [1mSTEP:[0m setting up the pod [38;5;243m01/23/23 02:51:39.553[0m [1mSTEP:[0m deploying the pod [38;5;243m01/23/23 02:51:39.554[0m [1mSTEP:[0m checking that the pod has 'FailedMount' event [38;5;243m01/23/23 02:51:39.609[0m Jan 23 02:52:49.722: INFO: deleting Pod "azuredisk-5466"/"azuredisk-volume-tester-bwrj5" Jan 23 02:52:49.814: INFO: Error getting logs for pod azuredisk-volume-tester-bwrj5: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-bwrj5) [1mSTEP:[0m Deleting pod azuredisk-volume-tester-bwrj5 in namespace azuredisk-5466 [38;5;243m01/23/23 02:52:49.814[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/23/23 02:52:49.928[0m [1mSTEP:[0m checking the PV [38;5;243m01/23/23 02:52:49.983[0m Jan 23 02:52:49.984: INFO: deleting PVC "azuredisk-5466"/"pvc-brz4l" Jan 23 02:52:49.984: INFO: Deleting PersistentVolumeClaim "pvc-brz4l" [1mSTEP:[0m waiting for claim's PV "pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5" to be deleted [38;5;243m01/23/23 02:52:50.039[0m ... skipping 39 lines ... [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 02:51:39.493[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 02:51:39.493[0m [1mSTEP:[0m setting up the pod [38;5;243m01/23/23 02:51:39.553[0m [1mSTEP:[0m deploying the pod [38;5;243m01/23/23 02:51:39.554[0m [1mSTEP:[0m checking that the pod has 'FailedMount' event [38;5;243m01/23/23 02:51:39.609[0m Jan 23 02:52:49.722: INFO: deleting Pod "azuredisk-5466"/"azuredisk-volume-tester-bwrj5" Jan 23 02:52:49.814: INFO: Error getting logs for pod azuredisk-volume-tester-bwrj5: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-bwrj5) [1mSTEP:[0m Deleting pod azuredisk-volume-tester-bwrj5 in namespace azuredisk-5466 [38;5;243m01/23/23 02:52:49.814[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/23/23 02:52:49.928[0m [1mSTEP:[0m checking the PV [38;5;243m01/23/23 02:52:49.983[0m Jan 23 02:52:49.984: INFO: deleting PVC "azuredisk-5466"/"pvc-brz4l" Jan 23 02:52:49.984: INFO: Deleting PersistentVolumeClaim "pvc-brz4l" [1mSTEP:[0m waiting for claim's PV "pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5" to be deleted [38;5;243m01/23/23 02:52:50.039[0m ... skipping 36 lines ... 
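Most of the specs above follow the same pattern: deploy a tester pod and wait up to 15m0s for it to reach "Succeeded or Failed", polling roughly every two seconds. A minimal sketch of that wait with client-go, with the 2s/15m values taken from the timestamps above; the helper name is made up and clientset construction is omitted.

package e2esketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodSucceededOrFailed is a hypothetical helper mirroring the
// "Waiting up to 15m0s for pod ... to be 'Succeeded or Failed'" lines above.
func waitForPodSucceededOrFailed(cs kubernetes.Interface, namespace, name string) error {
	start := time.Now()
	return wait.PollImmediate(2*time.Second, 15*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q, Elapsed: %v\n", name, pod.Status.Phase, time.Since(start))
		// Terminal either way; the caller decides whether Succeeded was the expected outcome.
		return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed, nil
	})
}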
[1mSTEP:[0m setting up the StorageClass [38;5;243m01/23/23 02:54:07.192[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/23/23 02:54:07.192[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 02:54:07.249[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 02:54:07.249[0m [1mSTEP:[0m setting up the pod [38;5;243m01/23/23 02:54:07.308[0m [1mSTEP:[0m deploying the pod [38;5;243m01/23/23 02:54:07.308[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 02:54:07.365[0m Jan 23 02:54:07.365: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-89mzk" in namespace "azuredisk-2790" to be "Succeeded or Failed" Jan 23 02:54:07.420: INFO: Pod "azuredisk-volume-tester-89mzk": Phase="Pending", Reason="", readiness=false. Elapsed: 54.455733ms Jan 23 02:54:09.476: INFO: Pod "azuredisk-volume-tester-89mzk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111158994s Jan 23 02:54:11.478: INFO: Pod "azuredisk-volume-tester-89mzk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112492077s Jan 23 02:54:13.495: INFO: Pod "azuredisk-volume-tester-89mzk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129853451s Jan 23 02:54:15.475: INFO: Pod "azuredisk-volume-tester-89mzk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.110150281s Jan 23 02:54:17.476: INFO: Pod "azuredisk-volume-tester-89mzk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.111247599s ... skipping 27 lines ... Jan 23 02:55:13.475: INFO: Pod "azuredisk-volume-tester-89mzk": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.109523693s Jan 23 02:55:15.476: INFO: Pod "azuredisk-volume-tester-89mzk": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.111381936s Jan 23 02:55:17.474: INFO: Pod "azuredisk-volume-tester-89mzk": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.109009305s Jan 23 02:55:19.475: INFO: Pod "azuredisk-volume-tester-89mzk": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.109821312s Jan 23 02:55:21.474: INFO: Pod "azuredisk-volume-tester-89mzk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m14.109443103s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 02:55:21.475[0m Jan 23 02:55:21.475: INFO: Pod "azuredisk-volume-tester-89mzk" satisfied condition "Succeeded or Failed" Jan 23 02:55:21.475: INFO: deleting Pod "azuredisk-2790"/"azuredisk-volume-tester-89mzk" Jan 23 02:55:21.559: INFO: Pod azuredisk-volume-tester-89mzk has the following logs: e2e-test [1mSTEP:[0m Deleting pod azuredisk-volume-tester-89mzk in namespace azuredisk-2790 [38;5;243m01/23/23 02:55:21.559[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/23/23 02:55:21.676[0m [1mSTEP:[0m checking the PV [38;5;243m01/23/23 02:55:21.73[0m ... skipping 33 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/23/23 02:54:07.192[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/23/23 02:54:07.192[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 02:54:07.249[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 02:54:07.249[0m [1mSTEP:[0m setting up the pod [38;5;243m01/23/23 02:54:07.308[0m [1mSTEP:[0m deploying the pod [38;5;243m01/23/23 02:54:07.308[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 02:54:07.365[0m Jan 23 02:54:07.365: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-89mzk" in namespace "azuredisk-2790" to be "Succeeded or Failed" Jan 23 02:54:07.420: INFO: Pod "azuredisk-volume-tester-89mzk": Phase="Pending", Reason="", readiness=false. 
Elapsed: 54.455733ms Jan 23 02:54:09.476: INFO: Pod "azuredisk-volume-tester-89mzk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111158994s Jan 23 02:54:11.478: INFO: Pod "azuredisk-volume-tester-89mzk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112492077s Jan 23 02:54:13.495: INFO: Pod "azuredisk-volume-tester-89mzk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129853451s Jan 23 02:54:15.475: INFO: Pod "azuredisk-volume-tester-89mzk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.110150281s Jan 23 02:54:17.476: INFO: Pod "azuredisk-volume-tester-89mzk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.111247599s ... skipping 27 lines ... Jan 23 02:55:13.475: INFO: Pod "azuredisk-volume-tester-89mzk": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.109523693s Jan 23 02:55:15.476: INFO: Pod "azuredisk-volume-tester-89mzk": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.111381936s Jan 23 02:55:17.474: INFO: Pod "azuredisk-volume-tester-89mzk": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.109009305s Jan 23 02:55:19.475: INFO: Pod "azuredisk-volume-tester-89mzk": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.109821312s Jan 23 02:55:21.474: INFO: Pod "azuredisk-volume-tester-89mzk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m14.109443103s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 02:55:21.475[0m Jan 23 02:55:21.475: INFO: Pod "azuredisk-volume-tester-89mzk" satisfied condition "Succeeded or Failed" Jan 23 02:55:21.475: INFO: deleting Pod "azuredisk-2790"/"azuredisk-volume-tester-89mzk" Jan 23 02:55:21.559: INFO: Pod azuredisk-volume-tester-89mzk has the following logs: e2e-test [1mSTEP:[0m Deleting pod azuredisk-volume-tester-89mzk in namespace azuredisk-2790 [38;5;243m01/23/23 02:55:21.559[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/23/23 02:55:21.676[0m [1mSTEP:[0m checking the PV [38;5;243m01/23/23 02:55:21.73[0m ... skipping 37 lines ... [1mSTEP:[0m creating volume in external rg azuredisk-csi-driver-test-79590c23-9ac9-11ed-95e5-36a1f62e17f0 [38;5;243m01/23/23 02:56:04.913[0m [1mSTEP:[0m setting up the StorageClass [38;5;243m01/23/23 02:56:04.913[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/23/23 02:56:04.913[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 02:56:04.969[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 02:56:04.969[0m [1mSTEP:[0m deploying the pod [38;5;243m01/23/23 02:56:05.03[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 02:56:05.087[0m Jan 23 02:56:05.087: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-hbvhc" in namespace "azuredisk-5356" to be "Succeeded or Failed" Jan 23 02:56:05.146: INFO: Pod "azuredisk-volume-tester-hbvhc": Phase="Pending", Reason="", readiness=false. Elapsed: 58.965737ms Jan 23 02:56:07.202: INFO: Pod "azuredisk-volume-tester-hbvhc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115383549s Jan 23 02:56:09.201: INFO: Pod "azuredisk-volume-tester-hbvhc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114329407s Jan 23 02:56:11.201: INFO: Pod "azuredisk-volume-tester-hbvhc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11366556s Jan 23 02:56:13.202: INFO: Pod "azuredisk-volume-tester-hbvhc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.114926485s Jan 23 02:56:15.206: INFO: Pod "azuredisk-volume-tester-hbvhc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.118691721s ... skipping 26 lines ... 
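The "creating volume in external rg azuredisk-csi-driver-test-..." spec above provisions through a StorageClass that points the driver at a resource group other than the cluster's. A sketch of such a class built with client-go; skuName and resourceGroup follow the azuredisk-csi-driver StorageClass parameters, but the class name and values here are assumed examples, not what the test actually creates.

package e2esketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createExternalRGStorageClass is a hypothetical helper; "disk.csi.azure.com"
// is the azuredisk CSI provisioner name, the rest are assumed example values.
func createExternalRGStorageClass(cs kubernetes.Interface, externalRG string) (*storagev1.StorageClass, error) {
	reclaim := corev1.PersistentVolumeReclaimDelete
	binding := storagev1.VolumeBindingImmediate
	sc := &storagev1.StorageClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "azuredisk-external-rg"},
		Provisioner: "disk.csi.azure.com",
		Parameters: map[string]string{
			"skuName":       "StandardSSD_LRS", // assumed SKU
			"resourceGroup": externalRG,        // e.g. the azuredisk-csi-driver-test-... group from the log
		},
		ReclaimPolicy:     &reclaim,
		VolumeBindingMode: &binding,
	}
	return cs.StorageV1().StorageClasses().Create(context.TODO(), sc, metav1.CreateOptions{})
}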
Jan 23 02:57:09.201: INFO: Pod "azuredisk-volume-tester-hbvhc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.11421738s Jan 23 02:57:11.200: INFO: Pod "azuredisk-volume-tester-hbvhc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.1130964s Jan 23 02:57:13.200: INFO: Pod "azuredisk-volume-tester-hbvhc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.113216058s Jan 23 02:57:15.203: INFO: Pod "azuredisk-volume-tester-hbvhc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.115807912s Jan 23 02:57:17.202: INFO: Pod "azuredisk-volume-tester-hbvhc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m12.115330191s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 02:57:17.202[0m Jan 23 02:57:17.202: INFO: Pod "azuredisk-volume-tester-hbvhc" satisfied condition "Succeeded or Failed" Jan 23 02:57:17.202: INFO: deleting Pod "azuredisk-5356"/"azuredisk-volume-tester-hbvhc" Jan 23 02:57:17.279: INFO: Pod azuredisk-volume-tester-hbvhc has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-hbvhc in namespace azuredisk-5356 [38;5;243m01/23/23 02:57:17.279[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/23/23 02:57:17.397[0m [1mSTEP:[0m checking the PV [38;5;243m01/23/23 02:57:17.452[0m ... skipping 37 lines ... [1mSTEP:[0m creating volume in external rg azuredisk-csi-driver-test-79590c23-9ac9-11ed-95e5-36a1f62e17f0 [38;5;243m01/23/23 02:56:04.913[0m [1mSTEP:[0m setting up the StorageClass [38;5;243m01/23/23 02:56:04.913[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/23/23 02:56:04.913[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 02:56:04.969[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 02:56:04.969[0m [1mSTEP:[0m deploying the pod [38;5;243m01/23/23 02:56:05.03[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 02:56:05.087[0m Jan 23 02:56:05.087: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-hbvhc" in namespace "azuredisk-5356" to be "Succeeded or Failed" Jan 23 02:56:05.146: INFO: Pod "azuredisk-volume-tester-hbvhc": Phase="Pending", Reason="", readiness=false. Elapsed: 58.965737ms Jan 23 02:56:07.202: INFO: Pod "azuredisk-volume-tester-hbvhc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115383549s Jan 23 02:56:09.201: INFO: Pod "azuredisk-volume-tester-hbvhc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114329407s Jan 23 02:56:11.201: INFO: Pod "azuredisk-volume-tester-hbvhc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11366556s Jan 23 02:56:13.202: INFO: Pod "azuredisk-volume-tester-hbvhc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.114926485s Jan 23 02:56:15.206: INFO: Pod "azuredisk-volume-tester-hbvhc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.118691721s ... skipping 26 lines ... Jan 23 02:57:09.201: INFO: Pod "azuredisk-volume-tester-hbvhc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.11421738s Jan 23 02:57:11.200: INFO: Pod "azuredisk-volume-tester-hbvhc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.1130964s Jan 23 02:57:13.200: INFO: Pod "azuredisk-volume-tester-hbvhc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.113216058s Jan 23 02:57:15.203: INFO: Pod "azuredisk-volume-tester-hbvhc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.115807912s Jan 23 02:57:17.202: INFO: Pod "azuredisk-volume-tester-hbvhc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 1m12.115330191s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 02:57:17.202[0m Jan 23 02:57:17.202: INFO: Pod "azuredisk-volume-tester-hbvhc" satisfied condition "Succeeded or Failed" Jan 23 02:57:17.202: INFO: deleting Pod "azuredisk-5356"/"azuredisk-volume-tester-hbvhc" Jan 23 02:57:17.279: INFO: Pod azuredisk-volume-tester-hbvhc has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-hbvhc in namespace azuredisk-5356 [38;5;243m01/23/23 02:57:17.279[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/23/23 02:57:17.397[0m [1mSTEP:[0m checking the PV [38;5;243m01/23/23 02:57:17.452[0m ... skipping 44 lines ... [1mSTEP:[0m creating volume in external rg azuredisk-csi-driver-test-c90684db-9ac9-11ed-95e5-36a1f62e17f0 [38;5;243m01/23/23 02:58:17.49[0m [1mSTEP:[0m setting up the StorageClass [38;5;243m01/23/23 02:58:17.49[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/23/23 02:58:17.49[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 02:58:17.552[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 02:58:17.552[0m [1mSTEP:[0m deploying the pod [38;5;243m01/23/23 02:58:17.608[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 02:58:17.664[0m Jan 23 02:58:17.664: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-zqmcp" in namespace "azuredisk-5194" to be "Succeeded or Failed" Jan 23 02:58:17.717: INFO: Pod "azuredisk-volume-tester-zqmcp": Phase="Pending", Reason="", readiness=false. Elapsed: 53.712287ms Jan 23 02:58:19.774: INFO: Pod "azuredisk-volume-tester-zqmcp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109953821s Jan 23 02:58:21.774: INFO: Pod "azuredisk-volume-tester-zqmcp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11024075s Jan 23 02:58:23.772: INFO: Pod "azuredisk-volume-tester-zqmcp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108800583s Jan 23 02:58:25.774: INFO: Pod "azuredisk-volume-tester-zqmcp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.110184014s Jan 23 02:58:27.773: INFO: Pod "azuredisk-volume-tester-zqmcp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.109620738s ... skipping 25 lines ... Jan 23 02:59:19.773: INFO: Pod "azuredisk-volume-tester-zqmcp": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.109297283s Jan 23 02:59:21.772: INFO: Pod "azuredisk-volume-tester-zqmcp": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.108373278s Jan 23 02:59:23.773: INFO: Pod "azuredisk-volume-tester-zqmcp": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.10936479s Jan 23 02:59:25.772: INFO: Pod "azuredisk-volume-tester-zqmcp": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.108327865s Jan 23 02:59:27.779: INFO: Pod "azuredisk-volume-tester-zqmcp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m10.115639883s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 02:59:27.779[0m Jan 23 02:59:27.780: INFO: Pod "azuredisk-volume-tester-zqmcp" satisfied condition "Succeeded or Failed" Jan 23 02:59:27.780: INFO: deleting Pod "azuredisk-5194"/"azuredisk-volume-tester-zqmcp" Jan 23 02:59:27.859: INFO: Pod azuredisk-volume-tester-zqmcp has the following logs: hello world hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-zqmcp in namespace azuredisk-5194 [38;5;243m01/23/23 02:59:27.859[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/23/23 02:59:27.978[0m ... skipping 57 lines ... 
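Several specs above end by printing "Pod ... has the following logs: hello world" before teardown (and one earlier spec hits "Error getting logs ... the server rejected our request" instead). Fetching those logs with client-go looks roughly like this; the helper name and empty options are assumptions.

package e2esketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// podLogs is a hypothetical helper mirroring the "has the following logs:" lines above.
func podLogs(cs kubernetes.Interface, namespace, name string) (string, error) {
	raw, err := cs.CoreV1().Pods(namespace).
		GetLogs(name, &corev1.PodLogOptions{}).
		Do(context.TODO()).
		Raw()
	if err != nil {
		return "", err
	}
	return string(raw), nil
}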
[1mSTEP:[0m creating volume in external rg azuredisk-csi-driver-test-c90684db-9ac9-11ed-95e5-36a1f62e17f0 [38;5;243m01/23/23 02:58:17.49[0m [1mSTEP:[0m setting up the StorageClass [38;5;243m01/23/23 02:58:17.49[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/23/23 02:58:17.49[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 02:58:17.552[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 02:58:17.552[0m [1mSTEP:[0m deploying the pod [38;5;243m01/23/23 02:58:17.608[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 02:58:17.664[0m Jan 23 02:58:17.664: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-zqmcp" in namespace "azuredisk-5194" to be "Succeeded or Failed" Jan 23 02:58:17.717: INFO: Pod "azuredisk-volume-tester-zqmcp": Phase="Pending", Reason="", readiness=false. Elapsed: 53.712287ms Jan 23 02:58:19.774: INFO: Pod "azuredisk-volume-tester-zqmcp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109953821s Jan 23 02:58:21.774: INFO: Pod "azuredisk-volume-tester-zqmcp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11024075s Jan 23 02:58:23.772: INFO: Pod "azuredisk-volume-tester-zqmcp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108800583s Jan 23 02:58:25.774: INFO: Pod "azuredisk-volume-tester-zqmcp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.110184014s Jan 23 02:58:27.773: INFO: Pod "azuredisk-volume-tester-zqmcp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.109620738s ... skipping 25 lines ... Jan 23 02:59:19.773: INFO: Pod "azuredisk-volume-tester-zqmcp": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.109297283s Jan 23 02:59:21.772: INFO: Pod "azuredisk-volume-tester-zqmcp": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.108373278s Jan 23 02:59:23.773: INFO: Pod "azuredisk-volume-tester-zqmcp": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.10936479s Jan 23 02:59:25.772: INFO: Pod "azuredisk-volume-tester-zqmcp": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.108327865s Jan 23 02:59:27.779: INFO: Pod "azuredisk-volume-tester-zqmcp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m10.115639883s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 02:59:27.779[0m Jan 23 02:59:27.780: INFO: Pod "azuredisk-volume-tester-zqmcp" satisfied condition "Succeeded or Failed" Jan 23 02:59:27.780: INFO: deleting Pod "azuredisk-5194"/"azuredisk-volume-tester-zqmcp" Jan 23 02:59:27.859: INFO: Pod azuredisk-volume-tester-zqmcp has the following logs: hello world hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-zqmcp in namespace azuredisk-5194 [38;5;243m01/23/23 02:59:27.859[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/23/23 02:59:27.978[0m ... skipping 47 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/23/23 03:01:54.948[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/23/23 03:01:54.948[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 03:01:55.007[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 03:01:55.007[0m [1mSTEP:[0m setting up the pod [38;5;243m01/23/23 03:01:55.067[0m [1mSTEP:[0m deploying the pod [38;5;243m01/23/23 03:01:55.067[0m [1mSTEP:[0m checking that the pod's command exits with an error [38;5;243m01/23/23 03:01:55.124[0m Jan 23 03:01:55.124: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-zccbc" in namespace "azuredisk-1353" to be "Error status code" Jan 23 03:01:55.181: INFO: Pod "azuredisk-volume-tester-zccbc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 56.489106ms Jan 23 03:01:57.238: INFO: Pod "azuredisk-volume-tester-zccbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113587331s Jan 23 03:01:59.248: INFO: Pod "azuredisk-volume-tester-zccbc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123448625s Jan 23 03:02:01.238: INFO: Pod "azuredisk-volume-tester-zccbc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113535233s Jan 23 03:02:03.238: INFO: Pod "azuredisk-volume-tester-zccbc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.114024309s Jan 23 03:02:05.241: INFO: Pod "azuredisk-volume-tester-zccbc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.11665186s ... skipping 24 lines ... Jan 23 03:02:55.240: INFO: Pod "azuredisk-volume-tester-zccbc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.11534298s Jan 23 03:02:57.240: INFO: Pod "azuredisk-volume-tester-zccbc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.115640911s Jan 23 03:02:59.239: INFO: Pod "azuredisk-volume-tester-zccbc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.114561322s Jan 23 03:03:01.239: INFO: Pod "azuredisk-volume-tester-zccbc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.114326403s Jan 23 03:03:03.238: INFO: Pod "azuredisk-volume-tester-zccbc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.113954843s Jan 23 03:03:05.240: INFO: Pod "azuredisk-volume-tester-zccbc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.115076507s Jan 23 03:03:07.238: INFO: Pod "azuredisk-volume-tester-zccbc": Phase="Failed", Reason="", readiness=false. Elapsed: 1m12.113942593s [1mSTEP:[0m Saw pod failure [38;5;243m01/23/23 03:03:07.239[0m Jan 23 03:03:07.239: INFO: Pod "azuredisk-volume-tester-zccbc" satisfied condition "Error status code" [1mSTEP:[0m checking that pod logs contain expected message [38;5;243m01/23/23 03:03:07.239[0m Jan 23 03:03:07.327: INFO: deleting Pod "azuredisk-1353"/"azuredisk-volume-tester-zccbc" Jan 23 03:03:07.386: INFO: Pod azuredisk-volume-tester-zccbc has the following logs: touch: /mnt/test-1/data: Read-only file system [1mSTEP:[0m Deleting pod azuredisk-volume-tester-zccbc in namespace azuredisk-1353 [38;5;243m01/23/23 03:03:07.386[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/23/23 03:03:07.515[0m ... skipping 34 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/23/23 03:01:54.948[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/23/23 03:01:54.948[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 03:01:55.007[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 03:01:55.007[0m [1mSTEP:[0m setting up the pod [38;5;243m01/23/23 03:01:55.067[0m [1mSTEP:[0m deploying the pod [38;5;243m01/23/23 03:01:55.067[0m [1mSTEP:[0m checking that the pod's command exits with an error [38;5;243m01/23/23 03:01:55.124[0m Jan 23 03:01:55.124: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-zccbc" in namespace "azuredisk-1353" to be "Error status code" Jan 23 03:01:55.181: INFO: Pod "azuredisk-volume-tester-zccbc": Phase="Pending", Reason="", readiness=false. Elapsed: 56.489106ms Jan 23 03:01:57.238: INFO: Pod "azuredisk-volume-tester-zccbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113587331s Jan 23 03:01:59.248: INFO: Pod "azuredisk-volume-tester-zccbc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123448625s Jan 23 03:02:01.238: INFO: Pod "azuredisk-volume-tester-zccbc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.113535233s Jan 23 03:02:03.238: INFO: Pod "azuredisk-volume-tester-zccbc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.114024309s Jan 23 03:02:05.241: INFO: Pod "azuredisk-volume-tester-zccbc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.11665186s ... skipping 24 lines ... Jan 23 03:02:55.240: INFO: Pod "azuredisk-volume-tester-zccbc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.11534298s Jan 23 03:02:57.240: INFO: Pod "azuredisk-volume-tester-zccbc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.115640911s Jan 23 03:02:59.239: INFO: Pod "azuredisk-volume-tester-zccbc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.114561322s Jan 23 03:03:01.239: INFO: Pod "azuredisk-volume-tester-zccbc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.114326403s Jan 23 03:03:03.238: INFO: Pod "azuredisk-volume-tester-zccbc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.113954843s Jan 23 03:03:05.240: INFO: Pod "azuredisk-volume-tester-zccbc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.115076507s Jan 23 03:03:07.238: INFO: Pod "azuredisk-volume-tester-zccbc": Phase="Failed", Reason="", readiness=false. Elapsed: 1m12.113942593s [1mSTEP:[0m Saw pod failure [38;5;243m01/23/23 03:03:07.239[0m Jan 23 03:03:07.239: INFO: Pod "azuredisk-volume-tester-zccbc" satisfied condition "Error status code" [1mSTEP:[0m checking that pod logs contain expected message [38;5;243m01/23/23 03:03:07.239[0m Jan 23 03:03:07.327: INFO: deleting Pod "azuredisk-1353"/"azuredisk-volume-tester-zccbc" Jan 23 03:03:07.386: INFO: Pod azuredisk-volume-tester-zccbc has the following logs: touch: /mnt/test-1/data: Read-only file system [1mSTEP:[0m Deleting pod azuredisk-volume-tester-zccbc in namespace azuredisk-1353 [38;5;243m01/23/23 03:03:07.386[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/23/23 03:03:07.515[0m ... skipping 707 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/23/23 03:11:56.252[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/23/23 03:11:56.252[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 03:11:56.312[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 03:11:56.312[0m [1mSTEP:[0m setting up the pod [38;5;243m01/23/23 03:11:56.372[0m [1mSTEP:[0m deploying the pod [38;5;243m01/23/23 03:11:56.372[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 03:11:56.43[0m Jan 23 03:11:56.430: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-dr2mn" in namespace "azuredisk-59" to be "Succeeded or Failed" Jan 23 03:11:56.486: INFO: Pod "azuredisk-volume-tester-dr2mn": Phase="Pending", Reason="", readiness=false. Elapsed: 55.690435ms Jan 23 03:11:58.543: INFO: Pod "azuredisk-volume-tester-dr2mn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11296681s Jan 23 03:12:00.544: INFO: Pod "azuredisk-volume-tester-dr2mn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113493857s Jan 23 03:12:02.543: INFO: Pod "azuredisk-volume-tester-dr2mn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113056521s Jan 23 03:12:04.543: INFO: Pod "azuredisk-volume-tester-dr2mn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.112595613s Jan 23 03:12:06.546: INFO: Pod "azuredisk-volume-tester-dr2mn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.115945068s ... skipping 3 lines ... Jan 23 03:12:14.545: INFO: Pod "azuredisk-volume-tester-dr2mn": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.114432052s Jan 23 03:12:16.543: INFO: Pod "azuredisk-volume-tester-dr2mn": Phase="Pending", Reason="", readiness=false. Elapsed: 20.11340479s Jan 23 03:12:18.544: INFO: Pod "azuredisk-volume-tester-dr2mn": Phase="Pending", Reason="", readiness=false. Elapsed: 22.114371768s Jan 23 03:12:20.543: INFO: Pod "azuredisk-volume-tester-dr2mn": Phase="Pending", Reason="", readiness=false. Elapsed: 24.1129858s Jan 23 03:12:22.544: INFO: Pod "azuredisk-volume-tester-dr2mn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.114213082s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 03:12:22.544[0m Jan 23 03:12:22.545: INFO: Pod "azuredisk-volume-tester-dr2mn" satisfied condition "Succeeded or Failed" [1mSTEP:[0m sleep 5s and then clone volume [38;5;243m01/23/23 03:12:22.545[0m [1mSTEP:[0m cloning existing volume [38;5;243m01/23/23 03:12:27.545[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 03:12:27.658[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 03:12:27.659[0m [1mSTEP:[0m setting up the pod [38;5;243m01/23/23 03:12:27.72[0m [1mSTEP:[0m deploying a second pod with cloned volume [38;5;243m01/23/23 03:12:27.72[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 03:12:27.778[0m Jan 23 03:12:27.778: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-4649f" in namespace "azuredisk-59" to be "Succeeded or Failed" Jan 23 03:12:27.835: INFO: Pod "azuredisk-volume-tester-4649f": Phase="Pending", Reason="", readiness=false. Elapsed: 56.680566ms Jan 23 03:12:29.892: INFO: Pod "azuredisk-volume-tester-4649f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113970158s Jan 23 03:12:31.894: INFO: Pod "azuredisk-volume-tester-4649f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115963393s Jan 23 03:12:33.893: INFO: Pod "azuredisk-volume-tester-4649f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114868595s Jan 23 03:12:35.893: INFO: Pod "azuredisk-volume-tester-4649f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.114403174s Jan 23 03:12:37.892: INFO: Pod "azuredisk-volume-tester-4649f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.113756559s ... skipping 2 lines ... Jan 23 03:12:43.894: INFO: Pod "azuredisk-volume-tester-4649f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.115461486s Jan 23 03:12:45.894: INFO: Pod "azuredisk-volume-tester-4649f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.115568817s Jan 23 03:12:47.894: INFO: Pod "azuredisk-volume-tester-4649f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.115423785s Jan 23 03:12:49.893: INFO: Pod "azuredisk-volume-tester-4649f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.114314799s Jan 23 03:12:51.894: INFO: Pod "azuredisk-volume-tester-4649f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.115958413s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 03:12:51.894[0m Jan 23 03:12:51.894: INFO: Pod "azuredisk-volume-tester-4649f" satisfied condition "Succeeded or Failed" Jan 23 03:12:51.894: INFO: deleting Pod "azuredisk-59"/"azuredisk-volume-tester-4649f" Jan 23 03:12:51.980: INFO: Pod azuredisk-volume-tester-4649f has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-4649f in namespace azuredisk-59 [38;5;243m01/23/23 03:12:51.98[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/23/23 03:12:52.103[0m [1mSTEP:[0m checking the PV [38;5;243m01/23/23 03:12:52.16[0m ... skipping 47 lines ... 
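The repeating "Phase=Pending ... Elapsed" records above come from the suite polling each test pod roughly every 2 seconds until it reaches "Succeeded or Failed" within the 15m0s budget. A minimal sketch of that kind of wait loop, assuming client-go and a hypothetical helper name (waitForPodSuccessOrFailed) rather than the suite's actual testsuites.(*TestPod).WaitForSuccess implementation:

package testsketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodSuccessOrFailed is a hypothetical sketch of the polling pattern
// visible in the log: check the pod phase every 2s for up to 15m, succeed on
// PodSucceeded, and fail fast with the full status if the pod enters PodFailed.
func waitForPodSuccessOrFailed(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 15*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %q failed with status: %+v", name, pod.Status)
		default:
			return false, nil // Pending or Running: keep polling
		}
	})
}

The failed spec later in this log takes exactly that early-exit path: the pod reaches Phase=Failed, so the wait returns an error with the full pod status instead of running out the 15-minute timeout.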
[1mSTEP:[0m setting up the StorageClass [38;5;243m01/23/23 03:11:56.252[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/23/23 03:11:56.252[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 03:11:56.312[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 03:11:56.312[0m [1mSTEP:[0m setting up the pod [38;5;243m01/23/23 03:11:56.372[0m [1mSTEP:[0m deploying the pod [38;5;243m01/23/23 03:11:56.372[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 03:11:56.43[0m Jan 23 03:11:56.430: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-dr2mn" in namespace "azuredisk-59" to be "Succeeded or Failed" Jan 23 03:11:56.486: INFO: Pod "azuredisk-volume-tester-dr2mn": Phase="Pending", Reason="", readiness=false. Elapsed: 55.690435ms Jan 23 03:11:58.543: INFO: Pod "azuredisk-volume-tester-dr2mn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11296681s Jan 23 03:12:00.544: INFO: Pod "azuredisk-volume-tester-dr2mn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113493857s Jan 23 03:12:02.543: INFO: Pod "azuredisk-volume-tester-dr2mn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113056521s Jan 23 03:12:04.543: INFO: Pod "azuredisk-volume-tester-dr2mn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.112595613s Jan 23 03:12:06.546: INFO: Pod "azuredisk-volume-tester-dr2mn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.115945068s ... skipping 3 lines ... Jan 23 03:12:14.545: INFO: Pod "azuredisk-volume-tester-dr2mn": Phase="Pending", Reason="", readiness=false. Elapsed: 18.114432052s Jan 23 03:12:16.543: INFO: Pod "azuredisk-volume-tester-dr2mn": Phase="Pending", Reason="", readiness=false. Elapsed: 20.11340479s Jan 23 03:12:18.544: INFO: Pod "azuredisk-volume-tester-dr2mn": Phase="Pending", Reason="", readiness=false. Elapsed: 22.114371768s Jan 23 03:12:20.543: INFO: Pod "azuredisk-volume-tester-dr2mn": Phase="Pending", Reason="", readiness=false. Elapsed: 24.1129858s Jan 23 03:12:22.544: INFO: Pod "azuredisk-volume-tester-dr2mn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.114213082s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 03:12:22.544[0m Jan 23 03:12:22.545: INFO: Pod "azuredisk-volume-tester-dr2mn" satisfied condition "Succeeded or Failed" [1mSTEP:[0m sleep 5s and then clone volume [38;5;243m01/23/23 03:12:22.545[0m [1mSTEP:[0m cloning existing volume [38;5;243m01/23/23 03:12:27.545[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 03:12:27.658[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 03:12:27.659[0m [1mSTEP:[0m setting up the pod [38;5;243m01/23/23 03:12:27.72[0m [1mSTEP:[0m deploying a second pod with cloned volume [38;5;243m01/23/23 03:12:27.72[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 03:12:27.778[0m Jan 23 03:12:27.778: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-4649f" in namespace "azuredisk-59" to be "Succeeded or Failed" Jan 23 03:12:27.835: INFO: Pod "azuredisk-volume-tester-4649f": Phase="Pending", Reason="", readiness=false. Elapsed: 56.680566ms Jan 23 03:12:29.892: INFO: Pod "azuredisk-volume-tester-4649f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113970158s Jan 23 03:12:31.894: INFO: Pod "azuredisk-volume-tester-4649f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115963393s Jan 23 03:12:33.893: INFO: Pod "azuredisk-volume-tester-4649f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.114868595s Jan 23 03:12:35.893: INFO: Pod "azuredisk-volume-tester-4649f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.114403174s Jan 23 03:12:37.892: INFO: Pod "azuredisk-volume-tester-4649f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.113756559s ... skipping 2 lines ... Jan 23 03:12:43.894: INFO: Pod "azuredisk-volume-tester-4649f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.115461486s Jan 23 03:12:45.894: INFO: Pod "azuredisk-volume-tester-4649f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.115568817s Jan 23 03:12:47.894: INFO: Pod "azuredisk-volume-tester-4649f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.115423785s Jan 23 03:12:49.893: INFO: Pod "azuredisk-volume-tester-4649f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.114314799s Jan 23 03:12:51.894: INFO: Pod "azuredisk-volume-tester-4649f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.115958413s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 03:12:51.894[0m Jan 23 03:12:51.894: INFO: Pod "azuredisk-volume-tester-4649f" satisfied condition "Succeeded or Failed" Jan 23 03:12:51.894: INFO: deleting Pod "azuredisk-59"/"azuredisk-volume-tester-4649f" Jan 23 03:12:51.980: INFO: Pod azuredisk-volume-tester-4649f has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-4649f in namespace azuredisk-59 [38;5;243m01/23/23 03:12:51.98[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/23/23 03:12:52.103[0m [1mSTEP:[0m checking the PV [38;5;243m01/23/23 03:12:52.16[0m ... skipping 46 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/23/23 03:13:44.381[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/23/23 03:13:44.381[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 03:13:44.441[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 03:13:44.441[0m [1mSTEP:[0m setting up the pod [38;5;243m01/23/23 03:13:44.506[0m [1mSTEP:[0m deploying the pod [38;5;243m01/23/23 03:13:44.506[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 03:13:44.564[0m Jan 23 03:13:44.564: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-9pvkx" in namespace "azuredisk-2546" to be "Succeeded or Failed" Jan 23 03:13:44.628: INFO: Pod "azuredisk-volume-tester-9pvkx": Phase="Pending", Reason="", readiness=false. Elapsed: 63.387391ms Jan 23 03:13:46.686: INFO: Pod "azuredisk-volume-tester-9pvkx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121336406s Jan 23 03:13:48.687: INFO: Pod "azuredisk-volume-tester-9pvkx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122496485s Jan 23 03:13:50.686: INFO: Pod "azuredisk-volume-tester-9pvkx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121131427s Jan 23 03:13:52.686: INFO: Pod "azuredisk-volume-tester-9pvkx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.121574691s Jan 23 03:13:54.686: INFO: Pod "azuredisk-volume-tester-9pvkx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.12131096s ... skipping 62 lines ... Jan 23 03:16:00.688: INFO: Pod "azuredisk-volume-tester-9pvkx": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.123584341s Jan 23 03:16:02.689: INFO: Pod "azuredisk-volume-tester-9pvkx": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.124926153s Jan 23 03:16:04.686: INFO: Pod "azuredisk-volume-tester-9pvkx": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m20.121824287s Jan 23 03:16:06.687: INFO: Pod "azuredisk-volume-tester-9pvkx": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.122885324s Jan 23 03:16:08.687: INFO: Pod "azuredisk-volume-tester-9pvkx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2m24.122767221s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 03:16:08.688[0m Jan 23 03:16:08.688: INFO: Pod "azuredisk-volume-tester-9pvkx" satisfied condition "Succeeded or Failed" [1mSTEP:[0m sleep 5s and then clone volume [38;5;243m01/23/23 03:16:08.688[0m [1mSTEP:[0m cloning existing volume [38;5;243m01/23/23 03:16:13.688[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 03:16:13.82[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 03:16:13.82[0m [1mSTEP:[0m setting up the pod [38;5;243m01/23/23 03:16:13.885[0m [1mSTEP:[0m deploying a second pod with cloned volume [38;5;243m01/23/23 03:16:13.885[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 03:16:13.945[0m Jan 23 03:16:13.945: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-gk7gb" in namespace "azuredisk-2546" to be "Succeeded or Failed" Jan 23 03:16:14.002: INFO: Pod "azuredisk-volume-tester-gk7gb": Phase="Pending", Reason="", readiness=false. Elapsed: 57.017194ms Jan 23 03:16:16.061: INFO: Pod "azuredisk-volume-tester-gk7gb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115854404s Jan 23 03:16:18.061: INFO: Pod "azuredisk-volume-tester-gk7gb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116259101s Jan 23 03:16:20.060: INFO: Pod "azuredisk-volume-tester-gk7gb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115401311s Jan 23 03:16:22.063: INFO: Pod "azuredisk-volume-tester-gk7gb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117932383s Jan 23 03:16:24.060: INFO: Pod "azuredisk-volume-tester-gk7gb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.11554078s ... skipping 9 lines ... Jan 23 03:16:44.061: INFO: Pod "azuredisk-volume-tester-gk7gb": Phase="Pending", Reason="", readiness=false. Elapsed: 30.115979096s Jan 23 03:16:46.067: INFO: Pod "azuredisk-volume-tester-gk7gb": Phase="Pending", Reason="", readiness=false. Elapsed: 32.121887702s Jan 23 03:16:48.062: INFO: Pod "azuredisk-volume-tester-gk7gb": Phase="Pending", Reason="", readiness=false. Elapsed: 34.116889702s Jan 23 03:16:50.061: INFO: Pod "azuredisk-volume-tester-gk7gb": Phase="Pending", Reason="", readiness=false. Elapsed: 36.115621147s Jan 23 03:16:52.060: INFO: Pod "azuredisk-volume-tester-gk7gb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.115294572s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 03:16:52.06[0m Jan 23 03:16:52.060: INFO: Pod "azuredisk-volume-tester-gk7gb" satisfied condition "Succeeded or Failed" Jan 23 03:16:52.060: INFO: deleting Pod "azuredisk-2546"/"azuredisk-volume-tester-gk7gb" Jan 23 03:16:52.149: INFO: Pod azuredisk-volume-tester-gk7gb has the following logs: 20.0G [1mSTEP:[0m Deleting pod azuredisk-volume-tester-gk7gb in namespace azuredisk-2546 [38;5;243m01/23/23 03:16:52.149[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/23/23 03:16:52.275[0m [1mSTEP:[0m checking the PV [38;5;243m01/23/23 03:16:52.333[0m ... skipping 47 lines ... 
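The "cloning existing volume" steps above create a second PVC whose dataSource points at the first PVC, so the second pod can read back what the first pod wrote. A minimal sketch of such a clone PVC using the core API types; the names, storage class handling, and the 20Gi request are illustrative only (20Gi merely echoes the 20.0G the cloned volume reports above), not values taken from the test code:

package testsketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// clonePVC returns a PVC that clones an existing PVC in the same namespace.
// Names, storage class, and size are illustrative, not the suite's actual values.
func clonePVC(namespace, sourcePVC, storageClass string) *corev1.PersistentVolumeClaim {
	return &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "pvc-clone-",
			Namespace:    namespace,
		},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &storageClass,
			// DataSource with Kind=PersistentVolumeClaim asks the CSI driver to
			// provision the new volume as a clone of the source volume.
			DataSource: &corev1.TypedLocalObjectReference{
				Kind: "PersistentVolumeClaim",
				Name: sourcePVC,
			},
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("20Gi"),
				},
			},
		},
	}
}

The "sleep 5s and then clone volume" step in the log presumably gives the source pod's data time to settle before the underlying Azure disk is cloned.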
[1mSTEP:[0m setting up the StorageClass [38;5;243m01/23/23 03:13:44.381[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/23/23 03:13:44.381[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 03:13:44.441[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 03:13:44.441[0m [1mSTEP:[0m setting up the pod [38;5;243m01/23/23 03:13:44.506[0m [1mSTEP:[0m deploying the pod [38;5;243m01/23/23 03:13:44.506[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 03:13:44.564[0m Jan 23 03:13:44.564: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-9pvkx" in namespace "azuredisk-2546" to be "Succeeded or Failed" Jan 23 03:13:44.628: INFO: Pod "azuredisk-volume-tester-9pvkx": Phase="Pending", Reason="", readiness=false. Elapsed: 63.387391ms Jan 23 03:13:46.686: INFO: Pod "azuredisk-volume-tester-9pvkx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121336406s Jan 23 03:13:48.687: INFO: Pod "azuredisk-volume-tester-9pvkx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122496485s Jan 23 03:13:50.686: INFO: Pod "azuredisk-volume-tester-9pvkx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121131427s Jan 23 03:13:52.686: INFO: Pod "azuredisk-volume-tester-9pvkx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.121574691s Jan 23 03:13:54.686: INFO: Pod "azuredisk-volume-tester-9pvkx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.12131096s ... skipping 62 lines ... Jan 23 03:16:00.688: INFO: Pod "azuredisk-volume-tester-9pvkx": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.123584341s Jan 23 03:16:02.689: INFO: Pod "azuredisk-volume-tester-9pvkx": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.124926153s Jan 23 03:16:04.686: INFO: Pod "azuredisk-volume-tester-9pvkx": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.121824287s Jan 23 03:16:06.687: INFO: Pod "azuredisk-volume-tester-9pvkx": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.122885324s Jan 23 03:16:08.687: INFO: Pod "azuredisk-volume-tester-9pvkx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2m24.122767221s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 03:16:08.688[0m Jan 23 03:16:08.688: INFO: Pod "azuredisk-volume-tester-9pvkx" satisfied condition "Succeeded or Failed" [1mSTEP:[0m sleep 5s and then clone volume [38;5;243m01/23/23 03:16:08.688[0m [1mSTEP:[0m cloning existing volume [38;5;243m01/23/23 03:16:13.688[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 03:16:13.82[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 03:16:13.82[0m [1mSTEP:[0m setting up the pod [38;5;243m01/23/23 03:16:13.885[0m [1mSTEP:[0m deploying a second pod with cloned volume [38;5;243m01/23/23 03:16:13.885[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 03:16:13.945[0m Jan 23 03:16:13.945: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-gk7gb" in namespace "azuredisk-2546" to be "Succeeded or Failed" Jan 23 03:16:14.002: INFO: Pod "azuredisk-volume-tester-gk7gb": Phase="Pending", Reason="", readiness=false. Elapsed: 57.017194ms Jan 23 03:16:16.061: INFO: Pod "azuredisk-volume-tester-gk7gb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115854404s Jan 23 03:16:18.061: INFO: Pod "azuredisk-volume-tester-gk7gb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116259101s Jan 23 03:16:20.060: INFO: Pod "azuredisk-volume-tester-gk7gb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.115401311s Jan 23 03:16:22.063: INFO: Pod "azuredisk-volume-tester-gk7gb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117932383s Jan 23 03:16:24.060: INFO: Pod "azuredisk-volume-tester-gk7gb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.11554078s ... skipping 9 lines ... Jan 23 03:16:44.061: INFO: Pod "azuredisk-volume-tester-gk7gb": Phase="Pending", Reason="", readiness=false. Elapsed: 30.115979096s Jan 23 03:16:46.067: INFO: Pod "azuredisk-volume-tester-gk7gb": Phase="Pending", Reason="", readiness=false. Elapsed: 32.121887702s Jan 23 03:16:48.062: INFO: Pod "azuredisk-volume-tester-gk7gb": Phase="Pending", Reason="", readiness=false. Elapsed: 34.116889702s Jan 23 03:16:50.061: INFO: Pod "azuredisk-volume-tester-gk7gb": Phase="Pending", Reason="", readiness=false. Elapsed: 36.115621147s Jan 23 03:16:52.060: INFO: Pod "azuredisk-volume-tester-gk7gb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.115294572s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 03:16:52.06[0m Jan 23 03:16:52.060: INFO: Pod "azuredisk-volume-tester-gk7gb" satisfied condition "Succeeded or Failed" Jan 23 03:16:52.060: INFO: deleting Pod "azuredisk-2546"/"azuredisk-volume-tester-gk7gb" Jan 23 03:16:52.149: INFO: Pod azuredisk-volume-tester-gk7gb has the following logs: 20.0G [1mSTEP:[0m Deleting pod azuredisk-volume-tester-gk7gb in namespace azuredisk-2546 [38;5;243m01/23/23 03:16:52.149[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/23/23 03:16:52.275[0m [1mSTEP:[0m checking the PV [38;5;243m01/23/23 03:16:52.333[0m ... skipping 56 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/23/23 03:17:44.853[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/23/23 03:17:44.853[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 03:17:44.911[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 03:17:44.911[0m [1mSTEP:[0m setting up the pod [38;5;243m01/23/23 03:17:44.97[0m [1mSTEP:[0m deploying the pod [38;5;243m01/23/23 03:17:44.971[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 03:17:45.037[0m Jan 23 03:17:45.037: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-g6lt5" in namespace "azuredisk-1598" to be "Succeeded or Failed" Jan 23 03:17:45.110: INFO: Pod "azuredisk-volume-tester-g6lt5": Phase="Pending", Reason="", readiness=false. Elapsed: 73.061388ms Jan 23 03:17:47.169: INFO: Pod "azuredisk-volume-tester-g6lt5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132085541s Jan 23 03:17:49.169: INFO: Pod "azuredisk-volume-tester-g6lt5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132128884s Jan 23 03:17:51.168: INFO: Pod "azuredisk-volume-tester-g6lt5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130967032s Jan 23 03:17:53.169: INFO: Pod "azuredisk-volume-tester-g6lt5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.132733627s Jan 23 03:17:55.169: INFO: Pod "azuredisk-volume-tester-g6lt5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.131818381s ... skipping 9 lines ... Jan 23 03:18:15.169: INFO: Pod "azuredisk-volume-tester-g6lt5": Phase="Pending", Reason="", readiness=false. Elapsed: 30.132187948s Jan 23 03:18:17.168: INFO: Pod "azuredisk-volume-tester-g6lt5": Phase="Pending", Reason="", readiness=false. Elapsed: 32.131438957s Jan 23 03:18:19.168: INFO: Pod "azuredisk-volume-tester-g6lt5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 34.131157876s Jan 23 03:18:21.167: INFO: Pod "azuredisk-volume-tester-g6lt5": Phase="Pending", Reason="", readiness=false. Elapsed: 36.130603349s Jan 23 03:18:23.170: INFO: Pod "azuredisk-volume-tester-g6lt5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.133318331s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 03:18:23.17[0m Jan 23 03:18:23.170: INFO: Pod "azuredisk-volume-tester-g6lt5" satisfied condition "Succeeded or Failed" Jan 23 03:18:23.170: INFO: deleting Pod "azuredisk-1598"/"azuredisk-volume-tester-g6lt5" Jan 23 03:18:23.231: INFO: Pod azuredisk-volume-tester-g6lt5 has the following logs: hello world hello world hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-g6lt5 in namespace azuredisk-1598 [38;5;243m01/23/23 03:18:23.231[0m ... skipping 75 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/23/23 03:17:44.853[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/23/23 03:17:44.853[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 03:17:44.911[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 03:17:44.911[0m [1mSTEP:[0m setting up the pod [38;5;243m01/23/23 03:17:44.97[0m [1mSTEP:[0m deploying the pod [38;5;243m01/23/23 03:17:44.971[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 03:17:45.037[0m Jan 23 03:17:45.037: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-g6lt5" in namespace "azuredisk-1598" to be "Succeeded or Failed" Jan 23 03:17:45.110: INFO: Pod "azuredisk-volume-tester-g6lt5": Phase="Pending", Reason="", readiness=false. Elapsed: 73.061388ms Jan 23 03:17:47.169: INFO: Pod "azuredisk-volume-tester-g6lt5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132085541s Jan 23 03:17:49.169: INFO: Pod "azuredisk-volume-tester-g6lt5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132128884s Jan 23 03:17:51.168: INFO: Pod "azuredisk-volume-tester-g6lt5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130967032s Jan 23 03:17:53.169: INFO: Pod "azuredisk-volume-tester-g6lt5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.132733627s Jan 23 03:17:55.169: INFO: Pod "azuredisk-volume-tester-g6lt5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.131818381s ... skipping 9 lines ... Jan 23 03:18:15.169: INFO: Pod "azuredisk-volume-tester-g6lt5": Phase="Pending", Reason="", readiness=false. Elapsed: 30.132187948s Jan 23 03:18:17.168: INFO: Pod "azuredisk-volume-tester-g6lt5": Phase="Pending", Reason="", readiness=false. Elapsed: 32.131438957s Jan 23 03:18:19.168: INFO: Pod "azuredisk-volume-tester-g6lt5": Phase="Pending", Reason="", readiness=false. Elapsed: 34.131157876s Jan 23 03:18:21.167: INFO: Pod "azuredisk-volume-tester-g6lt5": Phase="Pending", Reason="", readiness=false. Elapsed: 36.130603349s Jan 23 03:18:23.170: INFO: Pod "azuredisk-volume-tester-g6lt5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.133318331s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 03:18:23.17[0m Jan 23 03:18:23.170: INFO: Pod "azuredisk-volume-tester-g6lt5" satisfied condition "Succeeded or Failed" Jan 23 03:18:23.170: INFO: deleting Pod "azuredisk-1598"/"azuredisk-volume-tester-g6lt5" Jan 23 03:18:23.231: INFO: Pod azuredisk-volume-tester-g6lt5 has the following logs: hello world hello world hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-g6lt5 in namespace azuredisk-1598 [38;5;243m01/23/23 03:18:23.231[0m ... skipping 69 lines ... 
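Each spec above starts with "setting up the StorageClass" for the disk.csi.azure.com provisioner (the generated azuredisk-*-disk.csi.azure.com-dynamic-sc-* names show up when the classes are deleted). A minimal sketch of such a class follows; the skuName parameter and the reclaim/binding/expansion settings are illustrative assumptions, not the parameter set the e2e suite actually generates:

package testsketch

import (
	corev1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// azureDiskStorageClass builds a StorageClass for the Azure Disk CSI driver.
// The parameters and policies below are illustrative defaults only.
func azureDiskStorageClass(name string) *storagev1.StorageClass {
	reclaim := corev1.PersistentVolumeReclaimDelete
	binding := storagev1.VolumeBindingWaitForFirstConsumer
	allowExpansion := true
	return &storagev1.StorageClass{
		ObjectMeta:  metav1.ObjectMeta{Name: name},
		Provisioner: "disk.csi.azure.com",
		Parameters: map[string]string{
			"skuName": "StandardSSD_LRS", // illustrative SKU, assumed for the sketch
		},
		ReclaimPolicy:        &reclaim,
		VolumeBindingMode:    &binding,
		AllowVolumeExpansion: &allowExpansion,
	}
}

The log itself does not show the parameters the tests pass, so treat everything except the disk.csi.azure.com provisioner name as an assumption.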
[1mSTEP:[0m setting up the StorageClass [38;5;243m01/23/23 03:19:56.532[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/23/23 03:19:56.532[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 03:19:56.589[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 03:19:56.59[0m [1mSTEP:[0m setting up the pod [38;5;243m01/23/23 03:19:56.647[0m [1mSTEP:[0m deploying the pod [38;5;243m01/23/23 03:19:56.647[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 03:19:56.705[0m Jan 23 03:19:56.706: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-2mjpg" in namespace "azuredisk-3410" to be "Succeeded or Failed" Jan 23 03:19:56.762: INFO: Pod "azuredisk-volume-tester-2mjpg": Phase="Pending", Reason="", readiness=false. Elapsed: 56.917276ms Jan 23 03:19:58.822: INFO: Pod "azuredisk-volume-tester-2mjpg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116535931s Jan 23 03:20:00.822: INFO: Pod "azuredisk-volume-tester-2mjpg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116446062s Jan 23 03:20:02.822: INFO: Pod "azuredisk-volume-tester-2mjpg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116177794s Jan 23 03:20:04.822: INFO: Pod "azuredisk-volume-tester-2mjpg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116432639s Jan 23 03:20:06.821: INFO: Pod "azuredisk-volume-tester-2mjpg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.115931644s ... skipping 25 lines ... Jan 23 03:20:58.823: INFO: Pod "azuredisk-volume-tester-2mjpg": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.117115658s Jan 23 03:21:00.823: INFO: Pod "azuredisk-volume-tester-2mjpg": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.11723196s Jan 23 03:21:02.822: INFO: Pod "azuredisk-volume-tester-2mjpg": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.11635007s Jan 23 03:21:04.823: INFO: Pod "azuredisk-volume-tester-2mjpg": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.117077451s Jan 23 03:21:06.823: INFO: Pod "azuredisk-volume-tester-2mjpg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m10.117761744s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 03:21:06.823[0m Jan 23 03:21:06.823: INFO: Pod "azuredisk-volume-tester-2mjpg" satisfied condition "Succeeded or Failed" Jan 23 03:21:06.823: INFO: deleting Pod "azuredisk-3410"/"azuredisk-volume-tester-2mjpg" Jan 23 03:21:06.910: INFO: Pod azuredisk-volume-tester-2mjpg has the following logs: 100+0 records in 100+0 records out 104857600 bytes (100.0MB) copied, 0.063273 seconds, 1.5GB/s hello world ... skipping 53 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/23/23 03:19:56.532[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/23/23 03:19:56.532[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 03:19:56.589[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 03:19:56.59[0m [1mSTEP:[0m setting up the pod [38;5;243m01/23/23 03:19:56.647[0m [1mSTEP:[0m deploying the pod [38;5;243m01/23/23 03:19:56.647[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 03:19:56.705[0m Jan 23 03:19:56.706: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-2mjpg" in namespace "azuredisk-3410" to be "Succeeded or Failed" Jan 23 03:19:56.762: INFO: Pod "azuredisk-volume-tester-2mjpg": Phase="Pending", Reason="", readiness=false. Elapsed: 56.917276ms Jan 23 03:19:58.822: INFO: Pod "azuredisk-volume-tester-2mjpg": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.116535931s Jan 23 03:20:00.822: INFO: Pod "azuredisk-volume-tester-2mjpg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116446062s Jan 23 03:20:02.822: INFO: Pod "azuredisk-volume-tester-2mjpg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116177794s Jan 23 03:20:04.822: INFO: Pod "azuredisk-volume-tester-2mjpg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116432639s Jan 23 03:20:06.821: INFO: Pod "azuredisk-volume-tester-2mjpg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.115931644s ... skipping 25 lines ... Jan 23 03:20:58.823: INFO: Pod "azuredisk-volume-tester-2mjpg": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.117115658s Jan 23 03:21:00.823: INFO: Pod "azuredisk-volume-tester-2mjpg": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.11723196s Jan 23 03:21:02.822: INFO: Pod "azuredisk-volume-tester-2mjpg": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.11635007s Jan 23 03:21:04.823: INFO: Pod "azuredisk-volume-tester-2mjpg": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.117077451s Jan 23 03:21:06.823: INFO: Pod "azuredisk-volume-tester-2mjpg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m10.117761744s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 03:21:06.823[0m Jan 23 03:21:06.823: INFO: Pod "azuredisk-volume-tester-2mjpg" satisfied condition "Succeeded or Failed" Jan 23 03:21:06.823: INFO: deleting Pod "azuredisk-3410"/"azuredisk-volume-tester-2mjpg" Jan 23 03:21:06.910: INFO: Pod azuredisk-volume-tester-2mjpg has the following logs: 100+0 records in 100+0 records out 104857600 bytes (100.0MB) copied, 0.063273 seconds, 1.5GB/s hello world ... skipping 46 lines ... Jan 23 03:21:59.225: INFO: >>> kubeConfig: /root/tmp1802577493/kubeconfig/kubeconfig.westus2.json [1mSTEP:[0m setting up the StorageClass [38;5;243m01/23/23 03:21:59.226[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/23/23 03:21:59.226[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 03:21:59.285[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 03:21:59.285[0m [1mSTEP:[0m deploying the pod [38;5;243m01/23/23 03:21:59.346[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 03:21:59.404[0m Jan 23 03:21:59.404: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-hm5mj" in namespace "azuredisk-8582" to be "Succeeded or Failed" Jan 23 03:21:59.465: INFO: Pod "azuredisk-volume-tester-hm5mj": Phase="Pending", Reason="", readiness=false. Elapsed: 60.972042ms Jan 23 03:22:01.524: INFO: Pod "azuredisk-volume-tester-hm5mj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119451885s Jan 23 03:22:03.528: INFO: Pod "azuredisk-volume-tester-hm5mj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123471804s Jan 23 03:22:05.524: INFO: Pod "azuredisk-volume-tester-hm5mj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119473949s Jan 23 03:22:07.524: INFO: Pod "azuredisk-volume-tester-hm5mj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120116632s Jan 23 03:22:09.524: INFO: Pod "azuredisk-volume-tester-hm5mj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.120200099s ... skipping 26 lines ... Jan 23 03:23:03.529: INFO: Pod "azuredisk-volume-tester-hm5mj": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.124533861s Jan 23 03:23:05.525: INFO: Pod "azuredisk-volume-tester-hm5mj": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m6.12081497s Jan 23 03:23:07.523: INFO: Pod "azuredisk-volume-tester-hm5mj": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.118711976s Jan 23 03:23:09.525: INFO: Pod "azuredisk-volume-tester-hm5mj": Phase="Running", Reason="", readiness=true. Elapsed: 1m10.120744964s Jan 23 03:23:11.528: INFO: Pod "azuredisk-volume-tester-hm5mj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m12.124276648s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 03:23:11.529[0m Jan 23 03:23:11.529: INFO: Pod "azuredisk-volume-tester-hm5mj" satisfied condition "Succeeded or Failed" [1mSTEP:[0m Checking Prow test resource group [38;5;243m01/23/23 03:23:11.529[0m 2023/01/23 03:23:11 Running in Prow, converting AZURE_CREDENTIALS to AZURE_CREDENTIAL_FILE 2023/01/23 03:23:11 Reading credentials file /etc/azure-cred/credentials [1mSTEP:[0m Prow test resource group: kubetest-oduib2ov [38;5;243m01/23/23 03:23:11.53[0m [1mSTEP:[0m Creating external resource group: azuredisk-csi-driver-test-43d42490-9acd-11ed-95e5-36a1f62e17f0 [38;5;243m01/23/23 03:23:11.53[0m [1mSTEP:[0m creating volume snapshot class with external rg azuredisk-csi-driver-test-43d42490-9acd-11ed-95e5-36a1f62e17f0 [38;5;243m01/23/23 03:23:13.119[0m ... skipping 5 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/23/23 03:23:28.312[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/23/23 03:23:28.313[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 03:23:28.371[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 03:23:28.371[0m [1mSTEP:[0m setting up the pod [38;5;243m01/23/23 03:23:28.429[0m [1mSTEP:[0m deploying a pod with a volume restored from the snapshot [38;5;243m01/23/23 03:23:28.429[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 03:23:28.49[0m Jan 23 03:23:28.491: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-nkq76" in namespace "azuredisk-8582" to be "Succeeded or Failed" Jan 23 03:23:28.548: INFO: Pod "azuredisk-volume-tester-nkq76": Phase="Pending", Reason="", readiness=false. Elapsed: 56.914238ms Jan 23 03:23:30.606: INFO: Pod "azuredisk-volume-tester-nkq76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115547835s Jan 23 03:23:32.611: INFO: Pod "azuredisk-volume-tester-nkq76": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120747173s Jan 23 03:23:34.606: INFO: Pod "azuredisk-volume-tester-nkq76": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115544875s Jan 23 03:23:36.607: INFO: Pod "azuredisk-volume-tester-nkq76": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116088937s Jan 23 03:23:38.607: INFO: Pod "azuredisk-volume-tester-nkq76": Phase="Pending", Reason="", readiness=false. Elapsed: 10.116665111s Jan 23 03:23:40.610: INFO: Pod "azuredisk-volume-tester-nkq76": Phase="Pending", Reason="", readiness=false. Elapsed: 12.119552353s Jan 23 03:23:42.612: INFO: Pod "azuredisk-volume-tester-nkq76": Phase="Pending", Reason="", readiness=false. Elapsed: 14.12157204s Jan 23 03:23:44.609: INFO: Pod "azuredisk-volume-tester-nkq76": Phase="Pending", Reason="", readiness=false. Elapsed: 16.118041372s Jan 23 03:23:46.607: INFO: Pod "azuredisk-volume-tester-nkq76": Phase="Pending", Reason="", readiness=false. Elapsed: 18.115987239s Jan 23 03:23:48.607: INFO: Pod "azuredisk-volume-tester-nkq76": Phase="Pending", Reason="", readiness=false. Elapsed: 20.116379751s Jan 23 03:23:50.605: INFO: Pod "azuredisk-volume-tester-nkq76": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.114647524s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 03:23:50.605[0m Jan 23 03:23:50.605: INFO: Pod "azuredisk-volume-tester-nkq76" satisfied condition "Succeeded or Failed" Jan 23 03:23:50.605: INFO: deleting Pod "azuredisk-8582"/"azuredisk-volume-tester-nkq76" Jan 23 03:23:50.696: INFO: Pod azuredisk-volume-tester-nkq76 has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-nkq76 in namespace azuredisk-8582 [38;5;243m01/23/23 03:23:50.696[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/23/23 03:23:50.817[0m [1mSTEP:[0m checking the PV [38;5;243m01/23/23 03:23:50.875[0m ... skipping 46 lines ... Jan 23 03:21:59.225: INFO: >>> kubeConfig: /root/tmp1802577493/kubeconfig/kubeconfig.westus2.json [1mSTEP:[0m setting up the StorageClass [38;5;243m01/23/23 03:21:59.226[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/23/23 03:21:59.226[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 03:21:59.285[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 03:21:59.285[0m [1mSTEP:[0m deploying the pod [38;5;243m01/23/23 03:21:59.346[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 03:21:59.404[0m Jan 23 03:21:59.404: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-hm5mj" in namespace "azuredisk-8582" to be "Succeeded or Failed" Jan 23 03:21:59.465: INFO: Pod "azuredisk-volume-tester-hm5mj": Phase="Pending", Reason="", readiness=false. Elapsed: 60.972042ms Jan 23 03:22:01.524: INFO: Pod "azuredisk-volume-tester-hm5mj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119451885s Jan 23 03:22:03.528: INFO: Pod "azuredisk-volume-tester-hm5mj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123471804s Jan 23 03:22:05.524: INFO: Pod "azuredisk-volume-tester-hm5mj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119473949s Jan 23 03:22:07.524: INFO: Pod "azuredisk-volume-tester-hm5mj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120116632s Jan 23 03:22:09.524: INFO: Pod "azuredisk-volume-tester-hm5mj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.120200099s ... skipping 26 lines ... Jan 23 03:23:03.529: INFO: Pod "azuredisk-volume-tester-hm5mj": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.124533861s Jan 23 03:23:05.525: INFO: Pod "azuredisk-volume-tester-hm5mj": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.12081497s Jan 23 03:23:07.523: INFO: Pod "azuredisk-volume-tester-hm5mj": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.118711976s Jan 23 03:23:09.525: INFO: Pod "azuredisk-volume-tester-hm5mj": Phase="Running", Reason="", readiness=true. Elapsed: 1m10.120744964s Jan 23 03:23:11.528: INFO: Pod "azuredisk-volume-tester-hm5mj": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 1m12.124276648s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 03:23:11.529[0m Jan 23 03:23:11.529: INFO: Pod "azuredisk-volume-tester-hm5mj" satisfied condition "Succeeded or Failed" [1mSTEP:[0m Checking Prow test resource group [38;5;243m01/23/23 03:23:11.529[0m [1mSTEP:[0m Prow test resource group: kubetest-oduib2ov [38;5;243m01/23/23 03:23:11.53[0m [1mSTEP:[0m Creating external resource group: azuredisk-csi-driver-test-43d42490-9acd-11ed-95e5-36a1f62e17f0 [38;5;243m01/23/23 03:23:11.53[0m [1mSTEP:[0m creating volume snapshot class with external rg azuredisk-csi-driver-test-43d42490-9acd-11ed-95e5-36a1f62e17f0 [38;5;243m01/23/23 03:23:13.119[0m [1mSTEP:[0m setting up the VolumeSnapshotClass [38;5;243m01/23/23 03:23:13.119[0m [1mSTEP:[0m creating a VolumeSnapshotClass [38;5;243m01/23/23 03:23:13.12[0m ... skipping 3 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/23/23 03:23:28.312[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/23/23 03:23:28.313[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 03:23:28.371[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 03:23:28.371[0m [1mSTEP:[0m setting up the pod [38;5;243m01/23/23 03:23:28.429[0m [1mSTEP:[0m deploying a pod with a volume restored from the snapshot [38;5;243m01/23/23 03:23:28.429[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 03:23:28.49[0m Jan 23 03:23:28.491: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-nkq76" in namespace "azuredisk-8582" to be "Succeeded or Failed" Jan 23 03:23:28.548: INFO: Pod "azuredisk-volume-tester-nkq76": Phase="Pending", Reason="", readiness=false. Elapsed: 56.914238ms Jan 23 03:23:30.606: INFO: Pod "azuredisk-volume-tester-nkq76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115547835s Jan 23 03:23:32.611: INFO: Pod "azuredisk-volume-tester-nkq76": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120747173s Jan 23 03:23:34.606: INFO: Pod "azuredisk-volume-tester-nkq76": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115544875s Jan 23 03:23:36.607: INFO: Pod "azuredisk-volume-tester-nkq76": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116088937s Jan 23 03:23:38.607: INFO: Pod "azuredisk-volume-tester-nkq76": Phase="Pending", Reason="", readiness=false. Elapsed: 10.116665111s Jan 23 03:23:40.610: INFO: Pod "azuredisk-volume-tester-nkq76": Phase="Pending", Reason="", readiness=false. Elapsed: 12.119552353s Jan 23 03:23:42.612: INFO: Pod "azuredisk-volume-tester-nkq76": Phase="Pending", Reason="", readiness=false. Elapsed: 14.12157204s Jan 23 03:23:44.609: INFO: Pod "azuredisk-volume-tester-nkq76": Phase="Pending", Reason="", readiness=false. Elapsed: 16.118041372s Jan 23 03:23:46.607: INFO: Pod "azuredisk-volume-tester-nkq76": Phase="Pending", Reason="", readiness=false. Elapsed: 18.115987239s Jan 23 03:23:48.607: INFO: Pod "azuredisk-volume-tester-nkq76": Phase="Pending", Reason="", readiness=false. Elapsed: 20.116379751s Jan 23 03:23:50.605: INFO: Pod "azuredisk-volume-tester-nkq76": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.114647524s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 03:23:50.605[0m Jan 23 03:23:50.605: INFO: Pod "azuredisk-volume-tester-nkq76" satisfied condition "Succeeded or Failed" Jan 23 03:23:50.605: INFO: deleting Pod "azuredisk-8582"/"azuredisk-volume-tester-nkq76" Jan 23 03:23:50.696: INFO: Pod azuredisk-volume-tester-nkq76 has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-nkq76 in namespace azuredisk-8582 [38;5;243m01/23/23 03:23:50.696[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/23/23 03:23:50.817[0m [1mSTEP:[0m checking the PV [38;5;243m01/23/23 03:23:50.875[0m ... skipping 45 lines ... Jan 23 03:25:39.542: INFO: >>> kubeConfig: /root/tmp1802577493/kubeconfig/kubeconfig.westus2.json [1mSTEP:[0m setting up the StorageClass [38;5;243m01/23/23 03:25:39.543[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/23/23 03:25:39.543[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 03:25:39.602[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 03:25:39.602[0m [1mSTEP:[0m deploying the pod [38;5;243m01/23/23 03:25:39.662[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 03:25:39.72[0m Jan 23 03:25:39.721: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-vq8mp" in namespace "azuredisk-7726" to be "Succeeded or Failed" Jan 23 03:25:39.777: INFO: Pod "azuredisk-volume-tester-vq8mp": Phase="Pending", Reason="", readiness=false. Elapsed: 56.66343ms Jan 23 03:25:41.837: INFO: Pod "azuredisk-volume-tester-vq8mp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116511386s Jan 23 03:25:43.836: INFO: Pod "azuredisk-volume-tester-vq8mp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115887894s Jan 23 03:25:45.836: INFO: Pod "azuredisk-volume-tester-vq8mp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115527014s Jan 23 03:25:47.836: INFO: Pod "azuredisk-volume-tester-vq8mp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.115255584s Jan 23 03:25:49.836: INFO: Pod "azuredisk-volume-tester-vq8mp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.115622186s ... skipping 2 lines ... Jan 23 03:25:55.838: INFO: Pod "azuredisk-volume-tester-vq8mp": Phase="Pending", Reason="", readiness=false. Elapsed: 16.117319484s Jan 23 03:25:57.836: INFO: Pod "azuredisk-volume-tester-vq8mp": Phase="Pending", Reason="", readiness=false. Elapsed: 18.115226676s Jan 23 03:25:59.835: INFO: Pod "azuredisk-volume-tester-vq8mp": Phase="Pending", Reason="", readiness=false. Elapsed: 20.1144377s Jan 23 03:26:01.840: INFO: Pod "azuredisk-volume-tester-vq8mp": Phase="Pending", Reason="", readiness=false. Elapsed: 22.119190079s Jan 23 03:26:03.843: INFO: Pod "azuredisk-volume-tester-vq8mp": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.122632139s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 03:26:03.843[0m Jan 23 03:26:03.843: INFO: Pod "azuredisk-volume-tester-vq8mp" satisfied condition "Succeeded or Failed" [1mSTEP:[0m Checking Prow test resource group [38;5;243m01/23/23 03:26:03.844[0m 2023/01/23 03:26:03 Running in Prow, converting AZURE_CREDENTIALS to AZURE_CREDENTIAL_FILE 2023/01/23 03:26:03 Reading credentials file /etc/azure-cred/credentials [1mSTEP:[0m Prow test resource group: kubetest-oduib2ov [38;5;243m01/23/23 03:26:03.844[0m [1mSTEP:[0m Creating external resource group: azuredisk-csi-driver-test-aa893ef9-9acd-11ed-95e5-36a1f62e17f0 [38;5;243m01/23/23 03:26:03.844[0m [1mSTEP:[0m creating volume snapshot class with external rg azuredisk-csi-driver-test-aa893ef9-9acd-11ed-95e5-36a1f62e17f0 [38;5;243m01/23/23 03:26:04.639[0m ... skipping 12 lines ... [1mSTEP:[0m creating a StorageClass [38;5;243m01/23/23 03:26:21.996[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 03:26:22.058[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 03:26:22.058[0m [1mSTEP:[0m setting up the pod [38;5;243m01/23/23 03:26:22.12[0m [1mSTEP:[0m Set pod anti-affinity to make sure two pods are scheduled on different nodes [38;5;243m01/23/23 03:26:22.12[0m [1mSTEP:[0m deploying a pod with a volume restored from the snapshot [38;5;243m01/23/23 03:26:22.12[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 03:26:22.18[0m Jan 23 03:26:22.180: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-94wh8" in namespace "azuredisk-7726" to be "Succeeded or Failed" Jan 23 03:26:22.241: INFO: Pod "azuredisk-volume-tester-94wh8": Phase="Pending", Reason="", readiness=false. Elapsed: 60.347946ms Jan 23 03:26:24.301: INFO: Pod "azuredisk-volume-tester-94wh8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120702964s Jan 23 03:26:26.301: INFO: Pod "azuredisk-volume-tester-94wh8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120299213s Jan 23 03:26:28.300: INFO: Pod "azuredisk-volume-tester-94wh8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11913498s Jan 23 03:26:30.301: INFO: Pod "azuredisk-volume-tester-94wh8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120948959s Jan 23 03:26:32.301: INFO: Pod "azuredisk-volume-tester-94wh8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.120871473s Jan 23 03:26:34.301: INFO: Pod "azuredisk-volume-tester-94wh8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.120141839s Jan 23 03:26:36.300: INFO: Pod "azuredisk-volume-tester-94wh8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.119670118s Jan 23 03:26:38.300: INFO: Pod "azuredisk-volume-tester-94wh8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.119438169s Jan 23 03:26:40.301: INFO: Pod "azuredisk-volume-tester-94wh8": Phase="Pending", Reason="", readiness=false. Elapsed: 18.120922639s Jan 23 03:26:42.300: INFO: Pod "azuredisk-volume-tester-94wh8": Phase="Pending", Reason="", readiness=false. Elapsed: 20.119773533s Jan 23 03:26:44.299: INFO: Pod "azuredisk-volume-tester-94wh8": Phase="Running", Reason="", readiness=true. Elapsed: 22.118439105s Jan 23 03:26:46.299: INFO: Pod "azuredisk-volume-tester-94wh8": Phase="Failed", Reason="", readiness=false. 
Elapsed: 24.118871513s Jan 23 03:26:46.300: INFO: Unexpected error: <*fmt.wrapError | 0xc0009a87a0>: { msg: "error while waiting for pod azuredisk-7726/azuredisk-volume-tester-94wh8 to be Succeeded or Failed: pod \"azuredisk-volume-tester-94wh8\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-23 03:26:25 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-23 03:26:45 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-23 03:26:45 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-23 03:26:25 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.32 PodIP:10.248.0.34 PodIPs:[{IP:10.248.0.34}] StartTime:2023-01-23 03:26:25 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-23 03:26:44 +0000 UTC,FinishedAt:2023-01-23 03:26:44 +0000 UTC,ContainerID:containerd://f0acb947fcaa0a1e4d56ea1dda85c6c926f7314c5d9473d9ff5760fb26f2b5d6,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 ContainerID:containerd://f0acb947fcaa0a1e4d56ea1dda85c6c926f7314c5d9473d9ff5760fb26f2b5d6 Started:0xc000734bc0}] QOSClass:BestEffort EphemeralContainerStatuses:[]}", err: <*errors.errorString | 0xc00052b830>{ s: "pod \"azuredisk-volume-tester-94wh8\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-23 03:26:25 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-23 03:26:45 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-23 03:26:45 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-23 03:26:25 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.32 PodIP:10.248.0.34 PodIPs:[{IP:10.248.0.34}] StartTime:2023-01-23 03:26:25 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-23 03:26:44 +0000 UTC,FinishedAt:2023-01-23 03:26:44 +0000 UTC,ContainerID:containerd://f0acb947fcaa0a1e4d56ea1dda85c6c926f7314c5d9473d9ff5760fb26f2b5d6,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
ContainerID:containerd://f0acb947fcaa0a1e4d56ea1dda85c6c926f7314c5d9473d9ff5760fb26f2b5d6 Started:0xc000734bc0}] QOSClass:BestEffort EphemeralContainerStatuses:[]}", }, } Jan 23 03:26:46.300: FAIL: error while waiting for pod azuredisk-7726/azuredisk-volume-tester-94wh8 to be Succeeded or Failed: pod "azuredisk-volume-tester-94wh8" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-23 03:26:25 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-23 03:26:45 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-23 03:26:45 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-23 03:26:25 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.32 PodIP:10.248.0.34 PodIPs:[{IP:10.248.0.34}] StartTime:2023-01-23 03:26:25 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-23 03:26:44 +0000 UTC,FinishedAt:2023-01-23 03:26:44 +0000 UTC,ContainerID:containerd://f0acb947fcaa0a1e4d56ea1dda85c6c926f7314c5d9473d9ff5760fb26f2b5d6,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 ContainerID:containerd://f0acb947fcaa0a1e4d56ea1dda85c6c926f7314c5d9473d9ff5760fb26f2b5d6 Started:0xc000734bc0}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Full Stack Trace sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*TestPod).WaitForSuccess(0x22517b7?) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:823 +0x5d sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*DynamicallyProvisionedVolumeSnapshotTest).Run(0xc000c35d78, {0x270bd00, 0xc000294b60}, {0x26f6f00, 0xc00079fd60}, 0xc000c22b00?) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/dynamically_provisioned_volume_snapshot_tester.go:142 +0x1358 ... skipping 42 lines ... Jan 23 03:28:54.367: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-7726 to be removed Jan 23 03:28:54.424: INFO: Claim "azuredisk-7726" in namespace "pvc-4pfc9" doesn't exist in the system Jan 23 03:28:54.424: INFO: deleting StorageClass azuredisk-7726-disk.csi.azure.com-dynamic-sc-579sh [1mSTEP:[0m dump namespace information after failure [38;5;243m01/23/23 03:28:54.483[0m [1mSTEP:[0m Destroying namespace "azuredisk-7726" for this suite. 
[38;5;243m01/23/23 03:28:54.483[0m [38;5;243m------------------------------[0m [38;5;9m• [FAILED] [195.893 seconds][0m Dynamic Provisioning [38;5;243m/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:41[0m [multi-az] [38;5;243m/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:48[0m [38;5;9m[1m[It] should create a pod, write to its pv, take a volume snapshot, overwrite data in original pv, create another pod from the snapshot, and read unaltered original data from original pv[disk.csi.azure.com][0m [38;5;243m/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:747[0m ... skipping 7 lines ... Jan 23 03:25:39.542: INFO: >>> kubeConfig: /root/tmp1802577493/kubeconfig/kubeconfig.westus2.json [1mSTEP:[0m setting up the StorageClass [38;5;243m01/23/23 03:25:39.543[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/23/23 03:25:39.543[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 03:25:39.602[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 03:25:39.602[0m [1mSTEP:[0m deploying the pod [38;5;243m01/23/23 03:25:39.662[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 03:25:39.72[0m Jan 23 03:25:39.721: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-vq8mp" in namespace "azuredisk-7726" to be "Succeeded or Failed" Jan 23 03:25:39.777: INFO: Pod "azuredisk-volume-tester-vq8mp": Phase="Pending", Reason="", readiness=false. Elapsed: 56.66343ms Jan 23 03:25:41.837: INFO: Pod "azuredisk-volume-tester-vq8mp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116511386s Jan 23 03:25:43.836: INFO: Pod "azuredisk-volume-tester-vq8mp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115887894s Jan 23 03:25:45.836: INFO: Pod "azuredisk-volume-tester-vq8mp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115527014s Jan 23 03:25:47.836: INFO: Pod "azuredisk-volume-tester-vq8mp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.115255584s Jan 23 03:25:49.836: INFO: Pod "azuredisk-volume-tester-vq8mp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.115622186s ... skipping 2 lines ... Jan 23 03:25:55.838: INFO: Pod "azuredisk-volume-tester-vq8mp": Phase="Pending", Reason="", readiness=false. Elapsed: 16.117319484s Jan 23 03:25:57.836: INFO: Pod "azuredisk-volume-tester-vq8mp": Phase="Pending", Reason="", readiness=false. Elapsed: 18.115226676s Jan 23 03:25:59.835: INFO: Pod "azuredisk-volume-tester-vq8mp": Phase="Pending", Reason="", readiness=false. Elapsed: 20.1144377s Jan 23 03:26:01.840: INFO: Pod "azuredisk-volume-tester-vq8mp": Phase="Pending", Reason="", readiness=false. Elapsed: 22.119190079s Jan 23 03:26:03.843: INFO: Pod "azuredisk-volume-tester-vq8mp": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.122632139s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 03:26:03.843[0m Jan 23 03:26:03.843: INFO: Pod "azuredisk-volume-tester-vq8mp" satisfied condition "Succeeded or Failed" [1mSTEP:[0m Checking Prow test resource group [38;5;243m01/23/23 03:26:03.844[0m [1mSTEP:[0m Prow test resource group: kubetest-oduib2ov [38;5;243m01/23/23 03:26:03.844[0m [1mSTEP:[0m Creating external resource group: azuredisk-csi-driver-test-aa893ef9-9acd-11ed-95e5-36a1f62e17f0 [38;5;243m01/23/23 03:26:03.844[0m [1mSTEP:[0m creating volume snapshot class with external rg azuredisk-csi-driver-test-aa893ef9-9acd-11ed-95e5-36a1f62e17f0 [38;5;243m01/23/23 03:26:04.639[0m [1mSTEP:[0m setting up the VolumeSnapshotClass [38;5;243m01/23/23 03:26:04.639[0m [1mSTEP:[0m creating a VolumeSnapshotClass [38;5;243m01/23/23 03:26:04.639[0m ... skipping 10 lines ... [1mSTEP:[0m creating a StorageClass [38;5;243m01/23/23 03:26:21.996[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 03:26:22.058[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 03:26:22.058[0m [1mSTEP:[0m setting up the pod [38;5;243m01/23/23 03:26:22.12[0m [1mSTEP:[0m Set pod anti-affinity to make sure two pods are scheduled on different nodes [38;5;243m01/23/23 03:26:22.12[0m [1mSTEP:[0m deploying a pod with a volume restored from the snapshot [38;5;243m01/23/23 03:26:22.12[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 03:26:22.18[0m Jan 23 03:26:22.180: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-94wh8" in namespace "azuredisk-7726" to be "Succeeded or Failed" Jan 23 03:26:22.241: INFO: Pod "azuredisk-volume-tester-94wh8": Phase="Pending", Reason="", readiness=false. Elapsed: 60.347946ms Jan 23 03:26:24.301: INFO: Pod "azuredisk-volume-tester-94wh8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120702964s Jan 23 03:26:26.301: INFO: Pod "azuredisk-volume-tester-94wh8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120299213s Jan 23 03:26:28.300: INFO: Pod "azuredisk-volume-tester-94wh8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11913498s Jan 23 03:26:30.301: INFO: Pod "azuredisk-volume-tester-94wh8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120948959s Jan 23 03:26:32.301: INFO: Pod "azuredisk-volume-tester-94wh8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.120871473s Jan 23 03:26:34.301: INFO: Pod "azuredisk-volume-tester-94wh8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.120141839s Jan 23 03:26:36.300: INFO: Pod "azuredisk-volume-tester-94wh8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.119670118s Jan 23 03:26:38.300: INFO: Pod "azuredisk-volume-tester-94wh8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.119438169s Jan 23 03:26:40.301: INFO: Pod "azuredisk-volume-tester-94wh8": Phase="Pending", Reason="", readiness=false. Elapsed: 18.120922639s Jan 23 03:26:42.300: INFO: Pod "azuredisk-volume-tester-94wh8": Phase="Pending", Reason="", readiness=false. Elapsed: 20.119773533s Jan 23 03:26:44.299: INFO: Pod "azuredisk-volume-tester-94wh8": Phase="Running", Reason="", readiness=true. Elapsed: 22.118439105s Jan 23 03:26:46.299: INFO: Pod "azuredisk-volume-tester-94wh8": Phase="Failed", Reason="", readiness=false. 
Elapsed: 24.118871513s Jan 23 03:26:46.300: INFO: Unexpected error: <*fmt.wrapError | 0xc0009a87a0>: { msg: "error while waiting for pod azuredisk-7726/azuredisk-volume-tester-94wh8 to be Succeeded or Failed: pod \"azuredisk-volume-tester-94wh8\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-23 03:26:25 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-23 03:26:45 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-23 03:26:45 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-23 03:26:25 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.32 PodIP:10.248.0.34 PodIPs:[{IP:10.248.0.34}] StartTime:2023-01-23 03:26:25 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-23 03:26:44 +0000 UTC,FinishedAt:2023-01-23 03:26:44 +0000 UTC,ContainerID:containerd://f0acb947fcaa0a1e4d56ea1dda85c6c926f7314c5d9473d9ff5760fb26f2b5d6,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 ContainerID:containerd://f0acb947fcaa0a1e4d56ea1dda85c6c926f7314c5d9473d9ff5760fb26f2b5d6 Started:0xc000734bc0}] QOSClass:BestEffort EphemeralContainerStatuses:[]}", err: <*errors.errorString | 0xc00052b830>{ s: "pod \"azuredisk-volume-tester-94wh8\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-23 03:26:25 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-23 03:26:45 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-23 03:26:45 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-23 03:26:25 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.32 PodIP:10.248.0.34 PodIPs:[{IP:10.248.0.34}] StartTime:2023-01-23 03:26:25 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-23 03:26:44 +0000 UTC,FinishedAt:2023-01-23 03:26:44 +0000 UTC,ContainerID:containerd://f0acb947fcaa0a1e4d56ea1dda85c6c926f7314c5d9473d9ff5760fb26f2b5d6,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
ContainerID:containerd://f0acb947fcaa0a1e4d56ea1dda85c6c926f7314c5d9473d9ff5760fb26f2b5d6 Started:0xc000734bc0}] QOSClass:BestEffort EphemeralContainerStatuses:[]}", }, } Jan 23 03:26:46.300: FAIL: error while waiting for pod azuredisk-7726/azuredisk-volume-tester-94wh8 to be Succeeded or Failed: pod "azuredisk-volume-tester-94wh8" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-23 03:26:25 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-23 03:26:45 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-23 03:26:45 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-23 03:26:25 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.32 PodIP:10.248.0.34 PodIPs:[{IP:10.248.0.34}] StartTime:2023-01-23 03:26:25 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-23 03:26:44 +0000 UTC,FinishedAt:2023-01-23 03:26:44 +0000 UTC,ContainerID:containerd://f0acb947fcaa0a1e4d56ea1dda85c6c926f7314c5d9473d9ff5760fb26f2b5d6,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 ContainerID:containerd://f0acb947fcaa0a1e4d56ea1dda85c6c926f7314c5d9473d9ff5760fb26f2b5d6 Started:0xc000734bc0}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Full Stack Trace sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*TestPod).WaitForSuccess(0x22517b7?) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:823 +0x5d sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites.(*DynamicallyProvisionedVolumeSnapshotTest).Run(0xc000c35d78, {0x270bd00, 0xc000294b60}, {0x26f6f00, 0xc00079fd60}, 0xc000c22b00?) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/dynamically_provisioned_volume_snapshot_tester.go:142 +0x1358 ... skipping 43 lines ... Jan 23 03:28:54.424: INFO: Claim "azuredisk-7726" in namespace "pvc-4pfc9" doesn't exist in the system Jan 23 03:28:54.424: INFO: deleting StorageClass azuredisk-7726-disk.csi.azure.com-dynamic-sc-579sh [1mSTEP:[0m dump namespace information after failure [38;5;243m01/23/23 03:28:54.483[0m [1mSTEP:[0m Destroying namespace "azuredisk-7726" for this suite. 
[38;5;243m01/23/23 03:28:54.483[0m [38;5;243m<< End Captured GinkgoWriter Output[0m [38;5;9mJan 23 03:26:46.300: error while waiting for pod azuredisk-7726/azuredisk-volume-tester-94wh8 to be Succeeded or Failed: pod "azuredisk-volume-tester-94wh8" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-23 03:26:25 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-23 03:26:45 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-23 03:26:45 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-23 03:26:25 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.32 PodIP:10.248.0.34 PodIPs:[{IP:10.248.0.34}] StartTime:2023-01-23 03:26:25 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-23 03:26:44 +0000 UTC,FinishedAt:2023-01-23 03:26:44 +0000 UTC,ContainerID:containerd://f0acb947fcaa0a1e4d56ea1dda85c6c926f7314c5d9473d9ff5760fb26f2b5d6,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/busybox:1.29-4 ImageID:registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 ContainerID:containerd://f0acb947fcaa0a1e4d56ea1dda85c6c926f7314c5d9473d9ff5760fb26f2b5d6 Started:0xc000734bc0}] QOSClass:BestEffort EphemeralContainerStatuses:[]}[0m [38;5;9mIn [1m[It][0m[38;5;9m at: [1m/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:823[0m [1mThere were additional failures detected after the initial failure:[0m [38;5;13m[PANICKED][0m [38;5;13mTest Panicked[0m [38;5;13mIn [1m[DeferCleanup (Each)][0m[38;5;13m at: [1m/usr/local/go/src/runtime/panic.go:260[0m [38;5;13mruntime error: invalid memory address or nil pointer dereference[0m [38;5;13mFull Stack Trace[0m k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:274 +0x5c k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0000203c0) /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:271 +0x179 ... skipping 25 lines ... [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 03:28:55.602[0m [1mSTEP:[0m setting up the StorageClass [38;5;243m01/23/23 03:28:55.661[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/23/23 03:28:55.661[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 03:28:55.719[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 03:28:55.72[0m [1mSTEP:[0m deploying the pod [38;5;243m01/23/23 03:28:55.785[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 03:28:55.844[0m Jan 23 03:28:55.844: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-ph6tk" in namespace "azuredisk-3086" to be "Succeeded or Failed" Jan 23 03:28:55.902: INFO: Pod "azuredisk-volume-tester-ph6tk": Phase="Pending", Reason="", readiness=false. 
Elapsed: 57.950898ms Jan 23 03:28:57.961: INFO: Pod "azuredisk-volume-tester-ph6tk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117275807s Jan 23 03:28:59.960: INFO: Pod "azuredisk-volume-tester-ph6tk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116178232s Jan 23 03:29:01.961: INFO: Pod "azuredisk-volume-tester-ph6tk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117249s Jan 23 03:29:03.961: INFO: Pod "azuredisk-volume-tester-ph6tk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116928562s Jan 23 03:29:05.962: INFO: Pod "azuredisk-volume-tester-ph6tk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.118207479s ... skipping 9 lines ... Jan 23 03:29:25.959: INFO: Pod "azuredisk-volume-tester-ph6tk": Phase="Pending", Reason="", readiness=false. Elapsed: 30.115718552s Jan 23 03:29:27.965: INFO: Pod "azuredisk-volume-tester-ph6tk": Phase="Pending", Reason="", readiness=false. Elapsed: 32.1209417s Jan 23 03:29:29.969: INFO: Pod "azuredisk-volume-tester-ph6tk": Phase="Pending", Reason="", readiness=false. Elapsed: 34.125411977s Jan 23 03:29:31.963: INFO: Pod "azuredisk-volume-tester-ph6tk": Phase="Pending", Reason="", readiness=false. Elapsed: 36.119374648s Jan 23 03:29:33.964: INFO: Pod "azuredisk-volume-tester-ph6tk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.120318501s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 03:29:33.964[0m Jan 23 03:29:33.964: INFO: Pod "azuredisk-volume-tester-ph6tk" satisfied condition "Succeeded or Failed" Jan 23 03:29:33.964: INFO: deleting Pod "azuredisk-3086"/"azuredisk-volume-tester-ph6tk" Jan 23 03:29:34.028: INFO: Pod azuredisk-volume-tester-ph6tk has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-ph6tk in namespace azuredisk-3086 [38;5;243m01/23/23 03:29:34.028[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/23/23 03:29:34.152[0m [1mSTEP:[0m checking the PV [38;5;243m01/23/23 03:29:34.209[0m ... skipping 70 lines ... [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 03:28:55.602[0m [1mSTEP:[0m setting up the StorageClass [38;5;243m01/23/23 03:28:55.661[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/23/23 03:28:55.661[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 03:28:55.719[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 03:28:55.72[0m [1mSTEP:[0m deploying the pod [38;5;243m01/23/23 03:28:55.785[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 03:28:55.844[0m Jan 23 03:28:55.844: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-ph6tk" in namespace "azuredisk-3086" to be "Succeeded or Failed" Jan 23 03:28:55.902: INFO: Pod "azuredisk-volume-tester-ph6tk": Phase="Pending", Reason="", readiness=false. Elapsed: 57.950898ms Jan 23 03:28:57.961: INFO: Pod "azuredisk-volume-tester-ph6tk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117275807s Jan 23 03:28:59.960: INFO: Pod "azuredisk-volume-tester-ph6tk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116178232s Jan 23 03:29:01.961: INFO: Pod "azuredisk-volume-tester-ph6tk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117249s Jan 23 03:29:03.961: INFO: Pod "azuredisk-volume-tester-ph6tk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116928562s Jan 23 03:29:05.962: INFO: Pod "azuredisk-volume-tester-ph6tk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.118207479s ... skipping 9 lines ... 
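The [PANICKED] block above ends in a nil pointer dereference inside the test framework's dumpNamespaceInfo cleanup hook (framework.go:274). Purely as an illustration of the failure mode, and not the framework's or the driver's actual code, a DeferCleanup-style callback of that shape normally guards its optional dependencies before dereferencing them; all names below are hypothetical.

// Illustrative sketch only; not the k8s.io/kubernetes test framework code.
package sketch

import "fmt"

type namespaceDumper struct {
	// clientSet can legitimately be nil if teardown runs before the cluster
	// client was ever created; dereferencing it unguarded is what panics.
	clientSet interface{ DumpAllNamespaceInfo(ns string) }
}

// dumpNamespaceInfo guards the nil dependency so cleanup degrades to a log
// line instead of a runtime panic.
func (f *namespaceDumper) dumpNamespaceInfo(ns string) {
	if f == nil || f.clientSet == nil {
		fmt.Printf("skipping namespace dump for %q: no client available\n", ns)
		return
	}
	f.clientSet.DumpAllNamespaceInfo(ns)
}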
Jan 23 03:29:25.959: INFO: Pod "azuredisk-volume-tester-ph6tk": Phase="Pending", Reason="", readiness=false. Elapsed: 30.115718552s Jan 23 03:29:27.965: INFO: Pod "azuredisk-volume-tester-ph6tk": Phase="Pending", Reason="", readiness=false. Elapsed: 32.1209417s Jan 23 03:29:29.969: INFO: Pod "azuredisk-volume-tester-ph6tk": Phase="Pending", Reason="", readiness=false. Elapsed: 34.125411977s Jan 23 03:29:31.963: INFO: Pod "azuredisk-volume-tester-ph6tk": Phase="Pending", Reason="", readiness=false. Elapsed: 36.119374648s Jan 23 03:29:33.964: INFO: Pod "azuredisk-volume-tester-ph6tk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.120318501s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 03:29:33.964[0m Jan 23 03:29:33.964: INFO: Pod "azuredisk-volume-tester-ph6tk" satisfied condition "Succeeded or Failed" Jan 23 03:29:33.964: INFO: deleting Pod "azuredisk-3086"/"azuredisk-volume-tester-ph6tk" Jan 23 03:29:34.028: INFO: Pod azuredisk-volume-tester-ph6tk has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-ph6tk in namespace azuredisk-3086 [38;5;243m01/23/23 03:29:34.028[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/23/23 03:29:34.152[0m [1mSTEP:[0m checking the PV [38;5;243m01/23/23 03:29:34.209[0m ... skipping 974 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/23/23 03:43:48.122[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/23/23 03:43:48.122[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 03:43:48.181[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 03:43:48.181[0m [1mSTEP:[0m setting up the pod [38;5;243m01/23/23 03:43:48.242[0m [1mSTEP:[0m deploying the pod [38;5;243m01/23/23 03:43:48.242[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 03:43:48.302[0m Jan 23 03:43:48.302: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-6xbnd" in namespace "azuredisk-1092" to be "Succeeded or Failed" Jan 23 03:43:48.359: INFO: Pod "azuredisk-volume-tester-6xbnd": Phase="Pending", Reason="", readiness=false. Elapsed: 57.019071ms Jan 23 03:43:50.418: INFO: Pod "azuredisk-volume-tester-6xbnd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115310391s Jan 23 03:43:52.418: INFO: Pod "azuredisk-volume-tester-6xbnd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116121386s Jan 23 03:43:54.420: INFO: Pod "azuredisk-volume-tester-6xbnd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117651898s Jan 23 03:43:56.419: INFO: Pod "azuredisk-volume-tester-6xbnd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117094081s Jan 23 03:43:58.420: INFO: Pod "azuredisk-volume-tester-6xbnd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.117473551s ... skipping 10 lines ... Jan 23 03:44:20.418: INFO: Pod "azuredisk-volume-tester-6xbnd": Phase="Pending", Reason="", readiness=false. Elapsed: 32.115399967s Jan 23 03:44:22.417: INFO: Pod "azuredisk-volume-tester-6xbnd": Phase="Pending", Reason="", readiness=false. Elapsed: 34.114941422s Jan 23 03:44:24.418: INFO: Pod "azuredisk-volume-tester-6xbnd": Phase="Pending", Reason="", readiness=false. Elapsed: 36.115596518s Jan 23 03:44:26.418: INFO: Pod "azuredisk-volume-tester-6xbnd": Phase="Pending", Reason="", readiness=false. Elapsed: 38.116029012s Jan 23 03:44:28.419: INFO: Pod "azuredisk-volume-tester-6xbnd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 40.116560446s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 03:44:28.419[0m Jan 23 03:44:28.419: INFO: Pod "azuredisk-volume-tester-6xbnd" satisfied condition "Succeeded or Failed" Jan 23 03:44:28.419: INFO: deleting Pod "azuredisk-1092"/"azuredisk-volume-tester-6xbnd" Jan 23 03:44:28.505: INFO: Pod azuredisk-volume-tester-6xbnd has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-6xbnd in namespace azuredisk-1092 [38;5;243m01/23/23 03:44:28.505[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/23/23 03:44:28.625[0m [1mSTEP:[0m checking the PV [38;5;243m01/23/23 03:44:28.683[0m ... skipping 33 lines ... [1mSTEP:[0m setting up the StorageClass [38;5;243m01/23/23 03:43:48.122[0m [1mSTEP:[0m creating a StorageClass [38;5;243m01/23/23 03:43:48.122[0m [1mSTEP:[0m setting up the PVC and PV [38;5;243m01/23/23 03:43:48.181[0m [1mSTEP:[0m creating a PVC [38;5;243m01/23/23 03:43:48.181[0m [1mSTEP:[0m setting up the pod [38;5;243m01/23/23 03:43:48.242[0m [1mSTEP:[0m deploying the pod [38;5;243m01/23/23 03:43:48.242[0m [1mSTEP:[0m checking that the pod's command exits with no error [38;5;243m01/23/23 03:43:48.302[0m Jan 23 03:43:48.302: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-6xbnd" in namespace "azuredisk-1092" to be "Succeeded or Failed" Jan 23 03:43:48.359: INFO: Pod "azuredisk-volume-tester-6xbnd": Phase="Pending", Reason="", readiness=false. Elapsed: 57.019071ms Jan 23 03:43:50.418: INFO: Pod "azuredisk-volume-tester-6xbnd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115310391s Jan 23 03:43:52.418: INFO: Pod "azuredisk-volume-tester-6xbnd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116121386s Jan 23 03:43:54.420: INFO: Pod "azuredisk-volume-tester-6xbnd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117651898s Jan 23 03:43:56.419: INFO: Pod "azuredisk-volume-tester-6xbnd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117094081s Jan 23 03:43:58.420: INFO: Pod "azuredisk-volume-tester-6xbnd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.117473551s ... skipping 10 lines ... Jan 23 03:44:20.418: INFO: Pod "azuredisk-volume-tester-6xbnd": Phase="Pending", Reason="", readiness=false. Elapsed: 32.115399967s Jan 23 03:44:22.417: INFO: Pod "azuredisk-volume-tester-6xbnd": Phase="Pending", Reason="", readiness=false. Elapsed: 34.114941422s Jan 23 03:44:24.418: INFO: Pod "azuredisk-volume-tester-6xbnd": Phase="Pending", Reason="", readiness=false. Elapsed: 36.115596518s Jan 23 03:44:26.418: INFO: Pod "azuredisk-volume-tester-6xbnd": Phase="Pending", Reason="", readiness=false. Elapsed: 38.116029012s Jan 23 03:44:28.419: INFO: Pod "azuredisk-volume-tester-6xbnd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.116560446s [1mSTEP:[0m Saw pod success [38;5;243m01/23/23 03:44:28.419[0m Jan 23 03:44:28.419: INFO: Pod "azuredisk-volume-tester-6xbnd" satisfied condition "Succeeded or Failed" Jan 23 03:44:28.419: INFO: deleting Pod "azuredisk-1092"/"azuredisk-volume-tester-6xbnd" Jan 23 03:44:28.505: INFO: Pod azuredisk-volume-tester-6xbnd has the following logs: hello world [1mSTEP:[0m Deleting pod azuredisk-volume-tester-6xbnd in namespace azuredisk-1092 [38;5;243m01/23/23 03:44:28.505[0m [1mSTEP:[0m validating provisioned PV [38;5;243m01/23/23 03:44:28.625[0m [1mSTEP:[0m checking the PV [38;5;243m01/23/23 03:44:28.683[0m ... skipping 93 lines ... 
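The repeated 'Waiting up to 15m0s for pod ... to be "Succeeded or Failed"' sequences above come from a poll-until-terminal-phase loop: the pod is checked roughly every 2s, Succeeded ends the wait, and Failed is surfaced as an error together with the full status dump (which is what produced the FAIL at testsuites.go:823). A minimal sketch of that pattern, assuming client-go; the helper name is illustrative and this is not the driver's WaitForSuccess implementation.

package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodSuccess polls the pod until it is Succeeded, and fails fast with
// the full status once it reaches Failed, mirroring the log output above.
func waitForPodSuccess(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil
		case corev1.PodFailed:
			// Terminal failure: stop polling and surface the status dump.
			return false, fmt.Errorf("pod %q failed with status: %+v", name, pod.Status)
		default:
			return false, nil // Pending or Running: keep polling
		}
	})
}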
Platform: linux/amd64 Topology Key: topology.disk.csi.azure.com/zone Streaming logs below: I0123 02:47:23.918250 1 azuredisk.go:175] driver userAgent: disk.csi.azure.com/v1.27.0-40b4dae4d1048ba3257f4c772609c4e0a0744e0f e2e-test I0123 02:47:23.918756 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider I0123 02:47:23.941592 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0123 02:47:23.941611 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider I0123 02:47:23.941618 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0123 02:47:23.941643 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully I0123 02:47:23.942372 1 azure_auth.go:253] Using AzurePublicCloud environment I0123 02:47:23.942420 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token I0123 02:47:23.942452 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 ... skipping 25 lines ... I0123 02:47:23.942792 1 azure_blobclient.go:67] Azure BlobClient using API version: 2021-09-01 I0123 02:47:23.942811 1 azure_vmasclient.go:70] Azure AvailabilitySetsClient (read ops) using rate limit config: QPS=6, bucket=20 I0123 02:47:23.942819 1 azure_vmasclient.go:73] Azure AvailabilitySetsClient (write ops) using rate limit config: QPS=100, bucket=1000 I0123 02:47:23.942894 1 azure.go:1007] attach/detach disk operation rate limit QPS: 6.000000, Bucket: 10 I0123 02:47:23.942913 1 azuredisk.go:193] disable UseInstanceMetadata for controller I0123 02:47:23.942923 1 azuredisk.go:205] cloud: AzurePublicCloud, location: westus2, rg: kubetest-oduib2ov, VMType: vmss, PrimaryScaleSetName: k8s-agentpool-27089192-vmss, PrimaryAvailabilitySetName: , DisableAvailabilitySetNodes: false I0123 02:47:23.946009 1 mount_linux.go:287] 'umount /tmp/kubelet-detect-safe-umount48353742' failed with: exit status 32, output: umount: /tmp/kubelet-detect-safe-umount48353742: must be superuser to unmount. I0123 02:47:23.946035 1 mount_linux.go:289] Detected umount with unsafe 'not mounted' behavior I0123 02:47:23.946098 1 driver.go:81] Enabling controller service capability: CREATE_DELETE_VOLUME I0123 02:47:23.946111 1 driver.go:81] Enabling controller service capability: PUBLISH_UNPUBLISH_VOLUME I0123 02:47:23.946118 1 driver.go:81] Enabling controller service capability: CREATE_DELETE_SNAPSHOT I0123 02:47:23.946124 1 driver.go:81] Enabling controller service capability: CLONE_VOLUME I0123 02:47:23.946130 1 driver.go:81] Enabling controller service capability: EXPAND_VOLUME ... skipping 61 lines ... 
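The startup lines above show the config-resolution order: the controller first tries to read the cloud config from the kube-system/azure-cloud-provider secret, and when that secret is absent it falls back to the file named by AZURE_CREDENTIAL_FILE (defaulting to /etc/kubernetes/azure.json). A hedged sketch of that order follows, with an explicit allowEmpty guard for the case where no config can be found; the function, the secret data key, and the flag handling are assumptions, not the driver's real azure_disk_utils helpers.

package sketch

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/klog/v2"
)

const defaultCredFile = "/etc/kubernetes/azure.json"

// readCloudConfig tries the secret first, then the credential file, and only
// tolerates a missing config when allowEmpty is set; callers must then handle
// a nil config instead of dereferencing it.
func readCloudConfig(ctx context.Context, kc kubernetes.Interface, allowEmpty bool) ([]byte, error) {
	if kc != nil {
		sec, err := kc.CoreV1().Secrets("kube-system").Get(ctx, "azure-cloud-provider", metav1.GetOptions{})
		if err == nil {
			return sec.Data["cloud-config"], nil // assumed data key
		}
		klog.Warningf("could not read cloud config from secret kube-system/azure-cloud-provider: %v", err)
	}
	credFile := os.Getenv("AZURE_CREDENTIAL_FILE")
	if credFile == "" {
		credFile = defaultCredFile
	}
	data, err := os.ReadFile(credFile)
	if err != nil {
		if allowEmpty {
			klog.Warningf("no cloud config found, continuing with empty config: %v", err)
			return nil, nil
		}
		return nil, fmt.Errorf("failed to read cloud config from %s: %w", credFile, err)
	}
	return data, nil
}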
Platform: linux/amd64 Topology Key: topology.disk.csi.azure.com/zone Streaming logs below: I0123 02:47:23.699913 1 azuredisk.go:175] driver userAgent: disk.csi.azure.com/v1.27.0-40b4dae4d1048ba3257f4c772609c4e0a0744e0f e2e-test I0123 02:47:23.700504 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider I0123 02:47:23.727260 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0123 02:47:23.727283 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider I0123 02:47:23.727292 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0123 02:47:23.727322 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully I0123 02:47:23.728376 1 azure_auth.go:253] Using AzurePublicCloud environment I0123 02:47:23.728424 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token I0123 02:47:23.728493 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 ... skipping 25 lines ... I0123 02:47:23.728845 1 azure_blobclient.go:67] Azure BlobClient using API version: 2021-09-01 I0123 02:47:23.728865 1 azure_vmasclient.go:70] Azure AvailabilitySetsClient (read ops) using rate limit config: QPS=6, bucket=20 I0123 02:47:23.728873 1 azure_vmasclient.go:73] Azure AvailabilitySetsClient (write ops) using rate limit config: QPS=100, bucket=1000 I0123 02:47:23.728988 1 azure.go:1007] attach/detach disk operation rate limit QPS: 6.000000, Bucket: 10 I0123 02:47:23.729013 1 azuredisk.go:193] disable UseInstanceMetadata for controller I0123 02:47:23.729021 1 azuredisk.go:205] cloud: AzurePublicCloud, location: westus2, rg: kubetest-oduib2ov, VMType: vmss, PrimaryScaleSetName: k8s-agentpool-27089192-vmss, PrimaryAvailabilitySetName: , DisableAvailabilitySetNodes: false I0123 02:47:23.732449 1 mount_linux.go:287] 'umount /tmp/kubelet-detect-safe-umount379141026' failed with: exit status 32, output: umount: /tmp/kubelet-detect-safe-umount379141026: must be superuser to unmount. I0123 02:47:23.732530 1 mount_linux.go:289] Detected umount with unsafe 'not mounted' behavior I0123 02:47:23.732601 1 driver.go:81] Enabling controller service capability: CREATE_DELETE_VOLUME I0123 02:47:23.732611 1 driver.go:81] Enabling controller service capability: PUBLISH_UNPUBLISH_VOLUME I0123 02:47:23.732618 1 driver.go:81] Enabling controller service capability: CREATE_DELETE_SNAPSHOT I0123 02:47:23.732625 1 driver.go:81] Enabling controller service capability: CLONE_VOLUME I0123 02:47:23.732646 1 driver.go:81] Enabling controller service capability: EXPAND_VOLUME ... skipping 68 lines ... I0123 02:47:32.940109 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 24989 I0123 02:47:33.070403 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 32357 I0123 02:47:33.074305 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-e1ab605b-92a7-488b-b67e-dd1064746091. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-e1ab605b-92a7-488b-b67e-dd1064746091 to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). 
I0123 02:47:33.074379 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-e1ab605b-92a7-488b-b67e-dd1064746091 to node k8s-agentpool-27089192-vmss000000 I0123 02:47:33.074432 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-e1ab605b-92a7-488b-b67e-dd1064746091 lun 0 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-e1ab605b-92a7-488b-b67e-dd1064746091:%!s(*provider.AttachDiskOptions=&{None pvc-e1ab605b-92a7-488b-b67e-dd1064746091 false 0})] I0123 02:47:33.074474 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-e1ab605b-92a7-488b-b67e-dd1064746091:%!s(*provider.AttachDiskOptions=&{None pvc-e1ab605b-92a7-488b-b67e-dd1064746091 false 0})]) I0123 02:47:33.892420 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-e1ab605b-92a7-488b-b67e-dd1064746091:%!s(*provider.AttachDiskOptions=&{None pvc-e1ab605b-92a7-488b-b67e-dd1064746091 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 02:47:44.044277 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 02:47:44.044355 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 02:47:44.044404 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-e1ab605b-92a7-488b-b67e-dd1064746091 attached to node k8s-agentpool-27089192-vmss000000. I0123 02:47:44.044422 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-e1ab605b-92a7-488b-b67e-dd1064746091 to node k8s-agentpool-27089192-vmss000000 successfully I0123 02:47:44.044516 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=11.207947401 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-e1ab605b-92a7-488b-b67e-dd1064746091" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 02:47:44.044583 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 31 lines ... 
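Each successful ControllerPublishVolume above finishes with GRPC response: {"publish_context":{"LUN":"0"}}; the LUN is how the node plugin later locates the attached block device on the VM. A minimal sketch of building that response with the CSI spec types; the attach step itself is elided and the helper name is an assumption.

package sketch

import (
	"strconv"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
)

// publishResponse packages the LUN assigned by the attach operation into the
// publish_context map, matching the {"publish_context":{"LUN":"0"}} replies
// seen in the log above.
func publishResponse(lun int32) *csi.ControllerPublishVolumeResponse {
	return &csi.ControllerPublishVolumeResponse{
		PublishContext: map[string]string{
			"LUN": strconv.Itoa(int(lun)),
		},
	}
}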
I0123 02:48:39.467386 1 controllerserver.go:319] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-e1ab605b-92a7-488b-b67e-dd1064746091) returned with <nil> I0123 02:48:39.467422 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=5.285309106 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-e1ab605b-92a7-488b-b67e-dd1064746091" result_code="succeeded" I0123 02:48:39.467438 1 utils.go:84] GRPC response: {} I0123 02:48:44.771054 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume I0123 02:48:44.771076 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.disk.csi.azure.com/zone":"westus2-1","topology.kubernetes.io/zone":"westus2-1"}},{"segments":{"topology.disk.csi.azure.com/zone":"westus2-2","topology.kubernetes.io/zone":"westus2-2"}}],"requisite":[{"segments":{"topology.disk.csi.azure.com/zone":"westus2-1","topology.kubernetes.io/zone":"westus2-1"}},{"segments":{"topology.disk.csi.azure.com/zone":"westus2-2","topology.kubernetes.io/zone":"westus2-2"}}]},"capacity_range":{"required_bytes":10737418240},"name":"pvc-d2c09688-1a9c-4817-99d7-32e04a7dbd4d","parameters":{"csi.storage.k8s.io/pv/name":"pvc-d2c09688-1a9c-4817-99d7-32e04a7dbd4d","csi.storage.k8s.io/pvc/name":"pvc-sk45b","csi.storage.k8s.io/pvc/namespace":"azuredisk-2540","enableAsyncAttach":"false","networkAccessPolicy":"DenyAll","skuName":"Standard_LRS","userAgent":"azuredisk-e2e-test"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]} I0123 02:48:44.771746 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider I0123 02:48:44.779268 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0123 02:48:44.779344 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider I0123 02:48:44.779475 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0123 02:48:44.779595 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully I0123 02:48:44.780109 1 azure_auth.go:253] Using AzurePublicCloud environment I0123 02:48:44.780160 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token I0123 02:48:44.780185 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 ... skipping 37 lines ... 
I0123 02:48:49.136600 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-27089192-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-d2c09688-1a9c-4817-99d7-32e04a7dbd4d","csi.storage.k8s.io/pvc/name":"pvc-sk45b","csi.storage.k8s.io/pvc/namespace":"azuredisk-2540","enableAsyncAttach":"false","enableasyncattach":"false","networkAccessPolicy":"DenyAll","requestedsizegib":"10","skuName":"Standard_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com","userAgent":"azuredisk-e2e-test"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-d2c09688-1a9c-4817-99d7-32e04a7dbd4d"} I0123 02:48:49.162108 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1218 I0123 02:48:49.162914 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-d2c09688-1a9c-4817-99d7-32e04a7dbd4d. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-d2c09688-1a9c-4817-99d7-32e04a7dbd4d to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). I0123 02:48:49.162970 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-d2c09688-1a9c-4817-99d7-32e04a7dbd4d to node k8s-agentpool-27089192-vmss000000 I0123 02:48:49.163100 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-d2c09688-1a9c-4817-99d7-32e04a7dbd4d lun 0 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-d2c09688-1a9c-4817-99d7-32e04a7dbd4d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-d2c09688-1a9c-4817-99d7-32e04a7dbd4d false 0})] I0123 02:48:49.163195 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-d2c09688-1a9c-4817-99d7-32e04a7dbd4d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-d2c09688-1a9c-4817-99d7-32e04a7dbd4d false 0})]) I0123 02:48:49.312540 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-d2c09688-1a9c-4817-99d7-32e04a7dbd4d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-d2c09688-1a9c-4817-99d7-32e04a7dbd4d false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 02:48:59.424645 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 02:48:59.424799 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 02:48:59.424863 1 controllerserver.go:413] Attach operation successful: volume 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-d2c09688-1a9c-4817-99d7-32e04a7dbd4d attached to node k8s-agentpool-27089192-vmss000000. I0123 02:48:59.424887 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-d2c09688-1a9c-4817-99d7-32e04a7dbd4d to node k8s-agentpool-27089192-vmss000000 successfully I0123 02:48:59.424951 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.262135147 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-d2c09688-1a9c-4817-99d7-32e04a7dbd4d" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 02:48:59.424978 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 31 lines ... I0123 02:49:45.629979 1 controllerserver.go:319] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-d2c09688-1a9c-4817-99d7-32e04a7dbd4d) returned with <nil> I0123 02:49:45.630073 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=5.219242055 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-d2c09688-1a9c-4817-99d7-32e04a7dbd4d" result_code="succeeded" I0123 02:49:45.630092 1 utils.go:84] GRPC response: {} I0123 02:49:51.108167 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume I0123 02:49:51.108204 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.disk.csi.azure.com/zone":"westus2-1","topology.kubernetes.io/zone":"westus2-1"}}],"requisite":[{"segments":{"topology.disk.csi.azure.com/zone":"westus2-1","topology.kubernetes.io/zone":"westus2-1"}}]},"capacity_range":{"required_bytes":1099511627776},"name":"pvc-f0985558-082e-4278-9dee-042ad8e1f7c3","parameters":{"csi.storage.k8s.io/pv/name":"pvc-f0985558-082e-4278-9dee-042ad8e1f7c3","csi.storage.k8s.io/pvc/name":"pvc-79xst","csi.storage.k8s.io/pvc/namespace":"azuredisk-4728","enableAsyncAttach":"false","enableBursting":"true","perfProfile":"Basic","skuName":"Premium_LRS","userAgent":"azuredisk-e2e-test"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]} I0123 02:49:51.108977 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider I0123 02:49:51.115969 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0123 02:49:51.116001 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider I0123 02:49:51.116011 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0123 02:49:51.116319 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully I0123 02:49:51.117028 1 azure_auth.go:253] Using 
AzurePublicCloud environment I0123 02:49:51.117105 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token I0123 02:49:51.117343 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 ... skipping 37 lines ... I0123 02:49:54.205488 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-27089192-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-f0985558-082e-4278-9dee-042ad8e1f7c3","csi.storage.k8s.io/pvc/name":"pvc-79xst","csi.storage.k8s.io/pvc/namespace":"azuredisk-4728","enableAsyncAttach":"false","enableBursting":"true","enableasyncattach":"false","perfProfile":"Basic","requestedsizegib":"1024","skuName":"Premium_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com","userAgent":"azuredisk-e2e-test"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-f0985558-082e-4278-9dee-042ad8e1f7c3"} I0123 02:49:54.280430 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1338 I0123 02:49:54.280864 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-f0985558-082e-4278-9dee-042ad8e1f7c3. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-f0985558-082e-4278-9dee-042ad8e1f7c3 to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). I0123 02:49:54.280899 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-f0985558-082e-4278-9dee-042ad8e1f7c3 to node k8s-agentpool-27089192-vmss000000 I0123 02:49:54.280938 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-f0985558-082e-4278-9dee-042ad8e1f7c3 lun 0 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-f0985558-082e-4278-9dee-042ad8e1f7c3:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f0985558-082e-4278-9dee-042ad8e1f7c3 false 0})] I0123 02:49:54.281036 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-f0985558-082e-4278-9dee-042ad8e1f7c3:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f0985558-082e-4278-9dee-042ad8e1f7c3 false 0})]) I0123 02:49:54.402891 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-f0985558-082e-4278-9dee-042ad8e1f7c3:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f0985558-082e-4278-9dee-042ad8e1f7c3 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 02:50:04.518934 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 02:50:04.518974 1 azure_vmss_cache.go:313] 
updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 02:50:04.518999 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-f0985558-082e-4278-9dee-042ad8e1f7c3 attached to node k8s-agentpool-27089192-vmss000000. I0123 02:50:04.519016 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-f0985558-082e-4278-9dee-042ad8e1f7c3 to node k8s-agentpool-27089192-vmss000000 successfully I0123 02:50:04.519063 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.238201805 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-f0985558-082e-4278-9dee-042ad8e1f7c3" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 02:50:04.519084 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 32 lines ... I0123 02:51:42.699933 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-27089192-vmss000000","volume_capability":{"AccessType":{"Mount":{"mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5","csi.storage.k8s.io/pvc/name":"pvc-brz4l","csi.storage.k8s.io/pvc/namespace":"azuredisk-5466","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5"} I0123 02:51:42.723526 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0123 02:51:42.723889 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5 to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). 
I0123 02:51:42.724011 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5 to node k8s-agentpool-27089192-vmss000000 I0123 02:51:42.724132 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5 lun 0 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5 false 0})] I0123 02:51:42.724234 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5 false 0})]) I0123 02:51:42.863987 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 02:52:18.104834 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 02:52:18.104880 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 02:52:18.104905 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5 attached to node k8s-agentpool-27089192-vmss000000. I0123 02:52:18.104921 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5 to node k8s-agentpool-27089192-vmss000000 successfully I0123 02:52:18.104972 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=35.381076149 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 02:52:18.104992 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 53 lines ... 
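The recurring 'GetDiskLun returned: cannot find Lun for disk ...' lines are the idempotency check before every attach: if the disk already has a LUN on the target VM it is treated as attached and the existing LUN is reused, otherwise an attach is initiated. A sketch of that decision under assumed types; this is not the driver's real azure_controller_common API.

package sketch

// vmDiskClient is an assumed abstraction over the VMSS calls seen above.
type vmDiskClient interface {
	// FindLun reports the LUN of diskURI on node, if it is already attached.
	FindLun(node, diskURI string) (int32, bool)
	// AttachDisk attaches diskURI to node and returns the assigned LUN.
	AttachDisk(node, diskURI string) (int32, error)
}

// ensureAttached makes the publish path idempotent: reuse an existing LUN
// when the disk is already on the node, otherwise attach it.
func ensureAttached(vm vmDiskClient, node, diskURI string) (int32, error) {
	if lun, ok := vm.FindLun(node, diskURI); ok {
		return lun, nil
	}
	return vm.AttachDisk(node, diskURI)
}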
I0123 02:54:10.452399 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-27089192-vmss000000","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-16089001-53d1-4708-9415-b3d9a8d37f8a","csi.storage.k8s.io/pvc/name":"pvc-k57mj","csi.storage.k8s.io/pvc/namespace":"azuredisk-2790","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-16089001-53d1-4708-9415-b3d9a8d37f8a"} I0123 02:54:10.609480 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0123 02:54:10.609790 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-16089001-53d1-4708-9415-b3d9a8d37f8a. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-16089001-53d1-4708-9415-b3d9a8d37f8a to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). I0123 02:54:10.609854 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-16089001-53d1-4708-9415-b3d9a8d37f8a to node k8s-agentpool-27089192-vmss000000 I0123 02:54:10.609917 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-16089001-53d1-4708-9415-b3d9a8d37f8a lun 0 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-16089001-53d1-4708-9415-b3d9a8d37f8a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-16089001-53d1-4708-9415-b3d9a8d37f8a false 0})] I0123 02:54:10.609963 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-16089001-53d1-4708-9415-b3d9a8d37f8a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-16089001-53d1-4708-9415-b3d9a8d37f8a false 0})]) I0123 02:54:10.807989 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-16089001-53d1-4708-9415-b3d9a8d37f8a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-16089001-53d1-4708-9415-b3d9a8d37f8a false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 02:54:51.112738 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 02:54:51.112798 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 02:54:51.112838 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-16089001-53d1-4708-9415-b3d9a8d37f8a attached to node 
k8s-agentpool-27089192-vmss000000. I0123 02:54:51.112855 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-16089001-53d1-4708-9415-b3d9a8d37f8a to node k8s-agentpool-27089192-vmss000000 successfully I0123 02:54:51.112901 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=40.503113453 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-16089001-53d1-4708-9415-b3d9a8d37f8a" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 02:54:51.112926 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 40 lines ... I0123 02:56:08.109946 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-27089192-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-a7fc0db2-a3eb-40e8-bbe9-b6fce634d931","csi.storage.k8s.io/pvc/name":"pvc-62btq","csi.storage.k8s.io/pvc/namespace":"azuredisk-5356","requestedsizegib":"10","resourceGroup":"azuredisk-csi-driver-test-79590c23-9ac9-11ed-95e5-36a1f62e17f0","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-79590c23-9ac9-11ed-95e5-36a1f62e17f0/providers/Microsoft.Compute/disks/pvc-a7fc0db2-a3eb-40e8-bbe9-b6fce634d931"} I0123 02:56:08.133171 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1238 I0123 02:56:08.133690 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-a7fc0db2-a3eb-40e8-bbe9-b6fce634d931. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-79590c23-9ac9-11ed-95e5-36a1f62e17f0/providers/Microsoft.Compute/disks/pvc-a7fc0db2-a3eb-40e8-bbe9-b6fce634d931 to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). 
I0123 02:56:08.133727 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-79590c23-9ac9-11ed-95e5-36a1f62e17f0/providers/Microsoft.Compute/disks/pvc-a7fc0db2-a3eb-40e8-bbe9-b6fce634d931 to node k8s-agentpool-27089192-vmss000000 I0123 02:56:08.133773 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-79590c23-9ac9-11ed-95e5-36a1f62e17f0/providers/Microsoft.Compute/disks/pvc-a7fc0db2-a3eb-40e8-bbe9-b6fce634d931 lun 0 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-79590c23-9ac9-11ed-95e5-36a1f62e17f0/providers/microsoft.compute/disks/pvc-a7fc0db2-a3eb-40e8-bbe9-b6fce634d931:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a7fc0db2-a3eb-40e8-bbe9-b6fce634d931 false 0})] I0123 02:56:08.133820 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-79590c23-9ac9-11ed-95e5-36a1f62e17f0/providers/microsoft.compute/disks/pvc-a7fc0db2-a3eb-40e8-bbe9-b6fce634d931:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a7fc0db2-a3eb-40e8-bbe9-b6fce634d931 false 0})]) I0123 02:56:08.307618 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-79590c23-9ac9-11ed-95e5-36a1f62e17f0/providers/microsoft.compute/disks/pvc-a7fc0db2-a3eb-40e8-bbe9-b6fce634d931:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a7fc0db2-a3eb-40e8-bbe9-b6fce634d931 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 02:56:48.546401 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 02:56:48.546437 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 02:56:48.546458 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-79590c23-9ac9-11ed-95e5-36a1f62e17f0/providers/Microsoft.Compute/disks/pvc-a7fc0db2-a3eb-40e8-bbe9-b6fce634d931 attached to node k8s-agentpool-27089192-vmss000000. 
I0123 02:56:48.546474 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-79590c23-9ac9-11ed-95e5-36a1f62e17f0/providers/Microsoft.Compute/disks/pvc-a7fc0db2-a3eb-40e8-bbe9-b6fce634d931 to node k8s-agentpool-27089192-vmss000000 successfully I0123 02:56:48.546516 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=40.41283062 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-79590c23-9ac9-11ed-95e5-36a1f62e17f0/providers/Microsoft.Compute/disks/pvc-a7fc0db2-a3eb-40e8-bbe9-b6fce634d931" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 02:56:48.546541 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 61 lines ... I0123 02:58:20.769560 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-c8759a9d-9ac9-11ed-95e5-36a1f62e17f0/providers/Microsoft.Compute/disks/pvc-c2f599bc-dad2-47a4-8663-d0da106d3c1c to node k8s-agentpool-27089192-vmss000000 I0123 02:58:20.769678 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-c8759a9d-9ac9-11ed-95e5-36a1f62e17f0/providers/Microsoft.Compute/disks/pvc-c2f599bc-dad2-47a4-8663-d0da106d3c1c lun 0 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-c8759a9d-9ac9-11ed-95e5-36a1f62e17f0/providers/microsoft.compute/disks/pvc-c2f599bc-dad2-47a4-8663-d0da106d3c1c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c2f599bc-dad2-47a4-8663-d0da106d3c1c false 0})] I0123 02:58:20.769735 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-c8759a9d-9ac9-11ed-95e5-36a1f62e17f0/providers/microsoft.compute/disks/pvc-c2f599bc-dad2-47a4-8663-d0da106d3c1c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c2f599bc-dad2-47a4-8663-d0da106d3c1c false 0})]) I0123 02:58:20.774841 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1238 I0123 02:58:20.775152 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-8968fcc0-c09f-4304-beb1-f340369a2261. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-c90684db-9ac9-11ed-95e5-36a1f62e17f0/providers/Microsoft.Compute/disks/pvc-8968fcc0-c09f-4304-beb1-f340369a2261 to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). 
I0123 02:58:20.775184 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-c90684db-9ac9-11ed-95e5-36a1f62e17f0/providers/Microsoft.Compute/disks/pvc-8968fcc0-c09f-4304-beb1-f340369a2261 to node k8s-agentpool-27089192-vmss000000 I0123 02:58:20.916356 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-c8759a9d-9ac9-11ed-95e5-36a1f62e17f0/providers/microsoft.compute/disks/pvc-c2f599bc-dad2-47a4-8663-d0da106d3c1c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c2f599bc-dad2-47a4-8663-d0da106d3c1c false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 02:58:56.146836 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 02:58:56.146879 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 02:58:56.146914 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-c8759a9d-9ac9-11ed-95e5-36a1f62e17f0/providers/Microsoft.Compute/disks/pvc-c2f599bc-dad2-47a4-8663-d0da106d3c1c attached to node k8s-agentpool-27089192-vmss000000. I0123 02:58:56.146930 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-c8759a9d-9ac9-11ed-95e5-36a1f62e17f0/providers/Microsoft.Compute/disks/pvc-c2f599bc-dad2-47a4-8663-d0da106d3c1c to node k8s-agentpool-27089192-vmss000000 successfully I0123 02:58:56.146977 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=35.377439037 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-c8759a9d-9ac9-11ed-95e5-36a1f62e17f0/providers/Microsoft.Compute/disks/pvc-c2f599bc-dad2-47a4-8663-d0da106d3c1c" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 02:58:56.146999 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0123 02:58:56.147132 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-c90684db-9ac9-11ed-95e5-36a1f62e17f0/providers/Microsoft.Compute/disks/pvc-8968fcc0-c09f-4304-beb1-f340369a2261 lun 1 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-c90684db-9ac9-11ed-95e5-36a1f62e17f0/providers/microsoft.compute/disks/pvc-8968fcc0-c09f-4304-beb1-f340369a2261:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8968fcc0-c09f-4304-beb1-f340369a2261 false 1})] I0123 02:58:56.147182 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk 
list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-c90684db-9ac9-11ed-95e5-36a1f62e17f0/providers/microsoft.compute/disks/pvc-8968fcc0-c09f-4304-beb1-f340369a2261:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8968fcc0-c09f-4304-beb1-f340369a2261 false 1})]) I0123 02:58:56.274759 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/azuredisk-csi-driver-test-c90684db-9ac9-11ed-95e5-36a1f62e17f0/providers/microsoft.compute/disks/pvc-8968fcc0-c09f-4304-beb1-f340369a2261:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8968fcc0-c09f-4304-beb1-f340369a2261 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 02:59:06.386586 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 02:59:06.386626 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 02:59:06.386650 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-c90684db-9ac9-11ed-95e5-36a1f62e17f0/providers/Microsoft.Compute/disks/pvc-8968fcc0-c09f-4304-beb1-f340369a2261 attached to node k8s-agentpool-27089192-vmss000000. I0123 02:59:06.386684 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-c90684db-9ac9-11ed-95e5-36a1f62e17f0/providers/Microsoft.Compute/disks/pvc-8968fcc0-c09f-4304-beb1-f340369a2261 to node k8s-agentpool-27089192-vmss000000 successfully I0123 02:59:06.386732 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=45.611557122 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-c90684db-9ac9-11ed-95e5-36a1f62e17f0/providers/Microsoft.Compute/disks/pvc-8968fcc0-c09f-4304-beb1-f340369a2261" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 02:59:06.386770 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} ... skipping 66 lines ... I0123 03:01:58.227579 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-27089192-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-54deb583-e770-4bd7-a0c0-10c2d618d69f","csi.storage.k8s.io/pvc/name":"pvc-tq86g","csi.storage.k8s.io/pvc/namespace":"azuredisk-1353","requestedsizegib":"10","skuName":"Premium_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-54deb583-e770-4bd7-a0c0-10c2d618d69f"} I0123 03:01:58.248679 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1206 I0123 03:01:58.249074 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-54deb583-e770-4bd7-a0c0-10c2d618d69f. 
Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-54deb583-e770-4bd7-a0c0-10c2d618d69f to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). I0123 03:01:58.249104 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-54deb583-e770-4bd7-a0c0-10c2d618d69f to node k8s-agentpool-27089192-vmss000000 I0123 03:01:58.249182 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-54deb583-e770-4bd7-a0c0-10c2d618d69f lun 0 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-54deb583-e770-4bd7-a0c0-10c2d618d69f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-54deb583-e770-4bd7-a0c0-10c2d618d69f false 0})] I0123 03:01:58.249308 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-54deb583-e770-4bd7-a0c0-10c2d618d69f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-54deb583-e770-4bd7-a0c0-10c2d618d69f false 0})]) I0123 03:01:58.403528 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-54deb583-e770-4bd7-a0c0-10c2d618d69f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-54deb583-e770-4bd7-a0c0-10c2d618d69f false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 03:02:33.715894 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 03:02:33.715941 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 03:02:33.715963 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-54deb583-e770-4bd7-a0c0-10c2d618d69f attached to node k8s-agentpool-27089192-vmss000000. I0123 03:02:33.716163 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-54deb583-e770-4bd7-a0c0-10c2d618d69f to node k8s-agentpool-27089192-vmss000000 successfully I0123 03:02:33.716219 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=35.467132687 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-54deb583-e770-4bd7-a0c0-10c2d618d69f" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 03:02:33.716235 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 47 lines ... 
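The request/response pairs above are ordinary CSI ControllerPublishVolume calls: the caller sends the disk URI as volume_id, the VMSS instance as node_id and a volume capability, and the driver answers with a publish_context carrying the LUN it attached the disk at. A minimal sketch of that exchange, assuming a hypothetical controller endpoint at unix:///csi/csi.sock and placeholder IDs (in the real run the csi-attacher sidecar issues this call, not a hand-written client):

// Sketch of the ControllerPublishVolume exchange recorded above.
// Assumptions: the controller listens on unix:///csi/csi.sock and the
// volume/node IDs below are placeholders, not values from this run.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	conn, err := grpc.Dial("unix:///csi/csi.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := csi.NewControllerClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	// access_mode 7 in the logged requests is SINGLE_NODE_MULTI_WRITER.
	resp, err := client.ControllerPublishVolume(ctx, &csi.ControllerPublishVolumeRequest{
		VolumeId: "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/disks/<pvc-disk>",
		NodeId:   "k8s-agentpool-27089192-vmss000000",
		VolumeCapability: &csi.VolumeCapability{
			AccessType: &csi.VolumeCapability_Mount{Mount: &csi.VolumeCapability_MountVolume{}},
			AccessMode: &csi.VolumeCapability_AccessMode{
				Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_MULTI_WRITER,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	// The driver reports the LUN it attached the disk at, e.g. {"LUN":"0"}.
	fmt.Println("publish_context:", resp.GetPublishContext())
}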
I0123 03:03:52.499358 1 azure_vmss_cache.go:327] refresh the cache of NonVmssUniformNodesCache in rg map[kubetest-oduib2ov:{}] I0123 03:03:52.520199 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 12 I0123 03:03:52.520426 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-3515a064-b1e8-407d-a986-b4b240d3a187. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3515a064-b1e8-407d-a986-b4b240d3a187 to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). I0123 03:03:52.520503 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3515a064-b1e8-407d-a986-b4b240d3a187 to node k8s-agentpool-27089192-vmss000000 I0123 03:03:52.520550 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3515a064-b1e8-407d-a986-b4b240d3a187 lun 0 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-3515a064-b1e8-407d-a986-b4b240d3a187:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-3515a064-b1e8-407d-a986-b4b240d3a187 false 0})] I0123 03:03:52.520664 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-3515a064-b1e8-407d-a986-b4b240d3a187:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-3515a064-b1e8-407d-a986-b4b240d3a187 false 0})]) I0123 03:03:52.703942 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-3515a064-b1e8-407d-a986-b4b240d3a187:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-3515a064-b1e8-407d-a986-b4b240d3a187 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 03:04:27.967021 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 03:04:27.967064 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 03:04:27.967088 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3515a064-b1e8-407d-a986-b4b240d3a187 attached to node k8s-agentpool-27089192-vmss000000. 
I0123 03:04:27.967103 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3515a064-b1e8-407d-a986-b4b240d3a187 to node k8s-agentpool-27089192-vmss000000 successfully I0123 03:04:27.967152 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=35.467789132 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3515a064-b1e8-407d-a986-b4b240d3a187" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 03:04:27.967178 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 11 lines ... I0123 03:05:04.709583 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-27089192-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-258d28c4-5a29-4f6e-a89e-43182f764eba","csi.storage.k8s.io/pvc/name":"pvc-ns5gs","csi.storage.k8s.io/pvc/namespace":"azuredisk-2888","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-258d28c4-5a29-4f6e-a89e-43182f764eba"} I0123 03:05:04.731386 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0123 03:05:04.731788 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-258d28c4-5a29-4f6e-a89e-43182f764eba. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-258d28c4-5a29-4f6e-a89e-43182f764eba to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). 
I0123 03:05:04.731820 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-258d28c4-5a29-4f6e-a89e-43182f764eba to node k8s-agentpool-27089192-vmss000000 I0123 03:05:04.731859 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-258d28c4-5a29-4f6e-a89e-43182f764eba lun 1 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-258d28c4-5a29-4f6e-a89e-43182f764eba:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-258d28c4-5a29-4f6e-a89e-43182f764eba false 1})] I0123 03:05:04.731905 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-258d28c4-5a29-4f6e-a89e-43182f764eba:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-258d28c4-5a29-4f6e-a89e-43182f764eba false 1})]) I0123 03:05:04.876636 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-258d28c4-5a29-4f6e-a89e-43182f764eba:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-258d28c4-5a29-4f6e-a89e-43182f764eba false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 03:05:15.046401 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 03:05:15.046443 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 03:05:15.046466 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-258d28c4-5a29-4f6e-a89e-43182f764eba attached to node k8s-agentpool-27089192-vmss000000. I0123 03:05:15.046485 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-258d28c4-5a29-4f6e-a89e-43182f764eba to node k8s-agentpool-27089192-vmss000000 successfully I0123 03:05:15.046534 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.314773083 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-258d28c4-5a29-4f6e-a89e-43182f764eba" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 03:05:15.046558 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} ... skipping 11 lines ... 
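Each successful attach above is followed by a DeleteCacheForNode/updateCache pair: the cached VMSS view is dropped and refreshed so the next GetDiskLun lookup sees the newly occupied LUN instead of stale data. An illustrative invalidate-then-repopulate model of that pattern (not the driver's actual cache implementation; fetchVM is a hypothetical stand-in for the VMSS GET):

// Illustrative per-node cache with write-through invalidation, mirroring the
// DeleteCacheForNode/updateCache pairs in the log above. Not the driver's code.
package vmcache

import "sync"

type VMView struct {
	Name     string
	DiskLUNs map[string]int32 // disk URI -> LUN
}

type NodeCache struct {
	mu    sync.Mutex
	views map[string]VMView // cacheKey (resource group/scale set) -> view
	fetch func(node string) (VMView, error)
}

func New(fetch func(string) (VMView, error)) *NodeCache {
	return &NodeCache{views: map[string]VMView{}, fetch: fetch}
}

// Invalidate drops the cached view after a write (attach/detach) so later
// reads cannot observe a stale LUN table.
func (c *NodeCache) Invalidate(cacheKey string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.views, cacheKey)
}

// Get returns the cached view, repopulating it from the API if missing.
func (c *NodeCache) Get(cacheKey, node string) (VMView, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if v, ok := c.views[cacheKey]; ok {
		return v, nil
	}
	v, err := c.fetch(node)
	if err != nil {
		return VMView{}, err
	}
	c.views[cacheKey] = v
	return v, nil
}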
I0123 03:05:27.000316 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-27089192-vmss000001","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-3aea8140-4628-4e18-b4b8-ebb554ff31f1","csi.storage.k8s.io/pvc/name":"pvc-lkgg4","csi.storage.k8s.io/pvc/namespace":"azuredisk-2888","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3aea8140-4628-4e18-b4b8-ebb554ff31f1"} I0123 03:05:27.036444 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0123 03:05:27.036988 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-3aea8140-4628-4e18-b4b8-ebb554ff31f1. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3aea8140-4628-4e18-b4b8-ebb554ff31f1 to node k8s-agentpool-27089192-vmss000001 (vmState Succeeded). I0123 03:05:27.037022 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3aea8140-4628-4e18-b4b8-ebb554ff31f1 to node k8s-agentpool-27089192-vmss000001 I0123 03:05:27.037066 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3aea8140-4628-4e18-b4b8-ebb554ff31f1 lun 0 to node k8s-agentpool-27089192-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-3aea8140-4628-4e18-b4b8-ebb554ff31f1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-3aea8140-4628-4e18-b4b8-ebb554ff31f1 false 0})] I0123 03:05:27.037124 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-3aea8140-4628-4e18-b4b8-ebb554ff31f1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-3aea8140-4628-4e18-b4b8-ebb554ff31f1 false 0})]) I0123 03:05:27.230306 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-3aea8140-4628-4e18-b4b8-ebb554ff31f1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-3aea8140-4628-4e18-b4b8-ebb554ff31f1 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 03:05:42.404672 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000001) successfully I0123 03:05:42.404709 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000001) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 03:05:42.404732 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3aea8140-4628-4e18-b4b8-ebb554ff31f1 attached to node 
k8s-agentpool-27089192-vmss000001. I0123 03:05:42.404926 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3aea8140-4628-4e18-b4b8-ebb554ff31f1 to node k8s-agentpool-27089192-vmss000001 successfully I0123 03:05:42.405017 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=15.368016726 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3aea8140-4628-4e18-b4b8-ebb554ff31f1" node="k8s-agentpool-27089192-vmss000001" result_code="succeeded" I0123 03:05:42.405070 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 80 lines ... I0123 03:08:48.598142 1 azure_controller_common.go:398] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3515a064-b1e8-407d-a986-b4b240d3a187 from node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-3515a064-b1e8-407d-a986-b4b240d3a187:pvc-3515a064-b1e8-407d-a986-b4b240d3a187] E0123 03:08:48.598193 1 azure_controller_vmss.go:202] detach azure disk on node(k8s-agentpool-27089192-vmss000000): disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-3515a064-b1e8-407d-a986-b4b240d3a187:pvc-3515a064-b1e8-407d-a986-b4b240d3a187]) not found I0123 03:08:48.598210 1 azure_controller_vmss.go:239] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-3515a064-b1e8-407d-a986-b4b240d3a187:pvc-3515a064-b1e8-407d-a986-b4b240d3a187]) I0123 03:08:51.057665 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0123 03:08:51.057695 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3515a064-b1e8-407d-a986-b4b240d3a187"} I0123 03:08:51.057808 1 controllerserver.go:317] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3515a064-b1e8-407d-a986-b4b240d3a187) I0123 03:08:51.057827 1 controllerserver.go:319] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3515a064-b1e8-407d-a986-b4b240d3a187) returned with failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3515a064-b1e8-407d-a986-b4b240d3a187) since it's in attaching or detaching state I0123 03:08:51.057892 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=3.74e-05 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" 
volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3515a064-b1e8-407d-a986-b4b240d3a187" result_code="failed_csi_driver_controller_delete_volume" E0123 03:08:51.057914 1 utils.go:82] GRPC error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3515a064-b1e8-407d-a986-b4b240d3a187) since it's in attaching or detaching state I0123 03:08:53.777479 1 azure_controller_vmss.go:252] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-3515a064-b1e8-407d-a986-b4b240d3a187:pvc-3515a064-b1e8-407d-a986-b4b240d3a187]) returned with <nil> I0123 03:08:53.777555 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 03:08:53.777880 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 03:08:53.777905 1 azure_controller_common.go:422] azureDisk - detach disk(pvc-3515a064-b1e8-407d-a986-b4b240d3a187, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3515a064-b1e8-407d-a986-b4b240d3a187) succeeded I0123 03:08:53.777954 1 controllerserver.go:480] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3515a064-b1e8-407d-a986-b4b240d3a187 from node k8s-agentpool-27089192-vmss000000 successfully I0123 03:08:53.778034 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=5.180039091 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3515a064-b1e8-407d-a986-b4b240d3a187" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" ... skipping 19 lines ... I0123 03:09:21.177703 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-27089192-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-47329ca6-bb2b-4e07-9af9-aaf94478eb75","csi.storage.k8s.io/pvc/name":"pvc-9bqcm","csi.storage.k8s.io/pvc/namespace":"azuredisk-156","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-47329ca6-bb2b-4e07-9af9-aaf94478eb75"} I0123 03:09:21.208007 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1221 I0123 03:09:21.208498 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-47329ca6-bb2b-4e07-9af9-aaf94478eb75. 
Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-47329ca6-bb2b-4e07-9af9-aaf94478eb75 to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). I0123 03:09:21.208626 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-47329ca6-bb2b-4e07-9af9-aaf94478eb75 to node k8s-agentpool-27089192-vmss000000 I0123 03:09:21.208700 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-47329ca6-bb2b-4e07-9af9-aaf94478eb75 lun 0 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-47329ca6-bb2b-4e07-9af9-aaf94478eb75:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-47329ca6-bb2b-4e07-9af9-aaf94478eb75 false 0})] I0123 03:09:21.208819 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-47329ca6-bb2b-4e07-9af9-aaf94478eb75:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-47329ca6-bb2b-4e07-9af9-aaf94478eb75 false 0})]) I0123 03:09:21.379048 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-47329ca6-bb2b-4e07-9af9-aaf94478eb75:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-47329ca6-bb2b-4e07-9af9-aaf94478eb75 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 03:09:31.606956 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 03:09:31.606999 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 03:09:31.607026 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-47329ca6-bb2b-4e07-9af9-aaf94478eb75 attached to node k8s-agentpool-27089192-vmss000000. I0123 03:09:31.607043 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-47329ca6-bb2b-4e07-9af9-aaf94478eb75 to node k8s-agentpool-27089192-vmss000000 successfully I0123 03:09:31.607618 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.399086593 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-47329ca6-bb2b-4e07-9af9-aaf94478eb75" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 03:09:31.607650 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 57 lines ... 
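The DeleteVolume call logged at 03:08:51 above is rejected with "since it's in attaching or detaching state" because the detach issued two seconds earlier has not finished; once the detach completes at 03:08:53 a retry of the same call succeeds. A minimal sketch of that retry loop under the same assumptions as the earlier sketch (in practice the csi-provisioner's exponential backoff performs the retries; function name and backoff values here are illustrative):

// deleteWithRetry mirrors the behaviour seen above: DeleteVolume is refused
// while the disk is mid-detach, then succeeds on a later attempt.
package cleanup

import (
	"context"
	"strings"
	"time"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

func deleteWithRetry(ctx context.Context, c csi.ControllerClient, volumeID string) error {
	backoff := 2 * time.Second
	for {
		_, err := c.DeleteVolume(ctx, &csi.DeleteVolumeRequest{VolumeId: volumeID})
		if err == nil {
			return nil
		}
		// The driver rejects deletion while the disk is attaching/detaching;
		// treat that as transient and try again after a pause.
		if !strings.Contains(err.Error(), "attaching or detaching state") {
			return err
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(backoff):
			backoff *= 2
		}
	}
}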
I0123 03:11:59.453427 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-27089192-vmss000000","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-2e6eabe5-919f-426c-a946-5df62ac0400a","csi.storage.k8s.io/pvc/name":"pvc-p7pfp","csi.storage.k8s.io/pvc/namespace":"azuredisk-59","fsType":"xfs","requestedsizegib":"10","skuName":"Standard_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-2e6eabe5-919f-426c-a946-5df62ac0400a"} I0123 03:11:59.474683 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1217 I0123 03:11:59.475040 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-2e6eabe5-919f-426c-a946-5df62ac0400a. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-2e6eabe5-919f-426c-a946-5df62ac0400a to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). I0123 03:11:59.475073 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-2e6eabe5-919f-426c-a946-5df62ac0400a to node k8s-agentpool-27089192-vmss000000 I0123 03:11:59.475108 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-2e6eabe5-919f-426c-a946-5df62ac0400a lun 0 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-2e6eabe5-919f-426c-a946-5df62ac0400a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2e6eabe5-919f-426c-a946-5df62ac0400a false 0})] I0123 03:11:59.475152 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-2e6eabe5-919f-426c-a946-5df62ac0400a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2e6eabe5-919f-426c-a946-5df62ac0400a false 0})]) I0123 03:11:59.625970 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-2e6eabe5-919f-426c-a946-5df62ac0400a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2e6eabe5-919f-426c-a946-5df62ac0400a false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 03:12:09.756900 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 03:12:09.756941 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 03:12:09.757030 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-2e6eabe5-919f-426c-a946-5df62ac0400a 
attached to node k8s-agentpool-27089192-vmss000000. I0123 03:12:09.757048 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-2e6eabe5-919f-426c-a946-5df62ac0400a to node k8s-agentpool-27089192-vmss000000 successfully I0123 03:12:09.757094 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.282046708 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-2e6eabe5-919f-426c-a946-5df62ac0400a" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 03:12:09.757127 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 11 lines ... I0123 03:12:30.890241 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-27089192-vmss000000","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-a041a5f6-48c4-4b79-9c1b-ca9c967c63cd","csi.storage.k8s.io/pvc/name":"pvc-6r58v","csi.storage.k8s.io/pvc/namespace":"azuredisk-59","fsType":"xfs","requestedsizegib":"10","skuName":"Standard_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-a041a5f6-48c4-4b79-9c1b-ca9c967c63cd"} I0123 03:12:30.911925 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1471 I0123 03:12:30.912324 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-a041a5f6-48c4-4b79-9c1b-ca9c967c63cd. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-a041a5f6-48c4-4b79-9c1b-ca9c967c63cd to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). 
I0123 03:12:30.912360 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-a041a5f6-48c4-4b79-9c1b-ca9c967c63cd to node k8s-agentpool-27089192-vmss000000 I0123 03:12:30.912397 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-a041a5f6-48c4-4b79-9c1b-ca9c967c63cd lun 1 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-a041a5f6-48c4-4b79-9c1b-ca9c967c63cd:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a041a5f6-48c4-4b79-9c1b-ca9c967c63cd false 1})] I0123 03:12:30.912441 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-a041a5f6-48c4-4b79-9c1b-ca9c967c63cd:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a041a5f6-48c4-4b79-9c1b-ca9c967c63cd false 1})]) I0123 03:12:31.069591 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-a041a5f6-48c4-4b79-9c1b-ca9c967c63cd:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a041a5f6-48c4-4b79-9c1b-ca9c967c63cd false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 03:12:33.525957 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0123 03:12:33.526004 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-27089192-vmss000000","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-2e6eabe5-919f-426c-a946-5df62ac0400a"} I0123 03:12:33.526198 1 controllerserver.go:471] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-2e6eabe5-919f-426c-a946-5df62ac0400a from node k8s-agentpool-27089192-vmss000000 I0123 03:12:41.185068 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 03:12:41.185139 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 03:12:41.185171 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-a041a5f6-48c4-4b79-9c1b-ca9c967c63cd attached to node k8s-agentpool-27089192-vmss000000. ... skipping 74 lines ... 
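The volume_context maps in these requests mix driver parameters that come from the test's StorageClass (skuName, fsType, resourceGroup) with keys added during provisioning (the csi.storage.k8s.io/* entries and requestedsizegib). A hedged sketch of a StorageClass that would yield the Standard_LRS/xfs volumes attached above, built with client-go types; the object name and binding mode are assumptions, not the manifests the e2e suite actually uses:

// Sketch of a StorageClass producing Standard_LRS disks formatted as xfs,
// matching the skuName/fsType values seen in the volume_context above.
// skuName and fsType are documented disk.csi.azure.com parameters; the
// metadata below is illustrative only.
package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	binding := storagev1.VolumeBindingWaitForFirstConsumer
	sc := storagev1.StorageClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "managed-standard-xfs"},
		Provisioner: "disk.csi.azure.com",
		Parameters: map[string]string{
			"skuName": "Standard_LRS",
			"fsType":  "xfs",
		},
		VolumeBindingMode: &binding,
	}
	fmt.Printf("%+v\n", sc)
}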
I0123 03:13:47.577098 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-27089192-vmss000000","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-5f8cbd13-495a-4f2f-a7b1-08d74f02571c","csi.storage.k8s.io/pvc/name":"pvc-27wqq","csi.storage.k8s.io/pvc/namespace":"azuredisk-2546","fsType":"xfs","networkAccessPolicy":"DenyAll","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-5f8cbd13-495a-4f2f-a7b1-08d74f02571c"} I0123 03:13:47.598418 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1192 I0123 03:13:47.598999 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-5f8cbd13-495a-4f2f-a7b1-08d74f02571c. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-5f8cbd13-495a-4f2f-a7b1-08d74f02571c to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). I0123 03:13:47.599219 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-5f8cbd13-495a-4f2f-a7b1-08d74f02571c to node k8s-agentpool-27089192-vmss000000 I0123 03:13:47.599315 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-5f8cbd13-495a-4f2f-a7b1-08d74f02571c lun 0 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-5f8cbd13-495a-4f2f-a7b1-08d74f02571c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5f8cbd13-495a-4f2f-a7b1-08d74f02571c false 0})] I0123 03:13:47.599415 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-5f8cbd13-495a-4f2f-a7b1-08d74f02571c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5f8cbd13-495a-4f2f-a7b1-08d74f02571c false 0})]) I0123 03:13:47.748619 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-5f8cbd13-495a-4f2f-a7b1-08d74f02571c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5f8cbd13-495a-4f2f-a7b1-08d74f02571c false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 03:14:58.133056 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 03:14:58.133091 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 03:14:58.133110 1 controllerserver.go:413] Attach operation successful: volume 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-5f8cbd13-495a-4f2f-a7b1-08d74f02571c attached to node k8s-agentpool-27089192-vmss000000. I0123 03:14:58.133124 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-5f8cbd13-495a-4f2f-a7b1-08d74f02571c to node k8s-agentpool-27089192-vmss000000 successfully I0123 03:14:58.133167 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=70.534191 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-5f8cbd13-495a-4f2f-a7b1-08d74f02571c" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 03:14:58.133249 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 33 lines ... I0123 03:16:29.051923 1 azure_controller_common.go:422] azureDisk - detach disk(pvc-5f8cbd13-495a-4f2f-a7b1-08d74f02571c, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-5f8cbd13-495a-4f2f-a7b1-08d74f02571c) succeeded I0123 03:16:29.051945 1 controllerserver.go:480] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-5f8cbd13-495a-4f2f-a7b1-08d74f02571c from node k8s-agentpool-27089192-vmss000000 successfully I0123 03:16:29.051991 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=15.246560306 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-5f8cbd13-495a-4f2f-a7b1-08d74f02571c" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 03:16:29.052012 1 utils.go:84] GRPC response: {} I0123 03:16:29.052052 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-dea05f8f-fb09-4d6c-9401-723dd17aade5 lun 0 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-dea05f8f-fb09-4d6c-9401-723dd17aade5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-dea05f8f-fb09-4d6c-9401-723dd17aade5 false 0})] I0123 03:16:29.052085 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-dea05f8f-fb09-4d6c-9401-723dd17aade5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-dea05f8f-fb09-4d6c-9401-723dd17aade5 false 0})]) I0123 03:16:29.225338 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk 
list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-dea05f8f-fb09-4d6c-9401-723dd17aade5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-dea05f8f-fb09-4d6c-9401-723dd17aade5 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 03:16:39.389870 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 03:16:39.389907 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 03:16:39.389927 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-dea05f8f-fb09-4d6c-9401-723dd17aade5 attached to node k8s-agentpool-27089192-vmss000000. I0123 03:16:39.389940 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-dea05f8f-fb09-4d6c-9401-723dd17aade5 to node k8s-agentpool-27089192-vmss000000 successfully I0123 03:16:39.389981 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=22.315633487 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-dea05f8f-fb09-4d6c-9401-723dd17aade5" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 03:16:39.390007 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 70 lines ... I0123 03:17:48.152845 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-78810b38-9780-4586-b13f-5aa0b4e89edd:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-78810b38-9780-4586-b13f-5aa0b4e89edd false 0})]) I0123 03:17:48.152443 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-10f7c8fd-8b8e-469a-a5b1-11117af59058. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-10f7c8fd-8b8e-469a-a5b1-11117af59058 to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). I0123 03:17:48.153324 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-10f7c8fd-8b8e-469a-a5b1-11117af59058 to node k8s-agentpool-27089192-vmss000000 I0123 03:17:48.151820 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0123 03:17:48.153890 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-2b44deed-bd74-4a55-b4d6-160f6901d04b. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-2b44deed-bd74-4a55-b4d6-160f6901d04b to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). 
I0123 03:17:48.153927 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-2b44deed-bd74-4a55-b4d6-160f6901d04b to node k8s-agentpool-27089192-vmss000000 I0123 03:17:49.009337 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-78810b38-9780-4586-b13f-5aa0b4e89edd:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-78810b38-9780-4586-b13f-5aa0b4e89edd false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 03:17:59.137057 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 03:17:59.137093 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 03:17:59.137137 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-78810b38-9780-4586-b13f-5aa0b4e89edd attached to node k8s-agentpool-27089192-vmss000000. I0123 03:17:59.137154 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-78810b38-9780-4586-b13f-5aa0b4e89edd to node k8s-agentpool-27089192-vmss000000 successfully I0123 03:17:59.137201 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.984770656 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-78810b38-9780-4586-b13f-5aa0b4e89edd" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 03:17:59.137226 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0123 03:17:59.137490 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-10f7c8fd-8b8e-469a-a5b1-11117af59058 lun 1 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-10f7c8fd-8b8e-469a-a5b1-11117af59058:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-10f7c8fd-8b8e-469a-a5b1-11117af59058 false 1}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-2b44deed-bd74-4a55-b4d6-160f6901d04b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2b44deed-bd74-4a55-b4d6-160f6901d04b false 2})] I0123 03:17:59.137538 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-10f7c8fd-8b8e-469a-a5b1-11117af59058:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-10f7c8fd-8b8e-469a-a5b1-11117af59058 false 1}) 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-2b44deed-bd74-4a55-b4d6-160f6901d04b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2b44deed-bd74-4a55-b4d6-160f6901d04b false 2})]) I0123 03:17:59.301047 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-10f7c8fd-8b8e-469a-a5b1-11117af59058:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-10f7c8fd-8b8e-469a-a5b1-11117af59058 false 1}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-2b44deed-bd74-4a55-b4d6-160f6901d04b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2b44deed-bd74-4a55-b4d6-160f6901d04b false 2})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 03:18:14.436592 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 03:18:14.436652 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 03:18:14.436683 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-10f7c8fd-8b8e-469a-a5b1-11117af59058 attached to node k8s-agentpool-27089192-vmss000000. I0123 03:18:14.436698 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-10f7c8fd-8b8e-469a-a5b1-11117af59058 to node k8s-agentpool-27089192-vmss000000 successfully I0123 03:18:14.436836 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=26.284325784 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-10f7c8fd-8b8e-469a-a5b1-11117af59058" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 03:18:14.436864 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} ... skipping 127 lines ... 
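The two records above show one VMSS update carrying a diskMap with two entries (luns 1 and 2) that originated from separate ControllerPublishVolume calls: attach requests aimed at the same node are coalesced into a single VM update, and each caller is then answered with its own LUN. The sketch below is a minimal, hypothetical illustration of that coalescing pattern under assumed names (attachRequest, nodeBatcher); it is not the actual batching code the log attributes to azure_controller_common.go.

    package main

    import (
    	"fmt"
    	"sync"
    )

    // attachRequest is a hypothetical stand-in for one ControllerPublishVolume
    // call waiting for its LUN assignment.
    type attachRequest struct {
    	diskURI string
    	lun     chan int32
    }

    // nodeBatcher coalesces attach requests destined for one VMSS instance so a
    // single VM update can add several data disks at once, as the diskMap with
    // luns 1 and 2 in the records above suggests.
    type nodeBatcher struct {
    	mu      sync.Mutex
    	pending []attachRequest
    }

    // enqueue registers one pending attach and returns the channel on which the
    // assigned LUN is delivered once the batch is flushed.
    func (b *nodeBatcher) enqueue(diskURI string) <-chan int32 {
    	req := attachRequest{diskURI: diskURI, lun: make(chan int32, 1)}
    	b.mu.Lock()
    	b.pending = append(b.pending, req)
    	b.mu.Unlock()
    	return req.lun
    }

    // flush performs one simulated VM update for the whole batch and hands each
    // waiter a LUN; sequential assignment here is purely for illustration.
    func (b *nodeBatcher) flush() {
    	b.mu.Lock()
    	batch := b.pending
    	b.pending = nil
    	b.mu.Unlock()
    	for i, req := range batch {
    		req.lun <- int32(i + 1)
    	}
    }

    func main() {
    	b := &nodeBatcher{}
    	// Two publish calls for the same node queue up first...
    	lunA := b.enqueue("pvc-10f7...")
    	lunB := b.enqueue("pvc-2b44...")
    	// ...then one batched update answers both, as in the log.
    	b.flush()
    	fmt.Println("first disk attached at lun", <-lunA)  // lun 1
    	fmt.Println("second disk attached at lun", <-lunB) // lun 2
    }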
I0123 03:19:59.805389 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-30e4eac0-63e3-43fa-a64f-f588fadd0ada to node k8s-agentpool-27089192-vmss000000 I0123 03:19:59.805533 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-30e4eac0-63e3-43fa-a64f-f588fadd0ada lun 0 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-30e4eac0-63e3-43fa-a64f-f588fadd0ada:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-30e4eac0-63e3-43fa-a64f-f588fadd0ada false 0})] I0123 03:19:59.805627 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-30e4eac0-63e3-43fa-a64f-f588fadd0ada:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-30e4eac0-63e3-43fa-a64f-f588fadd0ada false 0})]) I0123 03:19:59.814739 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0123 03:19:59.815090 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-c6bace4c-0104-40b1-a5e9-1ef0c48084c6. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-c6bace4c-0104-40b1-a5e9-1ef0c48084c6 to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). I0123 03:19:59.815584 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-c6bace4c-0104-40b1-a5e9-1ef0c48084c6 to node k8s-agentpool-27089192-vmss000000 I0123 03:20:00.035998 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-30e4eac0-63e3-43fa-a64f-f588fadd0ada:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-30e4eac0-63e3-43fa-a64f-f588fadd0ada false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 03:20:40.319518 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 03:20:40.319557 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 03:20:40.319587 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-30e4eac0-63e3-43fa-a64f-f588fadd0ada attached to node k8s-agentpool-27089192-vmss000000. 
I0123 03:20:40.319601 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-30e4eac0-63e3-43fa-a64f-f588fadd0ada to node k8s-agentpool-27089192-vmss000000 successfully I0123 03:20:40.319644 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=40.51428147 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-30e4eac0-63e3-43fa-a64f-f588fadd0ada" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 03:20:40.319660 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 4 lines ... I0123 03:20:40.378280 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1421 I0123 03:20:40.379117 1 azure_controller_common.go:516] azureDisk - find disk: lun 0 name pvc-30e4eac0-63e3-43fa-a64f-f588fadd0ada uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-30e4eac0-63e3-43fa-a64f-f588fadd0ada I0123 03:20:40.379272 1 controllerserver.go:383] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-30e4eac0-63e3-43fa-a64f-f588fadd0ada to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). I0123 03:20:40.379433 1 controllerserver.go:398] Attach operation is successful. volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-30e4eac0-63e3-43fa-a64f-f588fadd0ada is already attached to node k8s-agentpool-27089192-vmss000000 at lun 0. 
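The pair of records just above is the idempotent path: a repeated ControllerPublishVolume for pvc-30e4eac0 finds the disk already present on the VM (GetDiskLun returns lun 0 with a <nil> error) and returns right away, which is why the latency record that follows reports roughly 0.0008 seconds instead of tens of seconds. A minimal sketch of that check, with assumed type and function names rather than the driver's own:

    package main

    import (
    	"errors"
    	"fmt"
    )

    // dataDisk mirrors just enough of a VM's data-disk entry for this sketch;
    // the field names are assumptions, not the compute SDK's.
    type dataDisk struct {
    	DiskURI string
    	Lun     int32
    }

    var errLunNotFound = errors.New("cannot find Lun for disk")

    // getDiskLun scans the VM's current data disks for the requested URI,
    // mirroring the "GetDiskLun returned: ..." records in the log.
    func getDiskLun(diskURI string, disks []dataDisk) (int32, error) {
    	for _, d := range disks {
    		if d.DiskURI == diskURI {
    			return d.Lun, nil
    		}
    	}
    	return -1, errLunNotFound
    }

    // controllerPublish sketches the idempotency fast path: if the disk is
    // already attached, return the existing LUN without touching the VM.
    func controllerPublish(diskURI string, disks []dataDisk) (int32, error) {
    	if lun, err := getDiskLun(diskURI, disks); err == nil {
    		fmt.Printf("volume %s is already attached at lun %d\n", diskURI, lun)
    		return lun, nil
    	}
    	// ...otherwise fall through to the real attach path (omitted here).
    	return -1, fmt.Errorf("attach path not implemented in this sketch")
    }

    func main() {
    	attached := []dataDisk{{DiskURI: "pvc-30e4eac0", Lun: 0}}
    	if lun, err := controllerPublish("pvc-30e4eac0", attached); err == nil {
    		fmt.Println("publish_context LUN:", lun)
    	}
    }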
I0123 03:20:40.379919 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=0.000800194 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-30e4eac0-63e3-43fa-a64f-f588fadd0ada" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 03:20:40.380099 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0123 03:20:40.478699 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-c6bace4c-0104-40b1-a5e9-1ef0c48084c6:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c6bace4c-0104-40b1-a5e9-1ef0c48084c6 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 03:20:50.557815 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 03:20:50.557861 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 03:20:50.557886 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-c6bace4c-0104-40b1-a5e9-1ef0c48084c6 attached to node k8s-agentpool-27089192-vmss000000. I0123 03:20:50.557937 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-c6bace4c-0104-40b1-a5e9-1ef0c48084c6 to node k8s-agentpool-27089192-vmss000000 successfully I0123 03:20:50.558022 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=50.742877598 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-c6bace4c-0104-40b1-a5e9-1ef0c48084c6" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 03:20:50.558055 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} ... skipping 79 lines ... 
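Every completed operation above ends with an "Observed Request Latency" record carrying latency_seconds plus request, resource_group, subscription_id, volumeid, node and result_code labels. The helper below is a hypothetical, standard-library-only sketch of that pattern (time the operation, then emit the elapsed seconds with labels); the metrics code the log points at (azure_metrics.go) does more than this, so treat it only as an illustration.

    package main

    import (
    	"fmt"
    	"time"
    )

    // observeRequestLatency is a hypothetical helper in the spirit of the
    // "Observed Request Latency" records: it times an operation and emits the
    // elapsed seconds together with the same kinds of labels the driver logs.
    func observeRequestLatency(request, resourceGroup, volumeID, node string, op func() error) error {
    	start := time.Now()
    	err := op()
    	result := "succeeded"
    	if err != nil {
    		result = "failed"
    	}
    	fmt.Printf("\"Observed Request Latency\" latency_seconds=%v request=%q resource_group=%q volumeid=%q node=%q result_code=%q\n",
    		time.Since(start).Seconds(), request, resourceGroup, volumeID, node, result)
    	return err
    }

    func main() {
    	_ = observeRequestLatency(
    		"azuredisk_csi_driver_controller_publish_volume",
    		"kubetest-oduib2ov",
    		"/subscriptions/.../disks/pvc-example",
    		"k8s-agentpool-27089192-vmss000000",
    		func() error { time.Sleep(50 * time.Millisecond); return nil },
    	)
    }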
I0123 03:22:02.484274 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-27089192-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-44e817bf-3bc0-4f2c-9f89-2638b3471b94","csi.storage.k8s.io/pvc/name":"pvc-b2nzg","csi.storage.k8s.io/pvc/namespace":"azuredisk-8582","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-44e817bf-3bc0-4f2c-9f89-2638b3471b94"} I0123 03:22:02.506452 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0123 03:22:02.506816 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-44e817bf-3bc0-4f2c-9f89-2638b3471b94. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-44e817bf-3bc0-4f2c-9f89-2638b3471b94 to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). I0123 03:22:02.506857 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-44e817bf-3bc0-4f2c-9f89-2638b3471b94 to node k8s-agentpool-27089192-vmss000000 I0123 03:22:02.506900 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-44e817bf-3bc0-4f2c-9f89-2638b3471b94 lun 0 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-44e817bf-3bc0-4f2c-9f89-2638b3471b94:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-44e817bf-3bc0-4f2c-9f89-2638b3471b94 false 0})] I0123 03:22:02.506944 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-44e817bf-3bc0-4f2c-9f89-2638b3471b94:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-44e817bf-3bc0-4f2c-9f89-2638b3471b94 false 0})]) I0123 03:22:02.638065 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-44e817bf-3bc0-4f2c-9f89-2638b3471b94:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-44e817bf-3bc0-4f2c-9f89-2638b3471b94 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 03:22:37.888716 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 03:22:37.888756 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 03:22:37.888779 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-44e817bf-3bc0-4f2c-9f89-2638b3471b94 attached to node 
k8s-agentpool-27089192-vmss000000. I0123 03:22:37.888793 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-44e817bf-3bc0-4f2c-9f89-2638b3471b94 to node k8s-agentpool-27089192-vmss000000 successfully I0123 03:22:37.888837 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=35.382030415 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-44e817bf-3bc0-4f2c-9f89-2638b3471b94" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 03:22:37.888861 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 34 lines ... I0123 03:23:31.517153 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-27089192-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-76e5f048-b08b-45fa-843e-101498fa595b","csi.storage.k8s.io/pvc/name":"pvc-r5pj8","csi.storage.k8s.io/pvc/namespace":"azuredisk-8582","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-76e5f048-b08b-45fa-843e-101498fa595b"} I0123 03:23:31.563930 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1501 I0123 03:23:31.564344 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-76e5f048-b08b-45fa-843e-101498fa595b. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-76e5f048-b08b-45fa-843e-101498fa595b to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). 
I0123 03:23:31.564375 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-76e5f048-b08b-45fa-843e-101498fa595b to node k8s-agentpool-27089192-vmss000000 I0123 03:23:31.564592 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-76e5f048-b08b-45fa-843e-101498fa595b lun 0 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-76e5f048-b08b-45fa-843e-101498fa595b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-76e5f048-b08b-45fa-843e-101498fa595b false 0})] I0123 03:23:31.564683 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-76e5f048-b08b-45fa-843e-101498fa595b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-76e5f048-b08b-45fa-843e-101498fa595b false 0})]) I0123 03:23:31.700690 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-76e5f048-b08b-45fa-843e-101498fa595b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-76e5f048-b08b-45fa-843e-101498fa595b false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 03:23:41.846493 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 03:23:41.846529 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 03:23:41.846548 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-76e5f048-b08b-45fa-843e-101498fa595b attached to node k8s-agentpool-27089192-vmss000000. I0123 03:23:41.846562 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-76e5f048-b08b-45fa-843e-101498fa595b to node k8s-agentpool-27089192-vmss000000 successfully I0123 03:23:41.846744 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.282252795 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-76e5f048-b08b-45fa-843e-101498fa595b" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 03:23:41.846767 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 54 lines ... 
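The "GRPC request: {...}" / "GRPC response: {...}" pairs (utils.go:78 and utils.go:84 in the records above) come from the CSI gRPC server logging each unary call. A generic way to get that behaviour is a unary server interceptor like the sketch below; the function name and rendering are illustrative, not the driver's exact utils.go code, and a production interceptor would also strip secrets before printing.

    package main

    import (
    	"context"
    	"log"

    	"google.golang.org/grpc"
    )

    // logGRPC prints every request and response passing through the server,
    // the kind of hook that yields the "GRPC request:" / "GRPC response:" pairs
    // seen in the log (illustrative only).
    func logGRPC(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
    	log.Printf("GRPC call: %s", info.FullMethod)
    	log.Printf("GRPC request: %+v", req)
    	resp, err := handler(ctx, req)
    	if err != nil {
    		log.Printf("GRPC error: %v", err)
    	} else {
    		log.Printf("GRPC response: %+v", resp)
    	}
    	return resp, err
    }

    func main() {
    	// Wire the interceptor into a server; service registration is omitted.
    	server := grpc.NewServer(grpc.UnaryInterceptor(logGRPC))
    	_ = server
    }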
I0123 03:25:42.748967 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-27089192-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-e63733e1-ee07-42ab-98d6-f8c53a745da8","csi.storage.k8s.io/pvc/name":"pvc-4pfc9","csi.storage.k8s.io/pvc/namespace":"azuredisk-7726","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-e63733e1-ee07-42ab-98d6-f8c53a745da8"} I0123 03:25:42.783124 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0123 03:25:42.783561 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-e63733e1-ee07-42ab-98d6-f8c53a745da8. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-e63733e1-ee07-42ab-98d6-f8c53a745da8 to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). I0123 03:25:42.783594 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-e63733e1-ee07-42ab-98d6-f8c53a745da8 to node k8s-agentpool-27089192-vmss000000 I0123 03:25:42.783675 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-e63733e1-ee07-42ab-98d6-f8c53a745da8 lun 0 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-e63733e1-ee07-42ab-98d6-f8c53a745da8:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e63733e1-ee07-42ab-98d6-f8c53a745da8 false 0})] I0123 03:25:42.783784 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-e63733e1-ee07-42ab-98d6-f8c53a745da8:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e63733e1-ee07-42ab-98d6-f8c53a745da8 false 0})]) I0123 03:25:42.923802 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-e63733e1-ee07-42ab-98d6-f8c53a745da8:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e63733e1-ee07-42ab-98d6-f8c53a745da8 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 03:25:58.068727 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 03:25:58.068820 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 03:25:58.068842 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-e63733e1-ee07-42ab-98d6-f8c53a745da8 attached to node 
k8s-agentpool-27089192-vmss000000. I0123 03:25:58.068874 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-e63733e1-ee07-42ab-98d6-f8c53a745da8 to node k8s-agentpool-27089192-vmss000000 successfully I0123 03:25:58.068988 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=15.285370714999999 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-e63733e1-ee07-42ab-98d6-f8c53a745da8" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 03:25:58.069005 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 29 lines ... I0123 03:26:25.243759 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-27089192-vmss000001","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-6ddb27e0-de26-4a12-8de4-b15d5da0d1bd","csi.storage.k8s.io/pvc/name":"pvc-6s7gz","csi.storage.k8s.io/pvc/namespace":"azuredisk-7726","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-6ddb27e0-de26-4a12-8de4-b15d5da0d1bd"} I0123 03:26:25.264437 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1501 I0123 03:26:25.264870 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-6ddb27e0-de26-4a12-8de4-b15d5da0d1bd. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-6ddb27e0-de26-4a12-8de4-b15d5da0d1bd to node k8s-agentpool-27089192-vmss000001 (vmState Succeeded). 
I0123 03:26:25.264903 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-6ddb27e0-de26-4a12-8de4-b15d5da0d1bd to node k8s-agentpool-27089192-vmss000001 I0123 03:26:25.264971 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-6ddb27e0-de26-4a12-8de4-b15d5da0d1bd lun 0 to node k8s-agentpool-27089192-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-6ddb27e0-de26-4a12-8de4-b15d5da0d1bd:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-6ddb27e0-de26-4a12-8de4-b15d5da0d1bd false 0})] I0123 03:26:25.265100 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-6ddb27e0-de26-4a12-8de4-b15d5da0d1bd:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-6ddb27e0-de26-4a12-8de4-b15d5da0d1bd false 0})]) I0123 03:26:25.394121 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-6ddb27e0-de26-4a12-8de4-b15d5da0d1bd:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-6ddb27e0-de26-4a12-8de4-b15d5da0d1bd false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 03:26:40.535460 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000001) successfully I0123 03:26:40.535498 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000001) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 03:26:40.535517 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-6ddb27e0-de26-4a12-8de4-b15d5da0d1bd attached to node k8s-agentpool-27089192-vmss000001. I0123 03:26:40.535548 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-6ddb27e0-de26-4a12-8de4-b15d5da0d1bd to node k8s-agentpool-27089192-vmss000001 successfully I0123 03:26:40.535607 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=15.270718506 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-6ddb27e0-de26-4a12-8de4-b15d5da0d1bd" node="k8s-agentpool-27089192-vmss000001" result_code="succeeded" I0123 03:26:40.535652 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 95 lines ... 
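Each request above identifies the volume by its full ARM URI (/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/disks/<name>), while the later records operate on the bare disk name (pvc-...). A simplified, hypothetical parser for that URI shape is shown below, only to make the resource-group/disk-name split explicit; the driver applies stricter validation of its own.

    package main

    import (
    	"fmt"
    	"strings"
    )

    // parseDiskURI splits an Azure managed-disk URI of the form
    // /subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/disks/<name>
    // into its resource group and disk name. Simplified sketch only.
    func parseDiskURI(diskURI string) (resourceGroup, diskName string, err error) {
    	parts := strings.Split(strings.Trim(diskURI, "/"), "/")
    	// Expected layout: subscriptions <sub> resourceGroups <rg> providers
    	// Microsoft.Compute disks <name>  -> 8 segments.
    	if len(parts) != 8 || !strings.EqualFold(parts[0], "subscriptions") ||
    		!strings.EqualFold(parts[2], "resourceGroups") || !strings.EqualFold(parts[6], "disks") {
    		return "", "", fmt.Errorf("invalid disk URI: %s", diskURI)
    	}
    	return parts[3], parts[7], nil
    }

    func main() {
    	uri := "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-6ddb27e0-de26-4a12-8de4-b15d5da0d1bd"
    	rg, name, err := parseDiskURI(uri)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(rg, name) // kubetest-oduib2ov pvc-6ddb27e0-de26-4a12-8de4-b15d5da0d1bd
    }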
I0123 03:28:59.041710 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-e55c121f-9c66-4d3e-bf36-98318416249d lun 0 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-e55c121f-9c66-4d3e-bf36-98318416249d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e55c121f-9c66-4d3e-bf36-98318416249d false 0})] I0123 03:28:59.041815 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-e55c121f-9c66-4d3e-bf36-98318416249d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e55c121f-9c66-4d3e-bf36-98318416249d false 0})]) I0123 03:28:59.041992 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-c15a37e3-9091-42a8-a6e3-26770a4f0bca. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-c15a37e3-9091-42a8-a6e3-26770a4f0bca to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). I0123 03:28:59.042141 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-c15a37e3-9091-42a8-a6e3-26770a4f0bca to node k8s-agentpool-27089192-vmss000000 I0123 03:28:59.042182 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-f723341e-7f9c-4332-8f77-a8fd17bc922a. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-f723341e-7f9c-4332-8f77-a8fd17bc922a to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). I0123 03:28:59.042554 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-f723341e-7f9c-4332-8f77-a8fd17bc922a to node k8s-agentpool-27089192-vmss000000 I0123 03:28:59.806355 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-e55c121f-9c66-4d3e-bf36-98318416249d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e55c121f-9c66-4d3e-bf36-98318416249d false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 03:29:09.936055 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 03:29:09.936096 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 03:29:09.936128 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-e55c121f-9c66-4d3e-bf36-98318416249d attached to node k8s-agentpool-27089192-vmss000000. 
I0123 03:29:09.936143 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-e55c121f-9c66-4d3e-bf36-98318416249d to node k8s-agentpool-27089192-vmss000000 successfully I0123 03:29:09.936187 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.95956248 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-e55c121f-9c66-4d3e-bf36-98318416249d" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 03:29:09.936210 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0123 03:29:09.936372 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-c15a37e3-9091-42a8-a6e3-26770a4f0bca lun 1 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-c15a37e3-9091-42a8-a6e3-26770a4f0bca:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c15a37e3-9091-42a8-a6e3-26770a4f0bca false 1}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-f723341e-7f9c-4332-8f77-a8fd17bc922a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f723341e-7f9c-4332-8f77-a8fd17bc922a false 2})] I0123 03:29:09.936444 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-c15a37e3-9091-42a8-a6e3-26770a4f0bca:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c15a37e3-9091-42a8-a6e3-26770a4f0bca false 1}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-f723341e-7f9c-4332-8f77-a8fd17bc922a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f723341e-7f9c-4332-8f77-a8fd17bc922a false 2})]) I0123 03:29:10.130061 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-c15a37e3-9091-42a8-a6e3-26770a4f0bca:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c15a37e3-9091-42a8-a6e3-26770a4f0bca false 1}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-f723341e-7f9c-4332-8f77-a8fd17bc922a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f723341e-7f9c-4332-8f77-a8fd17bc922a false 2})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 03:29:20.243744 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 03:29:20.243785 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 03:29:20.243820 1 controllerserver.go:413] Attach operation successful: volume 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-c15a37e3-9091-42a8-a6e3-26770a4f0bca attached to node k8s-agentpool-27089192-vmss000000. I0123 03:29:20.243872 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-c15a37e3-9091-42a8-a6e3-26770a4f0bca to node k8s-agentpool-27089192-vmss000000 successfully I0123 03:29:20.243923 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=21.266207816 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-c15a37e3-9091-42a8-a6e3-26770a4f0bca" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 03:29:20.243975 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} ... skipping 95 lines ... I0123 03:31:10.355487 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-27089192-vmss000000","volume_capability":{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-5d6bf689-bf7e-473e-b483-c1e134c16719","csi.storage.k8s.io/pvc/name":"pvc-azuredisk-volume-tester-hgltn-0","csi.storage.k8s.io/pvc/namespace":"azuredisk-1387","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-5d6bf689-bf7e-473e-b483-c1e134c16719"} I0123 03:31:10.409754 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1248 I0123 03:31:10.410053 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-5d6bf689-bf7e-473e-b483-c1e134c16719. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-5d6bf689-bf7e-473e-b483-c1e134c16719 to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). 
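In the overlapping attaches above, the first disk takes lun 0 and the two queued behind it receive luns 1 and 2; in other words, each new disk is assigned the lowest LUN not already in use on that VM. A small sketch of that selection follows; it is purely illustrative and ignores that the real LUN limit depends on the VM size.

    package main

    import "fmt"

    // nextFreeLun returns the lowest LUN in [0, maxLuns) that is not already
    // used, mirroring how the attaches above received luns 0, 1 and 2 in order.
    func nextFreeLun(used map[int32]bool, maxLuns int32) (int32, error) {
    	for lun := int32(0); lun < maxLuns; lun++ {
    		if !used[lun] {
    			return lun, nil
    		}
    	}
    	return -1, fmt.Errorf("no free LUN: all %d slots in use", maxLuns)
    }

    func main() {
    	used := map[int32]bool{}
    	for i := 0; i < 3; i++ {
    		lun, err := nextFreeLun(used, 8)
    		if err != nil {
    			panic(err)
    		}
    		used[lun] = true
    		fmt.Println("assigned lun", lun) // 0, then 1, then 2
    	}
    }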
I0123 03:31:10.410087 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-5d6bf689-bf7e-473e-b483-c1e134c16719 to node k8s-agentpool-27089192-vmss000000 I0123 03:31:10.410123 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-5d6bf689-bf7e-473e-b483-c1e134c16719 lun 0 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-5d6bf689-bf7e-473e-b483-c1e134c16719:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5d6bf689-bf7e-473e-b483-c1e134c16719 false 0})] I0123 03:31:10.410161 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-5d6bf689-bf7e-473e-b483-c1e134c16719:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5d6bf689-bf7e-473e-b483-c1e134c16719 false 0})]) I0123 03:31:10.590988 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-5d6bf689-bf7e-473e-b483-c1e134c16719:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5d6bf689-bf7e-473e-b483-c1e134c16719 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 03:31:25.738123 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 03:31:25.738183 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 03:31:25.738221 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-5d6bf689-bf7e-473e-b483-c1e134c16719 attached to node k8s-agentpool-27089192-vmss000000. I0123 03:31:25.738237 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-5d6bf689-bf7e-473e-b483-c1e134c16719 to node k8s-agentpool-27089192-vmss000000 successfully I0123 03:31:25.738282 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=15.328231355 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-5d6bf689-bf7e-473e-b483-c1e134c16719" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 03:31:25.738300 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 27 lines ... 
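The controller answers each publish with publish_context {"LUN":"<n>"}; that string is what the node plugin later uses to locate the SCSI device to stage. The sketch below turns a publish context into a candidate device path, assuming the standard Azure Linux udev rules that expose data disks as /dev/disk/azure/scsi1/lun<N>. That path convention is an assumption about the node image, not something shown in this log, and the real node plugin does more robust device discovery.

    package main

    import (
    	"fmt"
    	"strconv"
    )

    // devicePathFromPublishContext converts the controller's publish_context
    // into the udev symlink Azure Linux images typically create for a data disk.
    // Assumption: the node has the standard Azure udev rules.
    func devicePathFromPublishContext(publishContext map[string]string) (string, error) {
    	lunStr, ok := publishContext["LUN"]
    	if !ok {
    		return "", fmt.Errorf("publish_context has no LUN key")
    	}
    	lun, err := strconv.Atoi(lunStr)
    	if err != nil {
    		return "", fmt.Errorf("invalid LUN %q: %v", lunStr, err)
    	}
    	return fmt.Sprintf("/dev/disk/azure/scsi1/lun%d", lun), nil
    }

    func main() {
    	path, err := devicePathFromPublishContext(map[string]string{"LUN": "0"})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(path) // /dev/disk/azure/scsi1/lun0
    }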
I0123 03:34:02.920346 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-27089192-vmss000000","volume_capability":{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-5d6bf689-bf7e-473e-b483-c1e134c16719","csi.storage.k8s.io/pvc/name":"pvc-azuredisk-volume-tester-hgltn-0","csi.storage.k8s.io/pvc/namespace":"azuredisk-1387","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-5d6bf689-bf7e-473e-b483-c1e134c16719"} I0123 03:34:02.950593 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1248 I0123 03:34:02.950979 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-5d6bf689-bf7e-473e-b483-c1e134c16719. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-5d6bf689-bf7e-473e-b483-c1e134c16719 to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). I0123 03:34:02.951007 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-5d6bf689-bf7e-473e-b483-c1e134c16719 to node k8s-agentpool-27089192-vmss000000 I0123 03:34:02.951077 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-5d6bf689-bf7e-473e-b483-c1e134c16719 lun 0 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-5d6bf689-bf7e-473e-b483-c1e134c16719:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5d6bf689-bf7e-473e-b483-c1e134c16719 false 0})] I0123 03:34:02.951171 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-5d6bf689-bf7e-473e-b483-c1e134c16719:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5d6bf689-bf7e-473e-b483-c1e134c16719 false 0})]) I0123 03:34:03.234260 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-5d6bf689-bf7e-473e-b483-c1e134c16719:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5d6bf689-bf7e-473e-b483-c1e134c16719 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 03:34:13.347636 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 03:34:13.347681 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 03:34:13.347704 1 controllerserver.go:413] Attach operation successful: volume 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-5d6bf689-bf7e-473e-b483-c1e134c16719 attached to node k8s-agentpool-27089192-vmss000000. I0123 03:34:13.347719 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-5d6bf689-bf7e-473e-b483-c1e134c16719 to node k8s-agentpool-27089192-vmss000000 successfully I0123 03:34:13.347762 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.396777559 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-5d6bf689-bf7e-473e-b483-c1e134c16719" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 03:34:13.347783 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 13 lines ... I0123 03:34:39.114059 1 azure_vmss_cache.go:327] refresh the cache of NonVmssUniformNodesCache in rg map[kubetest-oduib2ov:{}] I0123 03:34:39.136291 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 12 I0123 03:34:39.136524 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-d14269f9-cd73-4a9e-8f20-8b659cf791e5. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-d14269f9-cd73-4a9e-8f20-8b659cf791e5 to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). I0123 03:34:39.136559 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-d14269f9-cd73-4a9e-8f20-8b659cf791e5 to node k8s-agentpool-27089192-vmss000000 I0123 03:34:39.136758 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-d14269f9-cd73-4a9e-8f20-8b659cf791e5 lun 1 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-d14269f9-cd73-4a9e-8f20-8b659cf791e5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-d14269f9-cd73-4a9e-8f20-8b659cf791e5 false 1})] I0123 03:34:39.136811 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-d14269f9-cd73-4a9e-8f20-8b659cf791e5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-d14269f9-cd73-4a9e-8f20-8b659cf791e5 false 1})]) I0123 03:34:39.292283 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-d14269f9-cd73-4a9e-8f20-8b659cf791e5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-d14269f9-cd73-4a9e-8f20-8b659cf791e5 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 03:34:49.371611 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, 
k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 03:34:49.371667 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 03:34:49.371687 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-d14269f9-cd73-4a9e-8f20-8b659cf791e5 attached to node k8s-agentpool-27089192-vmss000000. I0123 03:34:49.371700 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-d14269f9-cd73-4a9e-8f20-8b659cf791e5 to node k8s-agentpool-27089192-vmss000000 successfully I0123 03:34:49.371739 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.25766081 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-d14269f9-cd73-4a9e-8f20-8b659cf791e5" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 03:34:49.371758 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} ... skipping 61 lines ... I0123 03:36:04.049498 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-27089192-vmss000000","volume_capability":{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-30b105e6-c612-4f94-a74e-a4503a287612","csi.storage.k8s.io/pvc/name":"pvc-gj5kd","csi.storage.k8s.io/pvc/namespace":"azuredisk-8154","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-30b105e6-c612-4f94-a74e-a4503a287612"} I0123 03:36:04.072315 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1193 I0123 03:36:04.072901 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-30b105e6-c612-4f94-a74e-a4503a287612. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-30b105e6-c612-4f94-a74e-a4503a287612 to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). 
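After every successful VM update the log shows DeleteCacheForNode followed by updateCache for the same cacheKey (kubetest-oduib2ov/k8s-agentpool-27089192-vmss): the cached scale-set instance view is dropped and then refreshed so that later GetDiskLun calls see the new data-disk list. Below is a toy keyed cache showing only that delete-then-update step; the structure is hypothetical and the real cache is a timed cache with considerably more machinery.

    package main

    import (
    	"fmt"
    	"sync"
    )

    // vmssVM is a placeholder for the cached scale-set instance view.
    type vmssVM struct {
    	name      string
    	dataDisks []string
    }

    // vmssCache is a minimal keyed cache; cacheKey is "<resourceGroup>/<scaleSet>"
    // as in the updateCache(...) records above.
    type vmssCache struct {
    	mu      sync.RWMutex
    	entries map[string]map[string]vmssVM // cacheKey -> instance name -> view
    }

    func newVMSSCache() *vmssCache {
    	return &vmssCache{entries: map[string]map[string]vmssVM{}}
    }

    // deleteCacheForNode drops one instance from the cached scale set, mirroring
    // the DeleteCacheForNode(...) records that follow each attach or detach.
    func (c *vmssCache) deleteCacheForNode(cacheKey, instance string) {
    	c.mu.Lock()
    	defer c.mu.Unlock()
    	delete(c.entries[cacheKey], instance)
    }

    // updateCache stores a freshly fetched instance view under the same key.
    func (c *vmssCache) updateCache(cacheKey string, vm vmssVM) {
    	c.mu.Lock()
    	defer c.mu.Unlock()
    	if c.entries[cacheKey] == nil {
    		c.entries[cacheKey] = map[string]vmssVM{}
    	}
    	c.entries[cacheKey][vm.name] = vm
    }

    func main() {
    	c := newVMSSCache()
    	key := "kubetest-oduib2ov/k8s-agentpool-27089192-vmss"
    	c.updateCache(key, vmssVM{name: "vmss000000"})
    	// After an attach: invalidate, then write back the refreshed view.
    	c.deleteCacheForNode(key, "vmss000000")
    	c.updateCache(key, vmssVM{name: "vmss000000", dataDisks: []string{"pvc-30b105e6"}})
    	fmt.Printf("%+v\n", c.entries[key]["vmss000000"])
    }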
I0123 03:36:04.072958 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-30b105e6-c612-4f94-a74e-a4503a287612 to node k8s-agentpool-27089192-vmss000000 I0123 03:36:04.073108 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-30b105e6-c612-4f94-a74e-a4503a287612 lun 0 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-30b105e6-c612-4f94-a74e-a4503a287612:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-30b105e6-c612-4f94-a74e-a4503a287612 false 0})] I0123 03:36:04.073205 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-30b105e6-c612-4f94-a74e-a4503a287612:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-30b105e6-c612-4f94-a74e-a4503a287612 false 0})]) I0123 03:36:04.245158 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-30b105e6-c612-4f94-a74e-a4503a287612:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-30b105e6-c612-4f94-a74e-a4503a287612 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 03:36:14.325872 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 03:36:14.325913 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 03:36:14.325968 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-30b105e6-c612-4f94-a74e-a4503a287612 attached to node k8s-agentpool-27089192-vmss000000. I0123 03:36:14.325990 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-30b105e6-c612-4f94-a74e-a4503a287612 to node k8s-agentpool-27089192-vmss000000 successfully I0123 03:36:14.326112 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.253171673 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-30b105e6-c612-4f94-a74e-a4503a287612" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 03:36:14.326180 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 39 lines ... 
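Some of the requests above carry volume_capability mount_flags ["barrier=1","acl"]; during staging such flags end up as extra options alongside the driver's defaults when the filesystem is mounted. A hedged sketch of that merge follows; the names are assumptions and the real node plugin delegates the actual mount to k8s.io/mount-utils rather than building an option string itself.

    package main

    import (
    	"fmt"
    	"strings"
    )

    // collectMountOptions merges CSI volume_capability mount_flags (such as
    // ["barrier=1","acl"] in the requests above) with driver defaults, dropping
    // empty entries and duplicates. Illustrative only.
    func collectMountOptions(defaults, mountFlags []string) []string {
    	seen := map[string]bool{}
    	var out []string
    	for _, opt := range append(append([]string{}, defaults...), mountFlags...) {
    		if opt == "" || seen[opt] {
    			continue
    		}
    		seen[opt] = true
    		out = append(out, opt)
    	}
    	return out
    }

    func main() {
    	opts := collectMountOptions([]string{"defaults"}, []string{"barrier=1", "acl"})
    	fmt.Println("-o " + strings.Join(opts, ",")) // -o defaults,barrier=1,acl
    }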
I0123 03:37:40.611243 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-27089192-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-3e16c31c-f818-4c8e-a54c-d801d72237ac","csi.storage.k8s.io/pvc/name":"pvc-azuredisk-volume-tester-jhsnp-0","csi.storage.k8s.io/pvc/namespace":"azuredisk-1166","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3e16c31c-f818-4c8e-a54c-d801d72237ac"} I0123 03:37:40.633522 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1248 I0123 03:37:40.634097 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-3e16c31c-f818-4c8e-a54c-d801d72237ac. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3e16c31c-f818-4c8e-a54c-d801d72237ac to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). I0123 03:37:40.634137 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3e16c31c-f818-4c8e-a54c-d801d72237ac to node k8s-agentpool-27089192-vmss000000 I0123 03:37:40.634180 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3e16c31c-f818-4c8e-a54c-d801d72237ac lun 0 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-3e16c31c-f818-4c8e-a54c-d801d72237ac:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-3e16c31c-f818-4c8e-a54c-d801d72237ac false 0})] I0123 03:37:40.634295 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-3e16c31c-f818-4c8e-a54c-d801d72237ac:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-3e16c31c-f818-4c8e-a54c-d801d72237ac false 0})]) I0123 03:37:40.801730 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-3e16c31c-f818-4c8e-a54c-d801d72237ac:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-3e16c31c-f818-4c8e-a54c-d801d72237ac false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 03:37:50.884998 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 03:37:50.885053 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 03:37:50.885075 1 controllerserver.go:413] Attach operation successful: volume 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3e16c31c-f818-4c8e-a54c-d801d72237ac attached to node k8s-agentpool-27089192-vmss000000. I0123 03:37:50.885253 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3e16c31c-f818-4c8e-a54c-d801d72237ac to node k8s-agentpool-27089192-vmss000000 successfully I0123 03:37:50.885310 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.251217447 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3e16c31c-f818-4c8e-a54c-d801d72237ac" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 03:37:50.885328 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 22 lines ... I0123 03:38:50.841636 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-27089192-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-3e16c31c-f818-4c8e-a54c-d801d72237ac","csi.storage.k8s.io/pvc/name":"pvc-azuredisk-volume-tester-jhsnp-0","csi.storage.k8s.io/pvc/namespace":"azuredisk-1166","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3e16c31c-f818-4c8e-a54c-d801d72237ac"} I0123 03:38:50.863251 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1248 I0123 03:38:50.863604 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-3e16c31c-f818-4c8e-a54c-d801d72237ac. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3e16c31c-f818-4c8e-a54c-d801d72237ac to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). 
I0123 03:38:50.863636 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3e16c31c-f818-4c8e-a54c-d801d72237ac to node k8s-agentpool-27089192-vmss000000 I0123 03:38:50.863678 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3e16c31c-f818-4c8e-a54c-d801d72237ac lun 0 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-3e16c31c-f818-4c8e-a54c-d801d72237ac:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-3e16c31c-f818-4c8e-a54c-d801d72237ac false 0})] I0123 03:38:50.863777 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-3e16c31c-f818-4c8e-a54c-d801d72237ac:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-3e16c31c-f818-4c8e-a54c-d801d72237ac false 0})]) I0123 03:38:51.018788 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-3e16c31c-f818-4c8e-a54c-d801d72237ac:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-3e16c31c-f818-4c8e-a54c-d801d72237ac false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 03:39:01.169195 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 24989 I0123 03:39:01.171811 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 03:39:01.171841 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 03:39:01.171885 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3e16c31c-f818-4c8e-a54c-d801d72237ac attached to node k8s-agentpool-27089192-vmss000000. I0123 03:39:01.171909 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3e16c31c-f818-4c8e-a54c-d801d72237ac to node k8s-agentpool-27089192-vmss000000 successfully I0123 03:39:01.171979 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.308345732 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-3e16c31c-f818-4c8e-a54c-d801d72237ac" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" ... skipping 20 lines ... 
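Editor's note: the odd-looking %!s(*provider.AttachDiskOptions=&{...}) and %!v(MISSING) fragments in the attach-disk lines are not log corruption. They are Go's fmt package diagnostics: a %s verb applied to a struct pointer stored inside a map (fmt falls back to "%!s(<type>=<value>)" with the value rendered as %v), and a format verb that has no matching argument. A minimal reproduction with a hypothetical stand-in type:

package main

import "fmt"

// Stand-in for provider.AttachDiskOptions; it has no String method, so %s
// cannot be applied to a *attachDiskOptions held inside a map value.
type attachDiskOptions struct {
	cachingMode             string
	diskName                string
	writeAcceleratorEnabled bool
	lun                     int32
}

func main() {
	diskMap := map[string]*attachDiskOptions{
		"/subscriptions/.../disks/pvc-example": {cachingMode: "ReadOnly", diskName: "pvc-example", lun: 0},
	}
	// %s on the map values (struct pointers) yields "%!s(<type>=<value>)", and
	// the trailing %v has no argument, which fmt reports as "%!v(MISSING)" -
	// the same two diagnostics seen in the azure_controller_vmss.go lines above.
	fmt.Printf("attach disk list(%s) returned with %v\n", diskMap)
}

So the entries remain readable once the pattern is known: the value after "=" is the AttachDiskOptions struct printed with %v, which appears to carry the caching mode, disk name, a write-accelerator flag and the LUN.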
I0123 03:39:18.345318 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-27089192-vmss000000","volume_capability":{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-8ca3fbd1-8fe9-4b69-b929-63217156ef90","csi.storage.k8s.io/pvc/name":"pvc-jmd9w","csi.storage.k8s.io/pvc/namespace":"azuredisk-783","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-8ca3fbd1-8fe9-4b69-b929-63217156ef90"} I0123 03:39:18.366387 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1192 I0123 03:39:18.366894 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-8ca3fbd1-8fe9-4b69-b929-63217156ef90. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-8ca3fbd1-8fe9-4b69-b929-63217156ef90 to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). I0123 03:39:18.366940 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-8ca3fbd1-8fe9-4b69-b929-63217156ef90 to node k8s-agentpool-27089192-vmss000000 I0123 03:39:18.366978 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-8ca3fbd1-8fe9-4b69-b929-63217156ef90 lun 1 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-8ca3fbd1-8fe9-4b69-b929-63217156ef90:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8ca3fbd1-8fe9-4b69-b929-63217156ef90 false 1})] I0123 03:39:18.367047 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-8ca3fbd1-8fe9-4b69-b929-63217156ef90:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8ca3fbd1-8fe9-4b69-b929-63217156ef90 false 1})]) I0123 03:39:18.533228 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-8ca3fbd1-8fe9-4b69-b929-63217156ef90:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8ca3fbd1-8fe9-4b69-b929-63217156ef90 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 03:39:28.830546 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 03:39:28.830584 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 03:39:28.830606 1 controllerserver.go:413] Attach operation successful: volume 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-8ca3fbd1-8fe9-4b69-b929-63217156ef90 attached to node k8s-agentpool-27089192-vmss000000. I0123 03:39:28.830622 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-8ca3fbd1-8fe9-4b69-b929-63217156ef90 to node k8s-agentpool-27089192-vmss000000 successfully I0123 03:39:28.830675 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.463789911 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-8ca3fbd1-8fe9-4b69-b929-63217156ef90" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 03:39:28.830705 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} ... skipping 41 lines ... I0123 03:40:25.965826 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-27089192-vmss000001","volume_capability":{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-8ca3fbd1-8fe9-4b69-b929-63217156ef90","csi.storage.k8s.io/pvc/name":"pvc-jmd9w","csi.storage.k8s.io/pvc/namespace":"azuredisk-783","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-8ca3fbd1-8fe9-4b69-b929-63217156ef90"} I0123 03:40:26.037660 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1192 I0123 03:40:26.038071 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-8ca3fbd1-8fe9-4b69-b929-63217156ef90. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-8ca3fbd1-8fe9-4b69-b929-63217156ef90 to node k8s-agentpool-27089192-vmss000001 (vmState Succeeded). 
I0123 03:40:26.038104 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-8ca3fbd1-8fe9-4b69-b929-63217156ef90 to node k8s-agentpool-27089192-vmss000001 I0123 03:40:26.038142 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-8ca3fbd1-8fe9-4b69-b929-63217156ef90 lun 0 to node k8s-agentpool-27089192-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-8ca3fbd1-8fe9-4b69-b929-63217156ef90:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8ca3fbd1-8fe9-4b69-b929-63217156ef90 false 0})] I0123 03:40:26.038181 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-8ca3fbd1-8fe9-4b69-b929-63217156ef90:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8ca3fbd1-8fe9-4b69-b929-63217156ef90 false 0})]) I0123 03:40:26.233517 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-8ca3fbd1-8fe9-4b69-b929-63217156ef90:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8ca3fbd1-8fe9-4b69-b929-63217156ef90 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 03:40:30.829608 1 azure_controller_vmss.go:252] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-8ca3fbd1-8fe9-4b69-b929-63217156ef90:pvc-8ca3fbd1-8fe9-4b69-b929-63217156ef90]) returned with <nil> I0123 03:40:30.829672 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 03:40:30.829692 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 03:40:30.829705 1 azure_controller_common.go:422] azureDisk - detach disk(pvc-8ca3fbd1-8fe9-4b69-b929-63217156ef90, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-8ca3fbd1-8fe9-4b69-b929-63217156ef90) succeeded I0123 03:40:30.829720 1 controllerserver.go:480] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-8ca3fbd1-8fe9-4b69-b929-63217156ef90 from node k8s-agentpool-27089192-vmss000000 successfully I0123 03:40:30.829770 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=5.265396738 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-8ca3fbd1-8fe9-4b69-b929-63217156ef90" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" 
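Editor's note: each publish/unpublish cycle ends with an "Observed Request Latency" record (roughly 5 s for the detach above and 10-15 s for attaches, dominated by the VMSS update). The sketch below shows how such a latency metric is commonly recorded with Prometheus; the metric name, buckets and label set are assumptions for illustration, not the driver's azure_metrics.go definitions.

package main

import (
	"fmt"
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// Illustrative histogram; name and labels are assumed, mirroring only the
// fields visible in the "Observed Request Latency" log lines.
var opLatency = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name:    "azuredisk_csi_operation_duration_seconds",
		Help:    "Latency of controller publish/unpublish operations.",
		Buckets: prometheus.ExponentialBuckets(0.5, 2, 8), // 0.5s .. 64s
	},
	[]string{"request", "resource_group", "result_code"},
)

func init() { prometheus.MustRegister(opLatency) }

// observed wraps an operation and records how long it took, matching the
// latency_seconds / result_code fields in the logs.
func observed(request, resourceGroup string, op func() error) error {
	start := time.Now()
	err := op()
	result := "succeeded"
	if err != nil {
		result = "failed"
	}
	opLatency.WithLabelValues(request, resourceGroup, result).Observe(time.Since(start).Seconds())
	return err
}

func main() {
	_ = observed("azuredisk_csi_driver_controller_publish_volume", "kubetest-oduib2ov", func() error {
		time.Sleep(10 * time.Millisecond) // placeholder for the attach call
		return nil
	})
	fmt.Println("latency recorded")
}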
... skipping 39 lines ... I0123 03:41:59.274298 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-27089192-vmss000000","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":5}},"volume_context":{"cachingmode":"None","csi.storage.k8s.io/pv/name":"pvc-ae9fb660-9b31-4f44-9269-dc8c94463607","csi.storage.k8s.io/pvc/name":"pvc-9vx95","csi.storage.k8s.io/pvc/namespace":"azuredisk-7920","maxshares":"2","requestedsizegib":"10","skuname":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607"} I0123 03:41:59.305127 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1214 I0123 03:41:59.305468 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-ae9fb660-9b31-4f44-9269-dc8c94463607. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607 to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). I0123 03:41:59.305509 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607 to node k8s-agentpool-27089192-vmss000000 I0123 03:41:59.305553 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607 lun 0 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607:%!s(*provider.AttachDiskOptions=&{None pvc-ae9fb660-9b31-4f44-9269-dc8c94463607 false 0})] I0123 03:41:59.305604 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607:%!s(*provider.AttachDiskOptions=&{None pvc-ae9fb660-9b31-4f44-9269-dc8c94463607 false 0})]) I0123 03:41:59.477385 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607:%!s(*provider.AttachDiskOptions=&{None pvc-ae9fb660-9b31-4f44-9269-dc8c94463607 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 03:42:00.969640 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0123 03:42:00.969670 1 utils.go:78] GRPC request: 
{"node_id":"k8s-agentpool-27089192-vmss000001","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":5}},"volume_context":{"cachingmode":"None","csi.storage.k8s.io/pv/name":"pvc-ae9fb660-9b31-4f44-9269-dc8c94463607","csi.storage.k8s.io/pvc/name":"pvc-9vx95","csi.storage.k8s.io/pvc/namespace":"azuredisk-7920","maxshares":"2","requestedsizegib":"10","skuname":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607"} I0123 03:42:01.115320 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1692 I0123 03:42:01.115705 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-ae9fb660-9b31-4f44-9269-dc8c94463607. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607 to node k8s-agentpool-27089192-vmss000001 (vmState Succeeded). I0123 03:42:01.115747 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607 to node k8s-agentpool-27089192-vmss000001 I0123 03:42:01.115789 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607 lun 0 to node k8s-agentpool-27089192-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607:%!s(*provider.AttachDiskOptions=&{None pvc-ae9fb660-9b31-4f44-9269-dc8c94463607 false 0})] I0123 03:42:01.115832 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607:%!s(*provider.AttachDiskOptions=&{None pvc-ae9fb660-9b31-4f44-9269-dc8c94463607 false 0})]) I0123 03:42:01.280523 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607:%!s(*provider.AttachDiskOptions=&{None pvc-ae9fb660-9b31-4f44-9269-dc8c94463607 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 03:42:09.737493 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 03:42:09.737547 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 03:42:09.737570 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607 attached to node 
k8s-agentpool-27089192-vmss000000. I0123 03:42:09.737839 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607 to node k8s-agentpool-27089192-vmss000000 successfully I0123 03:42:09.737999 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.432432073 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 03:42:09.738034 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 71 lines ... I0123 03:43:51.387822 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-27089192-vmss000000","volume_capability":{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-87476c85-d959-44b0-b000-ee17b2b64b6e","csi.storage.k8s.io/pvc/name":"pvc-g8sgd","csi.storage.k8s.io/pvc/namespace":"azuredisk-1092","device-setting/device/queue_depth":"17","device-setting/queue/max_sectors_kb":"211","device-setting/queue/nr_requests":"44","device-setting/queue/read_ahead_kb":"256","device-setting/queue/rotational":"0","device-setting/queue/scheduler":"none","device-setting/queue/wbt_lat_usec":"0","perfProfile":"advanced","requestedsizegib":"10","skuname":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-87476c85-d959-44b0-b000-ee17b2b64b6e"} I0123 03:43:51.439811 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1222 I0123 03:43:51.440254 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-87476c85-d959-44b0-b000-ee17b2b64b6e. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-87476c85-d959-44b0-b000-ee17b2b64b6e to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). 
I0123 03:43:51.440288 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-87476c85-d959-44b0-b000-ee17b2b64b6e to node k8s-agentpool-27089192-vmss000000 I0123 03:43:51.440349 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-87476c85-d959-44b0-b000-ee17b2b64b6e lun 0 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-87476c85-d959-44b0-b000-ee17b2b64b6e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-87476c85-d959-44b0-b000-ee17b2b64b6e false 0})] I0123 03:43:51.440503 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-87476c85-d959-44b0-b000-ee17b2b64b6e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-87476c85-d959-44b0-b000-ee17b2b64b6e false 0})]) I0123 03:43:51.639598 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-87476c85-d959-44b0-b000-ee17b2b64b6e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-87476c85-d959-44b0-b000-ee17b2b64b6e false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 03:44:06.779189 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 03:44:06.779229 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 03:44:06.779245 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-87476c85-d959-44b0-b000-ee17b2b64b6e attached to node k8s-agentpool-27089192-vmss000000. I0123 03:44:06.779254 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-87476c85-d959-44b0-b000-ee17b2b64b6e to node k8s-agentpool-27089192-vmss000000 successfully I0123 03:44:06.779293 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=15.339039991 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-87476c85-d959-44b0-b000-ee17b2b64b6e" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 03:44:06.779312 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 31 lines ... 
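Editor's note: the pvc-87476c85 request above carries perfProfile "advanced" plus a set of device-setting/... keys; the controller only passes these through, and on the node side they are applied as per-device sysfs writes. The sketch below is a rough guess at how such keys could map onto /sys/block/<dev>; the prefix handling and path mapping are assumptions based only on the keys visible in the request, not the driver's perf-tuning code.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// applyDeviceSettings writes "device-setting/..." keys from the volume context
// to the matching sysfs attributes of a block device, e.g.
// device-setting/queue/read_ahead_kb=256 -> /sys/block/sdc/queue/read_ahead_kb.
func applyDeviceSettings(dev string, volumeContext map[string]string) error {
	base := filepath.Join("/sys/block", dev)
	for key, value := range volumeContext {
		rel, ok := strings.CutPrefix(key, "device-setting/")
		if !ok {
			continue // e.g. perfProfile, skuname, requestedsizegib
		}
		target := filepath.Join(base, filepath.Clean(rel))
		if !strings.HasPrefix(target, base+string(os.PathSeparator)) {
			return fmt.Errorf("setting %q escapes %s", key, base) // guard against ../ style keys
		}
		if err := os.WriteFile(target, []byte(value), 0o644); err != nil {
			return fmt.Errorf("writing %s: %w", target, err)
		}
	}
	return nil
}

func main() {
	ctx := map[string]string{
		"perfProfile":                        "advanced",
		"device-setting/queue/read_ahead_kb": "256",
		"device-setting/queue/scheduler":     "none",
	}
	if err := applyDeviceSettings("sdc", ctx); err != nil {
		fmt.Println("apply failed:", err) // expected on a machine without writable /sys/block/sdc
	}
}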
I0123 03:45:14.828163 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-27089192-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-fe51d7c4-a8e8-4b12-bf6a-162345e3e5fd","csi.storage.k8s.io/pvc/name":"pvc-azuredisk","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-fe51d7c4-a8e8-4b12-bf6a-162345e3e5fd"} I0123 03:45:14.852049 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1219 I0123 03:45:14.852434 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-fe51d7c4-a8e8-4b12-bf6a-162345e3e5fd. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-fe51d7c4-a8e8-4b12-bf6a-162345e3e5fd to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). I0123 03:45:14.852760 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-fe51d7c4-a8e8-4b12-bf6a-162345e3e5fd to node k8s-agentpool-27089192-vmss000000 I0123 03:45:14.852842 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-fe51d7c4-a8e8-4b12-bf6a-162345e3e5fd lun 0 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-fe51d7c4-a8e8-4b12-bf6a-162345e3e5fd:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-fe51d7c4-a8e8-4b12-bf6a-162345e3e5fd false 0})] I0123 03:45:14.852881 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-fe51d7c4-a8e8-4b12-bf6a-162345e3e5fd:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-fe51d7c4-a8e8-4b12-bf6a-162345e3e5fd false 0})]) I0123 03:45:14.990912 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-fe51d7c4-a8e8-4b12-bf6a-162345e3e5fd:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-fe51d7c4-a8e8-4b12-bf6a-162345e3e5fd false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 03:45:25.186436 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 03:45:25.186578 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 03:45:25.186719 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-fe51d7c4-a8e8-4b12-bf6a-162345e3e5fd attached to node 
k8s-agentpool-27089192-vmss000000. I0123 03:45:25.186801 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-fe51d7c4-a8e8-4b12-bf6a-162345e3e5fd to node k8s-agentpool-27089192-vmss000000 successfully I0123 03:45:25.186891 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.334457852 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-fe51d7c4-a8e8-4b12-bf6a-162345e3e5fd" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 03:45:25.186951 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 18 lines ... I0123 03:45:40.172721 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-27089192-vmss000001","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-a4f7d583-3fd8-4c50-8944-929fbd9e07f8","csi.storage.k8s.io/pvc/name":"persistent-storage-statefulset-azuredisk-0","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-a4f7d583-3fd8-4c50-8944-929fbd9e07f8"} I0123 03:45:40.197309 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1248 I0123 03:45:40.197632 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-a4f7d583-3fd8-4c50-8944-929fbd9e07f8. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-a4f7d583-3fd8-4c50-8944-929fbd9e07f8 to node k8s-agentpool-27089192-vmss000001 (vmState Succeeded). 
I0123 03:45:40.197666 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-a4f7d583-3fd8-4c50-8944-929fbd9e07f8 to node k8s-agentpool-27089192-vmss000001 I0123 03:45:40.197704 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-a4f7d583-3fd8-4c50-8944-929fbd9e07f8 lun 0 to node k8s-agentpool-27089192-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-a4f7d583-3fd8-4c50-8944-929fbd9e07f8:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a4f7d583-3fd8-4c50-8944-929fbd9e07f8 false 0})] I0123 03:45:40.197745 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-a4f7d583-3fd8-4c50-8944-929fbd9e07f8:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a4f7d583-3fd8-4c50-8944-929fbd9e07f8 false 0})]) I0123 03:45:40.361665 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-a4f7d583-3fd8-4c50-8944-929fbd9e07f8:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a4f7d583-3fd8-4c50-8944-929fbd9e07f8 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 03:45:50.481168 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000001) successfully I0123 03:45:50.481247 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000001) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 03:45:50.481289 1 controllerserver.go:413] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-a4f7d583-3fd8-4c50-8944-929fbd9e07f8 attached to node k8s-agentpool-27089192-vmss000001. I0123 03:45:50.481306 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-a4f7d583-3fd8-4c50-8944-929fbd9e07f8 to node k8s-agentpool-27089192-vmss000001 successfully I0123 03:45:50.481357 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=10.283725209 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-a4f7d583-3fd8-4c50-8944-929fbd9e07f8" node="k8s-agentpool-27089192-vmss000001" result_code="succeeded" I0123 03:45:50.481375 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 18 lines ... 
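Editor's note: the GRPC request JSON above is easier to read against the CSI Go types: "AccessType":{"Mount":{...}} versus {"Block":{}} selects filesystem versus raw-block staging, and "access_mode":{"mode":7} / {"mode":5} are numeric values of the csi.VolumeCapability_AccessMode_Mode enum (7 = SINGLE_NODE_MULTI_WRITER, 5 = MULTI_NODE_MULTI_WRITER, the latter used with the maxshares=2 shared disk earlier in this log). The snippet below rebuilds two of the logged requests with the CSI spec package; volume IDs are abbreviated and this is test-style client code, not driver code.

package main

import (
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

func main() {
	// Equivalent of the mode-7 Mount request logged for pvc-a4f7d583-...: a
	// filesystem volume a single node may mount for multiple writers.
	mountReq := &csi.ControllerPublishVolumeRequest{
		VolumeId: "/subscriptions/.../disks/pvc-a4f7d583-3fd8-4c50-8944-929fbd9e07f8",
		NodeId:   "k8s-agentpool-27089192-vmss000001",
		VolumeCapability: &csi.VolumeCapability{
			AccessType: &csi.VolumeCapability_Mount{Mount: &csi.VolumeCapability_MountVolume{}},
			AccessMode: &csi.VolumeCapability_AccessMode{
				Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_MULTI_WRITER, // "mode":7 in the JSON
			},
		},
	}

	// Equivalent of the mode-5 Block request logged for the maxshares=2 disk
	// pvc-ae9fb660-...: a raw block volume shareable across nodes.
	blockReq := &csi.ControllerPublishVolumeRequest{
		VolumeId: "/subscriptions/.../disks/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607",
		NodeId:   "k8s-agentpool-27089192-vmss000000",
		VolumeCapability: &csi.VolumeCapability{
			AccessType: &csi.VolumeCapability_Block{Block: &csi.VolumeCapability_BlockVolume{}},
			AccessMode: &csi.VolumeCapability_AccessMode{
				Mode: csi.VolumeCapability_AccessMode_MULTI_NODE_MULTI_WRITER, // "mode":5 in the JSON
			},
		},
		VolumeContext: map[string]string{"maxshares": "2", "cachingmode": "None"},
	}

	fmt.Println(mountReq.GetVolumeCapability().GetAccessMode().GetMode())
	fmt.Println(blockReq.GetVolumeCapability().GetAccessMode().GetMode())
}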
I0123 03:46:07.861829 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool-27089192-vmss000000","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-103933bf-c251-483d-8e42-1dc7c1a593c8","csi.storage.k8s.io/pvc/name":"persistent-storage-statefulset-azuredisk-nonroot-0","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-103933bf-c251-483d-8e42-1dc7c1a593c8"} I0123 03:46:07.927298 1 util.go:124] Send.sendRequest got response with ContentLength -1, StatusCode 200 and responseBody length 1256 I0123 03:46:07.927768 1 controllerserver.go:383] GetDiskLun returned: cannot find Lun for disk pvc-103933bf-c251-483d-8e42-1dc7c1a593c8. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-103933bf-c251-483d-8e42-1dc7c1a593c8 to node k8s-agentpool-27089192-vmss000000 (vmState Succeeded). I0123 03:46:07.927798 1 controllerserver.go:408] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-103933bf-c251-483d-8e42-1dc7c1a593c8 to node k8s-agentpool-27089192-vmss000000 I0123 03:46:07.927834 1 azure_controller_common.go:255] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-103933bf-c251-483d-8e42-1dc7c1a593c8 lun 1 to node k8s-agentpool-27089192-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-103933bf-c251-483d-8e42-1dc7c1a593c8:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-103933bf-c251-483d-8e42-1dc7c1a593c8 false 1})] I0123 03:46:07.927880 1 azure_controller_vmss.go:110] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-103933bf-c251-483d-8e42-1dc7c1a593c8:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-103933bf-c251-483d-8e42-1dc7c1a593c8 false 1})]) I0123 03:46:08.092841 1 azure_controller_vmss.go:122] azureDisk - update(kubetest-oduib2ov): vm(k8s-agentpool-27089192-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-oduib2ov/providers/microsoft.compute/disks/pvc-103933bf-c251-483d-8e42-1dc7c1a593c8:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-103933bf-c251-483d-8e42-1dc7c1a593c8 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0123 03:46:23.279942 1 azure_vmss_cache.go:275] DeleteCacheForNode(kubetest-oduib2ov, k8s-agentpool-27089192-vmss, k8s-agentpool-27089192-vmss000000) successfully I0123 03:46:23.279991 1 azure_vmss_cache.go:313] updateCache(k8s-agentpool-27089192-vmss, kubetest-oduib2ov, k8s-agentpool-27089192-vmss000000) for cacheKey(kubetest-oduib2ov/k8s-agentpool-27089192-vmss) updated successfully I0123 03:46:23.280048 1 controllerserver.go:413] Attach operation successful: volume 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-103933bf-c251-483d-8e42-1dc7c1a593c8 attached to node k8s-agentpool-27089192-vmss000000. I0123 03:46:23.280069 1 controllerserver.go:433] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-103933bf-c251-483d-8e42-1dc7c1a593c8 to node k8s-agentpool-27089192-vmss000000 successfully I0123 03:46:23.280150 1 azure_metrics.go:115] "Observed Request Latency" latency_seconds=15.352352648 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-oduib2ov" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-103933bf-c251-483d-8e42-1dc7c1a593c8" node="k8s-agentpool-27089192-vmss000000" result_code="succeeded" I0123 03:46:23.280217 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} ... skipping 13 lines ... Platform: linux/amd64 Topology Key: topology.disk.csi.azure.com/zone Streaming logs below: I0123 02:47:19.490372 1 azuredisk.go:175] driver userAgent: disk.csi.azure.com/v1.27.0-40b4dae4d1048ba3257f4c772609c4e0a0744e0f e2e-test I0123 02:47:19.490931 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider I0123 02:47:19.517518 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0123 02:47:19.517563 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider I0123 02:47:19.517572 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0123 02:47:19.517595 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully I0123 02:47:19.518439 1 azure_auth.go:253] Using AzurePublicCloud environment I0123 02:47:19.518476 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token I0123 02:47:19.518499 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 ... skipping 201 lines ... 
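Editor's note: the plugin startup lines above show the credential lookup order: try the kube-system/azure-cloud-provider secret, and when it is absent fall back to the file named by AZURE_CREDENTIAL_FILE (default /etc/kubernetes/azure.json). Below is a hedged client-go sketch of that fallback; the function name and the "cloud-config" secret key are assumptions, and wiring a real clientset is omitted.

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// loadCloudConfig mirrors the startup logs: prefer the in-cluster secret, fall
// back to the azure.json credential file. Illustrative only; the driver's
// azure_disk_utils.go handles more cases (custom secret name/namespace, etc.).
func loadCloudConfig(ctx context.Context, cs kubernetes.Interface) ([]byte, error) {
	secret, err := cs.CoreV1().Secrets("kube-system").Get(ctx, "azure-cloud-provider", metav1.GetOptions{})
	if err == nil {
		if cfg, ok := secret.Data["cloud-config"]; ok { // key name assumed
			return cfg, nil
		}
	}
	// "use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json"
	path := os.Getenv("AZURE_CREDENTIAL_FILE")
	if path == "" {
		path = "/etc/kubernetes/azure.json"
	}
	return os.ReadFile(path)
}

func main() {
	// Building a clientset needs in-cluster or kubeconfig setup; omitted here.
	fmt.Println("see loadCloudConfig for the lookup order")
}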
I0123 03:42:56.812248 1 utils.go:84] GRPC response: {} I0123 03:42:56.837546 1 utils.go:77] GRPC call: /csi.v1.Node/NodeUnstageVolume I0123 03:42:56.837567 1 utils.go:78] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607"} I0123 03:42:56.837657 1 nodeserver.go:201] NodeUnstageVolume: unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607 I0123 03:42:56.837679 1 mount_helper_common.go:93] unmounting "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607" (corruptedMount: false, mounterCanSkipMountPointChecks: true) I0123 03:42:56.837692 1 mount_linux.go:362] Unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607 I0123 03:42:56.839856 1 mount_linux.go:375] ignoring 'not mounted' error for /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607 I0123 03:42:56.839868 1 mount_helper_common.go:150] Warning: deleting path "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607" I0123 03:42:56.839951 1 nodeserver.go:206] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607 successfully I0123 03:42:56.839964 1 utils.go:84] GRPC response: {} I0123 03:45:56.041343 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0123 03:45:56.041376 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a4f7d583-3fd8-4c50-8944-929fbd9e07f8/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-a4f7d583-3fd8-4c50-8944-929fbd9e07f8","csi.storage.k8s.io/pvc/name":"persistent-storage-statefulset-azuredisk-0","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-a4f7d583-3fd8-4c50-8944-929fbd9e07f8"} I0123 03:45:57.844259 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ ... skipping 33 lines ... 
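Editor's note: the NodeUnstageVolume lines above (mount_helper_common.go / mount_linux.go) come from the k8s.io/mount-utils helpers: unmount the staging path, tolerate "not mounted", then delete the directory. A minimal sketch calling that library directly, with the staging path shortened to a placeholder:

package main

import (
	"fmt"

	mount "k8s.io/mount-utils"
)

// nodeUnstage mirrors what the NodeUnstageVolume logs show: CleanupMountPoint
// unmounts the staging path (ignoring "not mounted" errors) and removes the
// directory, which is exactly where the mount_helper_common.go lines originate.
func nodeUnstage(stagingTargetPath string) error {
	mounter := mount.New("") // default /bin/mount-based implementation on Linux
	return mount.CleanupMountPoint(stagingTargetPath, mounter, true /* extensiveMountPointCheck */)
}

func main() {
	err := nodeUnstage("/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-example")
	fmt.Println("cleanup result:", err)
}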
Platform: linux/amd64 Topology Key: topology.disk.csi.azure.com/zone Streaming logs below: I0123 02:47:21.362096 1 azuredisk.go:175] driver userAgent: disk.csi.azure.com/v1.27.0-40b4dae4d1048ba3257f4c772609c4e0a0744e0f e2e-test I0123 02:47:21.362568 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider I0123 02:47:21.383415 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0123 02:47:21.383436 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider I0123 02:47:21.383444 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0123 02:47:21.383501 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully I0123 02:47:21.384245 1 azure_auth.go:253] Using AzurePublicCloud environment I0123 02:47:21.384306 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token I0123 02:47:21.384337 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 ... skipping 188 lines ... I0123 02:52:48.126878 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0123 02:52:48.141299 1 mount_linux.go:570] Output: "" I0123 02:52:48.141324 1 mount_linux.go:529] Disk "/dev/disk/azure/scsi1/lun0" appears to be unformatted, attempting to format as type: "ext4" with options: [-F -m0 /dev/disk/azure/scsi1/lun0] I0123 02:52:48.631747 1 mount_linux.go:539] Disk successfully formatted (mkfs): ext4 - /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount I0123 02:52:48.631780 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount I0123 02:52:48.631804 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount) E0123 02:52:48.640804 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. 
E0123 02:52:48.640862 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. I0123 02:52:49.244925 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0123 02:52:49.244951 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5","csi.storage.k8s.io/pvc/name":"pvc-brz4l","csi.storage.k8s.io/pvc/namespace":"azuredisk-5466","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5"} I0123 02:52:50.972552 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0123 02:52:50.972595 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. 
perfProfile none accountType StandardSSD_ZRS I0123 02:52:50.972956 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount with mount options([invalid mount options]) I0123 02:52:50.972978 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0123 02:52:50.979354 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=ext4\n" I0123 02:52:50.979379 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0123 02:52:50.994270 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount I0123 02:52:50.994485 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount) E0123 02:52:51.003572 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. E0123 02:52:51.003627 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. 
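Editor's note: the repeated NodeStageVolume failures above are the expected outcome of a negative test: the volume capability carries mount_flags ["invalid","mount","options"], the node plugin appends "defaults" and hands them to mount, and mount exits with status 32 ("wrong fs type, bad option, ..."), which the driver surfaces as a gRPC Internal error. Note that the first attempt formats the disk (blkid reports it unformatted, mkfs.ext4 runs), so later retries see an ext4 filesystem yet still fail on the same bad options. A minimal sketch of the same format-and-mount step with k8s.io/mount-utils, paths and options copied from the log and the error wrapping illustrative:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	mount "k8s.io/mount-utils"
	utilexec "k8s.io/utils/exec"
)

// nodeStageMount sketches the failing step: SafeFormatAndMount checks the device
// with blkid, formats it with mkfs.ext4 if needed, then mounts it with the
// caller-supplied options. With options like "invalid,mount,options" the mount
// binary exits with status 32 and the CSI call returns codes.Internal.
func nodeStageMount(device, target string, options []string) error {
	m := &mount.SafeFormatAndMount{Interface: mount.New(""), Exec: utilexec.New()}
	if err := m.FormatAndMount(device, target, "ext4", options); err != nil {
		return status.Errorf(codes.Internal,
			"could not format %s, and mount it at %s, failed with %v", device, target, err)
	}
	return nil
}

func main() {
	err := nodeStageMount(
		"/dev/disk/azure/scsi1/lun0",
		"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount",
		[]string{"invalid", "mount", "options", "defaults"},
	)
	fmt.Println(err) // on the test node this reproduces the "exit status 32" failure
}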
I0123 02:52:52.094520 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0123 02:52:52.094546 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5","csi.storage.k8s.io/pvc/name":"pvc-brz4l","csi.storage.k8s.io/pvc/namespace":"azuredisk-5466","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5"} I0123 02:52:53.732747 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0123 02:52:53.732802 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType StandardSSD_ZRS I0123 02:52:53.733145 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount with mount options([invalid mount options]) I0123 02:52:53.733166 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0123 02:52:53.741930 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=ext4\n" I0123 02:52:53.741950 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0123 02:52:53.757350 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount I0123 02:52:53.757415 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount) E0123 02:52:53.766377 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. 
E0123 02:52:53.766416 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. I0123 02:52:55.823423 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0123 02:52:55.823454 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["invalid","mount","options"]}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5","csi.storage.k8s.io/pvc/name":"pvc-brz4l","csi.storage.k8s.io/pvc/namespace":"azuredisk-5466","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5"} I0123 02:52:57.462440 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0123 02:52:57.462482 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. 
perfProfile none accountType StandardSSD_ZRS I0123 02:52:57.462869 1 nodeserver.go:157] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount with mount options([invalid mount options]) I0123 02:52:57.462895 1 mount_linux.go:567] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) I0123 02:52:57.472609 1 mount_linux.go:570] Output: "DEVNAME=/dev/disk/azure/scsi1/lun0\nTYPE=ext4\n" I0123 02:52:57.472632 1 mount_linux.go:453] Checking for issues with fsck on disk: /dev/disk/azure/scsi1/lun0 I0123 02:52:57.486122 1 mount_linux.go:557] Attempting to mount disk /dev/disk/azure/scsi1/lun0 in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount I0123 02:52:57.486154 1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount) E0123 02:52:57.494654 1 mount_linux.go:232] Mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. E0123 02:52:57.494695 1 utils.go:82] GRPC error: rpc error: code = Internal desc = could not format /dev/disk/azure/scsi1/lun0(lun: 0), and mount it at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount, failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t ext4 -o invalid,mount,options,defaults /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62a0ed12-d971-4974-aabf-b44d4f2d9da5/globalmount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. I0123 02:55:14.301586 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0123 02:55:14.301608 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-16089001-53d1-4708-9415-b3d9a8d37f8a","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-16089001-53d1-4708-9415-b3d9a8d37f8a","csi.storage.k8s.io/pvc/name":"pvc-k57mj","csi.storage.k8s.io/pvc/namespace":"azuredisk-2790","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-16089001-53d1-4708-9415-b3d9a8d37f8a"} I0123 02:55:15.962263 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0123 02:55:15.962310 1 nodeserver.go:116] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. 
perfProfile none accountType StandardSSD_ZRS I0123 02:55:15.962326 1 utils.go:84] GRPC response: {} I0123 02:55:15.972839 1 utils.go:77] GRPC call: /csi.v1.Node/NodePublishVolume ... skipping 16 lines ... I0123 02:55:20.023701 1 utils.go:84] GRPC response: {} I0123 02:55:20.062306 1 utils.go:77] GRPC call: /csi.v1.Node/NodeUnstageVolume I0123 02:55:20.062328 1 utils.go:78] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-16089001-53d1-4708-9415-b3d9a8d37f8a","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-16089001-53d1-4708-9415-b3d9a8d37f8a"} I0123 02:55:20.062409 1 nodeserver.go:201] NodeUnstageVolume: unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-16089001-53d1-4708-9415-b3d9a8d37f8a I0123 02:55:20.062428 1 mount_helper_common.go:93] unmounting "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-16089001-53d1-4708-9415-b3d9a8d37f8a" (corruptedMount: false, mounterCanSkipMountPointChecks: true) I0123 02:55:20.062438 1 mount_linux.go:362] Unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-16089001-53d1-4708-9415-b3d9a8d37f8a I0123 02:55:20.064493 1 mount_linux.go:375] ignoring 'not mounted' error for /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-16089001-53d1-4708-9415-b3d9a8d37f8a I0123 02:55:20.064507 1 mount_helper_common.go:150] Warning: deleting path "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-16089001-53d1-4708-9415-b3d9a8d37f8a" I0123 02:55:20.064597 1 nodeserver.go:206] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-16089001-53d1-4708-9415-b3d9a8d37f8a successfully I0123 02:55:20.064612 1 utils.go:84] GRPC response: {} I0123 02:57:12.054440 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0123 02:57:12.054463 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a7fc0db2-a3eb-40e8-bbe9-b6fce634d931/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-a7fc0db2-a3eb-40e8-bbe9-b6fce634d931","csi.storage.k8s.io/pvc/name":"pvc-62btq","csi.storage.k8s.io/pvc/namespace":"azuredisk-5356","requestedsizegib":"10","resourceGroup":"azuredisk-csi-driver-test-79590c23-9ac9-11ed-95e5-36a1f62e17f0","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/azuredisk-csi-driver-test-79590c23-9ac9-11ed-95e5-36a1f62e17f0/providers/Microsoft.Compute/disks/pvc-a7fc0db2-a3eb-40e8-bbe9-b6fce634d931"} I0123 02:57:13.685595 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ ... skipping 648 lines ... 
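The staging calls beginning at 02:55:14 use AccessType Block with a staging path under .../volumeDevices/staging/..., which is why NodeStageVolume returns an empty response without any blkid/fsck/mount step and why the later NodeUnstageVolume logs "ignoring 'not mounted' error" before deleting the staging file. A minimal sketch of the kind of raw-block claim and consumer that exercise this path follows; the object names and the managed-csi class are assumptions for illustration, not taken from the test.

# Hypothetical raw-block PVC and pod; volumeMode: Block selects the
# volumeDevices staging path seen above (no filesystem is created).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc-example            # assumed name
spec:
  accessModes: ["ReadWriteOnce"]
  volumeMode: Block
  storageClassName: managed-csi           # assumed class backed by disk.csi.azure.com
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: raw-block-consumer-example        # assumed name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeDevices:                       # exposed as a device node, not a mounted filesystem
        - name: data
          devicePath: /dev/xvda
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: raw-block-pvc-example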
I0123 03:21:09.151501 1 utils.go:84] GRPC response: {} I0123 03:21:09.213108 1 utils.go:77] GRPC call: /csi.v1.Node/NodeUnstageVolume I0123 03:21:09.213134 1 utils.go:78] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-c6bace4c-0104-40b1-a5e9-1ef0c48084c6","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-c6bace4c-0104-40b1-a5e9-1ef0c48084c6"} I0123 03:21:09.213217 1 nodeserver.go:201] NodeUnstageVolume: unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-c6bace4c-0104-40b1-a5e9-1ef0c48084c6 I0123 03:21:09.213242 1 mount_helper_common.go:93] unmounting "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-c6bace4c-0104-40b1-a5e9-1ef0c48084c6" (corruptedMount: false, mounterCanSkipMountPointChecks: true) I0123 03:21:09.213256 1 mount_linux.go:362] Unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-c6bace4c-0104-40b1-a5e9-1ef0c48084c6 I0123 03:21:09.215529 1 mount_linux.go:375] ignoring 'not mounted' error for /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-c6bace4c-0104-40b1-a5e9-1ef0c48084c6 I0123 03:21:09.215540 1 mount_helper_common.go:150] Warning: deleting path "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-c6bace4c-0104-40b1-a5e9-1ef0c48084c6" I0123 03:21:09.215619 1 nodeserver.go:206] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-c6bace4c-0104-40b1-a5e9-1ef0c48084c6 successfully I0123 03:21:09.215631 1 utils.go:84] GRPC response: {} I0123 03:23:06.330847 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0123 03:23:06.330871 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-44e817bf-3bc0-4f2c-9f89-2638b3471b94/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-44e817bf-3bc0-4f2c-9f89-2638b3471b94","csi.storage.k8s.io/pvc/name":"pvc-b2nzg","csi.storage.k8s.io/pvc/namespace":"azuredisk-8582","requestedsizegib":"10","skuName":"StandardSSD_ZRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-44e817bf-3bc0-4f2c-9f89-2638b3471b94"} I0123 03:23:07.931909 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ ... skipping 579 lines ... 
I0123 03:42:57.227694 1 utils.go:84] GRPC response: {} I0123 03:42:57.271197 1 utils.go:77] GRPC call: /csi.v1.Node/NodeUnstageVolume I0123 03:42:57.271217 1 utils.go:78] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607"} I0123 03:42:57.271267 1 nodeserver.go:201] NodeUnstageVolume: unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607 I0123 03:42:57.271296 1 mount_helper_common.go:93] unmounting "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607" (corruptedMount: false, mounterCanSkipMountPointChecks: true) I0123 03:42:57.271308 1 mount_linux.go:362] Unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607 I0123 03:42:57.273334 1 mount_linux.go:375] ignoring 'not mounted' error for /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607 I0123 03:42:57.273345 1 mount_helper_common.go:150] Warning: deleting path "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607" I0123 03:42:57.273402 1 nodeserver.go:206] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-ae9fb660-9b31-4f44-9269-dc8c94463607 successfully I0123 03:42:57.273413 1 utils.go:84] GRPC response: {} I0123 03:44:23.169332 1 utils.go:77] GRPC call: /csi.v1.Node/NodeStageVolume I0123 03:44:23.169357 1 utils.go:78] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-87476c85-d959-44b0-b000-ee17b2b64b6e/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["barrier=1","acl"]}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-87476c85-d959-44b0-b000-ee17b2b64b6e","csi.storage.k8s.io/pvc/name":"pvc-g8sgd","csi.storage.k8s.io/pvc/namespace":"azuredisk-1092","device-setting/device/queue_depth":"17","device-setting/queue/max_sectors_kb":"211","device-setting/queue/nr_requests":"44","device-setting/queue/read_ahead_kb":"256","device-setting/queue/rotational":"0","device-setting/queue/scheduler":"none","device-setting/queue/wbt_lat_usec":"0","perfProfile":"advanced","requestedsizegib":"10","skuname":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1674442044159-8081-disk.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-oduib2ov/providers/Microsoft.Compute/disks/pvc-87476c85-d959-44b0-b000-ee17b2b64b6e"} I0123 03:44:24.842265 1 azure_common_linux.go:185] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ ... skipping 100 lines ... 
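The NodeStageVolume request at 03:44:23 carries perfProfile "advanced" plus a set of device-setting/* keys in its volume_context; for dynamically provisioned volumes these values originate from StorageClass parameters, and with the advanced profile the driver is expected to apply each device-setting/* entry to the corresponding sysfs attribute of the attached data disk. A sketch of a StorageClass that would produce those keys is below; the class name is assumed, while the parameter keys, values, and mount options are copied from the request above.

# Hypothetical StorageClass yielding the perfProfile/device-setting
# volume_context seen in the 03:44:23 NodeStageVolume request.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: perf-advanced-example             # assumed name
provisioner: disk.csi.azure.com
parameters:
  skuName: StandardSSD_LRS
  perfProfile: advanced
  device-setting/device/queue_depth: "17"
  device-setting/queue/max_sectors_kb: "211"
  device-setting/queue/nr_requests: "44"
  device-setting/queue/read_ahead_kb: "256"
  device-setting/queue/rotational: "0"
  device-setting/queue/scheduler: "none"
  device-setting/queue/wbt_lat_usec: "0"
mountOptions:
  - barrier=1
  - acl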
Platform: linux/amd64 Topology Key: topology.disk.csi.azure.com/zone Streaming logs below: I0123 02:47:13.989794 1 azuredisk.go:175] driver userAgent: disk.csi.azure.com/v1.27.0-40b4dae4d1048ba3257f4c772609c4e0a0744e0f e2e-test I0123 02:47:13.990376 1 azure_disk_utils.go:162] reading cloud config from secret kube-system/azure-cloud-provider I0123 02:47:14.031444 1 azure_disk_utils.go:169] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0123 02:47:14.031467 1 azure_disk_utils.go:174] could not read cloud config from secret kube-system/azure-cloud-provider I0123 02:47:14.031476 1 azure_disk_utils.go:184] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0123 02:47:14.031504 1 azure_disk_utils.go:192] read cloud config from file: /etc/kubernetes/azure.json successfully I0123 02:47:14.032553 1 azure_auth.go:253] Using AzurePublicCloud environment I0123 02:47:14.032722 1 azure_auth.go:138] azure: using client_id+client_secret to retrieve access token I0123 02:47:14.032809 1 azure.go:776] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 ... skipping 137 lines ... # HELP go_gc_heap_objects_objects Number of objects, live or unswept, occupying heap memory. # TYPE go_gc_heap_objects_objects gauge go_gc_heap_objects_objects 36206 # HELP go_gc_heap_tiny_allocs_objects_total Count of small allocations that are packed together into blocks. These allocations are counted separately from other allocations because each individual allocation is not tracked by the runtime, only their block. Each block is already accounted for in allocs-by-size and frees-by-size. # TYPE go_gc_heap_tiny_allocs_objects_total counter go_gc_heap_tiny_allocs_objects_total 4787 # HELP go_gc_limiter_last_enabled_gc_cycle GC cycle the last time the GC CPU limiter was enabled. This metric is useful for diagnosing the root cause of an out-of-memory error, because the limiter trades memory for CPU time when the GC's CPU time gets too high. This is most likely to occur with use of SetMemoryLimit. The first GC cycle is cycle 1, so a value of 0 indicates that it was never enabled. # TYPE go_gc_limiter_last_enabled_gc_cycle gauge go_gc_limiter_last_enabled_gc_cycle 0 # HELP go_gc_pauses_seconds Distribution individual GC-related stop-the-world pause latencies. # TYPE go_gc_pauses_seconds histogram go_gc_pauses_seconds_bucket{le="9.999999999999999e-10"} 0 go_gc_pauses_seconds_bucket{le="9.999999999999999e-09"} 0 ... skipping 751 lines ... 
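The startup log at the top of this block shows the credential lookup order: the driver first tries the kube-system/azure-cloud-provider secret, and only because that secret does not exist does it fall back to the file named by AZURE_CREDENTIAL_FILE (/etc/kubernetes/azure.json), authenticating with client_id+client_secret. A sketch of the secret form of that configuration is below; the "cloud-config" key name is an assumption about the driver's convention, and all values are placeholders rather than anything taken from this run.

# Hypothetical secret-based cloud config; when present, it takes precedence
# over /etc/kubernetes/azure.json on this startup path.
apiVersion: v1
kind: Secret
metadata:
  name: azure-cloud-provider
  namespace: kube-system
type: Opaque
stringData:
  cloud-config: |                          # assumed key name
    {
      "cloud": "AzurePublicCloud",
      "tenantId": "<tenant-id>",
      "subscriptionId": "<subscription-id>",
      "aadClientId": "<client-id>",
      "aadClientSecret": "<client-secret>",
      "resourceGroup": "<resource-group>",
      "location": "<region>"
    }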
cloudprovider_azure_op_duration_seconds_bucket{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="kubetest-oduib2ov",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e",le="300"} 56 cloudprovider_azure_op_duration_seconds_bucket{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="kubetest-oduib2ov",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e",le="600"} 56 cloudprovider_azure_op_duration_seconds_bucket{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="kubetest-oduib2ov",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e",le="1200"} 56 cloudprovider_azure_op_duration_seconds_bucket{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="kubetest-oduib2ov",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e",le="+Inf"} 56 cloudprovider_azure_op_duration_seconds_sum{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="kubetest-oduib2ov",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e"} 850.1080248070002 cloudprovider_azure_op_duration_seconds_count{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="kubetest-oduib2ov",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e"} 56 # HELP cloudprovider_azure_op_failure_count [ALPHA] Number of failed Azure service operations # TYPE cloudprovider_azure_op_failure_count counter cloudprovider_azure_op_failure_count{request="azuredisk_csi_driver_controller_delete_volume",resource_group="kubetest-oduib2ov",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e"} 1 # HELP disabled_metric_total [ALPHA] The count of disabled metrics. # TYPE disabled_metric_total counter disabled_metric_total 0 # HELP go_cgo_go_to_c_calls_calls_total Count of calls made from Go to C by the current process. ... skipping 67 lines ... # HELP go_gc_heap_objects_objects Number of objects, live or unswept, occupying heap memory. # TYPE go_gc_heap_objects_objects gauge go_gc_heap_objects_objects 63106 # HELP go_gc_heap_tiny_allocs_objects_total Count of small allocations that are packed together into blocks. These allocations are counted separately from other allocations because each individual allocation is not tracked by the runtime, only their block. Each block is already accounted for in allocs-by-size and frees-by-size. # TYPE go_gc_heap_tiny_allocs_objects_total counter go_gc_heap_tiny_allocs_objects_total 50430 # HELP go_gc_limiter_last_enabled_gc_cycle GC cycle the last time the GC CPU limiter was enabled. This metric is useful for diagnosing the root cause of an out-of-memory error, because the limiter trades memory for CPU time when the GC's CPU time gets too high. This is most likely to occur with use of SetMemoryLimit. The first GC cycle is cycle 1, so a value of 0 indicates that it was never enabled. # TYPE go_gc_limiter_last_enabled_gc_cycle gauge go_gc_limiter_last_enabled_gc_cycle 0 # HELP go_gc_pauses_seconds Distribution individual GC-related stop-the-world pause latencies. # TYPE go_gc_pauses_seconds histogram go_gc_pauses_seconds_bucket{le="9.999999999999999e-10"} 0 go_gc_pauses_seconds_bucket{le="9.999999999999999e-09"} 0 ... skipping 272 lines ... 
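The cloudprovider_azure_op_duration_seconds histogram and cloudprovider_azure_op_failure_count counter exposed above are the natural hooks for monitoring this driver's Azure control-plane calls. The sketch below is illustrative only and assumes the driver's metrics endpoint is already scraped by Prometheus; the group, alert, and recording-rule names are invented, while the metric names and labels come from the output above.

# Hypothetical Prometheus rules built on the metrics shown above.
groups:
  - name: azuredisk-csi-example            # assumed group name
    rules:
      - alert: AzureDiskOperationFailures   # assumed alert name
        expr: increase(cloudprovider_azure_op_failure_count[15m]) > 0
        labels:
          severity: warning
        annotations:
          summary: Azure disk operation failures reported by disk.csi.azure.com
      - record: azuredisk:op_duration_seconds:p95   # assumed recording-rule name
        expr: histogram_quantile(0.95, sum by (le, request) (rate(cloudprovider_azure_op_duration_seconds_bucket[5m])))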
[AfterSuite] /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:165
------------------------------
Summarizing 1 Failure:
[FAIL] Dynamic Provisioning [multi-az] [It] should create a pod, write to its pv, take a volume snapshot, overwrite data in original pv, create another pod from the snapshot, and read unaltered original data from original pv[disk.csi.azure.com]
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:823
Ran 26 of 66 Specs in 4306.852 seconds
FAIL! -- 25 Passed | 1 Failed | 0 Pending | 40 Skipped
You're using deprecated Ginkgo functionality:
=============================================
Support for custom reporters has been removed in V2. Please read the documentation linked to below for Ginkgo's new behavior and for a migration path:
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#removed-custom-reporters
To silence deprecations that can be silenced set the following environment variable:
ACK_GINKGO_DEPRECATIONS=2.4.0
--- FAIL: TestE2E (4306.85s)
FAIL
FAIL sigs.k8s.io/azuredisk-csi-driver/test/e2e 4306.924s
FAIL
make: *** [Makefile:261: e2e-test] Error 1
2023/01/23 03:47:21 process.go:155: Step 'make e2e-test' finished in 1h13m26.863163154s
2023/01/23 03:47:21 aksengine_helpers.go:425: downloading /root/tmp1802577493/log-dump.sh from https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump.sh
2023/01/23 03:47:21 util.go:70: curl https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump.sh
2023/01/23 03:47:21 process.go:153: Running: chmod +x /root/tmp1802577493/log-dump.sh
2023/01/23 03:47:21 process.go:155: Step 'chmod +x /root/tmp1802577493/log-dump.sh' finished in 3.200594ms
2023/01/23 03:47:21 aksengine_helpers.go:425: downloading /root/tmp1802577493/log-dump-daemonset.yaml from https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump-daemonset.yaml
... skipping 69 lines ...
ssh key file /root/.ssh/id_rsa does not exist. Exiting.
2023/01/23 03:47:56 process.go:155: Step 'bash -c /root/tmp1802577493/win-ci-logs-collector.sh kubetest-oduib2ov.westus2.cloudapp.azure.com /root/tmp1802577493 /root/.ssh/id_rsa' finished in 3.604999ms
2023/01/23 03:47:56 aksengine.go:1141: Deleting resource group: kubetest-oduib2ov.
2023/01/23 03:54:59 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2023/01/23 03:54:59 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
2023/01/23 03:54:59 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 261.629227ms
2023/01/23 03:54:59 main.go:328: Something went wrong: encountered 1 errors: [error during make e2e-test: exit status 2]
+ EXIT_VALUE=1
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker f1c26c9890f0
... skipping 4 lines ...