PR | andyzhangx: chore: switch master branch to use v1.19.0
Result | FAILURE
Tests | 1 failed / 13 succeeded
Started |
Elapsed | 1h21m
Revision | 91694ad3fc47b5482aeb80f7f3b9511c64a061da
Refs | 1329
job-version | v1.25.0-alpha.0.477+9d85e18ec0dc09
kubetest-version |
revision | v1.25.0-alpha.0.477+9d85e18ec0dc09
error during make e2e-test: exit status 2 (from junit_runner.xml)
kubetest Check APIReachability
kubetest Deferred TearDown
kubetest DumpClusterLogs
kubetest GetDeployer
kubetest IsUp
kubetest Prepare
kubetest TearDown
kubetest TearDown Previous
kubetest Timeout
kubetest Up
kubetest kubectl version
kubetest list nodes
kubetest test setup
... skipping 222 lines ... 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 11156 100 11156 0 0 205k 0 --:--:-- --:--:-- --:--:-- 205k Downloading https://get.helm.sh/helm-v3.8.2-linux-amd64.tar.gz Verifying checksum... Done. Preparing to install helm into /usr/local/bin helm installed into /usr/local/bin/helm docker pull k8sprow.azurecr.io/azuredisk-csi:v1.19.0-9480cc27b0ee3e0de9a15e6967f197e793523987 || make container-all push-manifest Error response from daemon: manifest for k8sprow.azurecr.io/azuredisk-csi:v1.19.0-9480cc27b0ee3e0de9a15e6967f197e793523987 not found: manifest unknown: manifest tagged by "v1.19.0-9480cc27b0ee3e0de9a15e6967f197e793523987" is not found make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver' CGO_ENABLED=0 GOOS=windows go build -a -ldflags "-X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.driverVersion=v1.19.0-9480cc27b0ee3e0de9a15e6967f197e793523987 -X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.gitCommit=9480cc27b0ee3e0de9a15e6967f197e793523987 -X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.buildDate=2022-05-13T08:10:20Z -extldflags "-static"" -mod vendor -o _output/amd64/azurediskplugin.exe ./pkg/azurediskplugin docker buildx rm container-builder || true error: no builder "container-builder" found docker buildx create --use --name=container-builder container-builder # enable qemu for arm64 build # https://github.com/docker/buildx/issues/464#issuecomment-741507760 docker run --privileged --rm tonistiigi/binfmt --uninstall qemu-aarch64 Unable to find image 'tonistiigi/binfmt:latest' locally ... skipping 1605 lines ... type: string type: object oneOf: - required: ["persistentVolumeClaimName"] - required: ["volumeSnapshotContentName"] volumeSnapshotClassName: description: 'VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default Volume SnapshotClasses: one default per CSI Driver. If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exist for a given CSI Driver and more than one have been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field.' type: string required: - source type: object status: description: status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object. ... skipping 2 lines ... description: 'boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.' type: string creationTime: description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. 
In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown. format: date-time type: string error: description: error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers(i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurrs during the snapshot creation. Upon success, this error field will be cleared. properties: message: description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.' type: string time: description: time is the timestamp when the error was encountered. format: date-time type: string type: object readyToUse: description: readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. type: boolean restoreSize: type: string description: restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ x-kubernetes-int-or-string: true type: object required: - spec type: object ... skipping 60 lines ... type: string volumeSnapshotContentName: description: volumeSnapshotContentName specifies the name of a pre-existing VolumeSnapshotContent object representing an existing volume snapshot. This field should be set if the snapshot already exists and only needs a representation in Kubernetes. This field is immutable. type: string type: object volumeSnapshotClassName: description: 'VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default Volume SnapshotClasses: one default per CSI Driver. 
If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exist for a given CSI Driver and more than one have been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field.' type: string required: - source type: object status: description: status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object. ... skipping 2 lines ... description: 'boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.' type: string creationTime: description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown. format: date-time type: string error: description: error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers(i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurrs during the snapshot creation. Upon success, this error field will be cleared. properties: message: description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.' type: string time: description: time is the timestamp when the error was encountered. format: date-time type: string type: object readyToUse: description: readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. type: boolean restoreSize: type: string description: restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. 
For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ x-kubernetes-int-or-string: true type: object required: - spec type: object ... skipping 254 lines ... description: status represents the current information of a snapshot. properties: creationTime: description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. On Unix, the command `date +%s%N` returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC. format: int64 type: integer error: description: error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared. properties: message: description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.' type: string time: description: time is the timestamp when the error was encountered. format: date-time type: string type: object readyToUse: description: readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. type: boolean restoreSize: description: restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. format: int64 minimum: 0 type: integer snapshotHandle: description: snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress. type: string type: object required: - spec type: object served: true ... skipping 108 lines ... description: status represents the current information of a snapshot. 
properties: creationTime: description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. On Unix, the command `date +%s%N` returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC. format: int64 type: integer error: description: error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared. properties: message: description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.' type: string time: description: time is the timestamp when the error was encountered. format: date-time type: string type: object readyToUse: description: readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. type: boolean restoreSize: description: restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. format: int64 minimum: 0 type: integer snapshotHandle: description: snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress. type: string type: object required: - spec type: object served: true ... skipping 861 lines ... image: "mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.4.0" args: - "-csi-address=$(ADDRESS)" - "-v=2" - "-leader-election" - "--leader-election-namespace=kube-system" - '-handle-volume-inuse-error=false' - '-feature-gates=RecoverVolumeExpansionFailure=true' - "-timeout=240s" env: - name: ADDRESS value: /csi/csi.sock volumeMounts: ... skipping 314 lines ... 
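For reference, the VolumeSnapshot API whose CRD field descriptions are dumped above is exercised by manifests along these lines. This is a minimal sketch, not taken from this run; the names example-snapshot, example-snapclass, example-pvc and the default namespace are placeholders:

  apiVersion: snapshot.storage.k8s.io/v1
  kind: VolumeSnapshot
  metadata:
    name: example-snapshot            # placeholder name, not from this run
    namespace: default
  spec:
    # Omitting volumeSnapshotClassName falls back to the default class for the
    # CSI driver, as described in the CRD text above.
    volumeSnapshotClassName: example-snapclass
    source:
      # Exactly one of persistentVolumeClaimName or volumeSnapshotContentName may be set.
      persistentVolumeClaimName: example-pvc

Once the snapshot is created, the status fields described above (readyToUse, creationTime, restoreSize) can be inspected with, for example: kubectl get volumesnapshot example-snapshot -o jsonpath='{.status.readyToUse}'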
[36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:269[0m [36mDistro debian doesn't support ntfs -- skipping[0m test/e2e/storage/framework/testsuite.go:127 [90m------------------------------[0m ... skipping 46 lines ... May 13 08:20:04.890: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.com8pl68] to have phase Bound May 13 08:20:05.002: INFO: PersistentVolumeClaim test.csi.azure.com8pl68 found but phase is Pending instead of Bound. May 13 08:20:07.111: INFO: PersistentVolumeClaim test.csi.azure.com8pl68 found but phase is Pending instead of Bound. May 13 08:20:09.220: INFO: PersistentVolumeClaim test.csi.azure.com8pl68 found and phase=Bound (4.329789081s) [1mSTEP[0m: Creating pod exec-volume-test-dynamicpv-7t9h [1mSTEP[0m: Creating a pod to test exec-volume-test May 13 08:20:09.560: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-7t9h" in namespace "volume-4983" to be "Succeeded or Failed" May 13 08:20:09.667: INFO: Pod "exec-volume-test-dynamicpv-7t9h": Phase="Pending", Reason="", readiness=false. Elapsed: 107.517651ms May 13 08:20:11.779: INFO: Pod "exec-volume-test-dynamicpv-7t9h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219153497s May 13 08:20:13.888: INFO: Pod "exec-volume-test-dynamicpv-7t9h": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328081656s May 13 08:20:15.999: INFO: Pod "exec-volume-test-dynamicpv-7t9h": Phase="Pending", Reason="", readiness=false. Elapsed: 6.439048125s May 13 08:20:18.107: INFO: Pod "exec-volume-test-dynamicpv-7t9h": Phase="Pending", Reason="", readiness=false. Elapsed: 8.54710071s May 13 08:20:20.215: INFO: Pod "exec-volume-test-dynamicpv-7t9h": Phase="Pending", Reason="", readiness=false. Elapsed: 10.655095558s May 13 08:20:22.323: INFO: Pod "exec-volume-test-dynamicpv-7t9h": Phase="Pending", Reason="", readiness=false. Elapsed: 12.763005789s May 13 08:20:24.431: INFO: Pod "exec-volume-test-dynamicpv-7t9h": Phase="Pending", Reason="", readiness=false. Elapsed: 14.871005693s May 13 08:20:26.539: INFO: Pod "exec-volume-test-dynamicpv-7t9h": Phase="Pending", Reason="", readiness=false. Elapsed: 16.978617635s May 13 08:20:28.647: INFO: Pod "exec-volume-test-dynamicpv-7t9h": Phase="Pending", Reason="", readiness=false. Elapsed: 19.087177799s May 13 08:20:30.756: INFO: Pod "exec-volume-test-dynamicpv-7t9h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.196007348s [1mSTEP[0m: Saw pod success May 13 08:20:30.756: INFO: Pod "exec-volume-test-dynamicpv-7t9h" satisfied condition "Succeeded or Failed" May 13 08:20:30.864: INFO: Trying to get logs from node k8s-agentpool1-42137015-vmss000000 pod exec-volume-test-dynamicpv-7t9h container exec-container-dynamicpv-7t9h: <nil> [1mSTEP[0m: delete the pod May 13 08:20:31.111: INFO: Waiting for pod exec-volume-test-dynamicpv-7t9h to disappear May 13 08:20:31.218: INFO: Pod exec-volume-test-dynamicpv-7t9h no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-dynamicpv-7t9h May 13 08:20:31.218: INFO: Deleting pod "exec-volume-test-dynamicpv-7t9h" in namespace "volume-4983" ... skipping 21 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext3)] volumes [90mtest/e2e/storage/framework/testsuite.go:50[0m should allow exec of files on the volume [90mtest/e2e/storage/testsuites/volumes.go:198[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume","total":33,"completed":1,"skipped":28,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] subPath[0m [1mshould support creating multiple subpath from same volumes [Slow][0m [37mtest/e2e/storage/testsuites/subpath.go:296[0m ... skipping 20 lines ... May 13 08:20:04.583: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comrb2lm] to have phase Bound May 13 08:20:04.690: INFO: PersistentVolumeClaim test.csi.azure.comrb2lm found but phase is Pending instead of Bound. May 13 08:20:06.799: INFO: PersistentVolumeClaim test.csi.azure.comrb2lm found but phase is Pending instead of Bound. May 13 08:20:08.907: INFO: PersistentVolumeClaim test.csi.azure.comrb2lm found and phase=Bound (4.323365631s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-m9kf [1mSTEP[0m: Creating a pod to test multi_subpath May 13 08:20:09.231: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-m9kf" in namespace "provisioning-8916" to be "Succeeded or Failed" May 13 08:20:09.346: INFO: Pod "pod-subpath-test-dynamicpv-m9kf": Phase="Pending", Reason="", readiness=false. Elapsed: 115.475481ms May 13 08:20:11.456: INFO: Pod "pod-subpath-test-dynamicpv-m9kf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225310509s May 13 08:20:13.565: INFO: Pod "pod-subpath-test-dynamicpv-m9kf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.334430191s May 13 08:20:15.673: INFO: Pod "pod-subpath-test-dynamicpv-m9kf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.442542857s May 13 08:20:17.783: INFO: Pod "pod-subpath-test-dynamicpv-m9kf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.551785435s May 13 08:20:19.890: INFO: Pod "pod-subpath-test-dynamicpv-m9kf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.659187152s May 13 08:20:21.998: INFO: Pod "pod-subpath-test-dynamicpv-m9kf": Phase="Pending", Reason="", readiness=false. Elapsed: 12.767156532s May 13 08:20:24.106: INFO: Pod "pod-subpath-test-dynamicpv-m9kf": Phase="Pending", Reason="", readiness=false. Elapsed: 14.874838379s May 13 08:20:26.215: INFO: Pod "pod-subpath-test-dynamicpv-m9kf": Phase="Pending", Reason="", readiness=false. Elapsed: 16.983862908s May 13 08:20:28.325: INFO: Pod "pod-subpath-test-dynamicpv-m9kf": Phase="Pending", Reason="", readiness=false. Elapsed: 19.093786734s May 13 08:20:30.439: INFO: Pod "pod-subpath-test-dynamicpv-m9kf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 21.207902579s [1mSTEP[0m: Saw pod success May 13 08:20:30.439: INFO: Pod "pod-subpath-test-dynamicpv-m9kf" satisfied condition "Succeeded or Failed" May 13 08:20:30.546: INFO: Trying to get logs from node k8s-agentpool1-42137015-vmss000002 pod pod-subpath-test-dynamicpv-m9kf container test-container-subpath-dynamicpv-m9kf: <nil> [1mSTEP[0m: delete the pod May 13 08:20:31.135: INFO: Waiting for pod pod-subpath-test-dynamicpv-m9kf to disappear May 13 08:20:31.245: INFO: Pod pod-subpath-test-dynamicpv-m9kf no longer exists [1mSTEP[0m: Deleting pod May 13 08:20:31.245: INFO: Deleting pod "pod-subpath-test-dynamicpv-m9kf" in namespace "provisioning-8916" ... skipping 23 lines ... [90mtest/e2e/storage/framework/testsuite.go:50[0m should support creating multiple subpath from same volumes [Slow] [90mtest/e2e/storage/testsuites/subpath.go:296[0m [90m------------------------------[0m [36mS[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]","total":34,"completed":1,"skipped":52,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (filesystem volmode)] volumeLimits[0m [1mshould verify that all csinodes have volume limits[0m [37mtest/e2e/storage/testsuites/volumelimits.go:249[0m ... skipping 16 lines ... test/e2e/framework/framework.go:188 May 13 08:21:14.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "volumelimits-1040" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits","total":33,"completed":2,"skipped":82,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand test/e2e/storage/framework/testsuite.go:51 May 13 08:21:14.429: INFO: Driver "test.csi.azure.com" does not support volume expansion - skipping ... skipping 55 lines ... [It] should check snapshot fields, check restore correctly works, check deletion (ephemeral) test/e2e/storage/testsuites/snapshottable.go:177 May 13 08:20:04.736: INFO: Creating resource for dynamic PV May 13 08:20:04.736: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(test.csi.azure.com) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass snapshotting-2132-e2e-scxh7tx [1mSTEP[0m: [init] starting a pod to use the claim May 13 08:20:04.970: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-4f2dg" in namespace "snapshotting-2132" to be "Succeeded or Failed" May 13 08:20:05.076: INFO: Pod "pvc-snapshottable-tester-4f2dg": Phase="Pending", Reason="", readiness=false. Elapsed: 106.604389ms May 13 08:20:07.186: INFO: Pod "pvc-snapshottable-tester-4f2dg": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.215664242s May 13 08:20:09.294: INFO: Pod "pvc-snapshottable-tester-4f2dg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324376599s May 13 08:20:11.402: INFO: Pod "pvc-snapshottable-tester-4f2dg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.432566518s May 13 08:20:13.511: INFO: Pod "pvc-snapshottable-tester-4f2dg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.541316443s May 13 08:20:15.620: INFO: Pod "pvc-snapshottable-tester-4f2dg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.650128162s ... skipping 7 lines ... May 13 08:20:32.504: INFO: Pod "pvc-snapshottable-tester-4f2dg": Phase="Pending", Reason="", readiness=false. Elapsed: 27.533790578s May 13 08:20:34.612: INFO: Pod "pvc-snapshottable-tester-4f2dg": Phase="Pending", Reason="", readiness=false. Elapsed: 29.641900566s May 13 08:20:36.719: INFO: Pod "pvc-snapshottable-tester-4f2dg": Phase="Pending", Reason="", readiness=false. Elapsed: 31.749165479s May 13 08:20:38.827: INFO: Pod "pvc-snapshottable-tester-4f2dg": Phase="Pending", Reason="", readiness=false. Elapsed: 33.857261413s May 13 08:20:40.940: INFO: Pod "pvc-snapshottable-tester-4f2dg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.970073005s [1mSTEP[0m: Saw pod success May 13 08:20:40.940: INFO: Pod "pvc-snapshottable-tester-4f2dg" satisfied condition "Succeeded or Failed" [1mSTEP[0m: [init] checking the claim [1mSTEP[0m: creating a SnapshotClass [1mSTEP[0m: creating a dynamic VolumeSnapshot May 13 08:20:41.377: INFO: Waiting up to 5m0s for VolumeSnapshot snapshot-m2nh8 to become ready May 13 08:20:41.485: INFO: VolumeSnapshot snapshot-m2nh8 found but is not ready. May 13 08:20:43.593: INFO: VolumeSnapshot snapshot-m2nh8 found but is not ready. ... skipping 49 lines ... [90mtest/e2e/storage/testsuites/snapshottable.go:113[0m [90mtest/e2e/storage/testsuites/snapshottable.go:176[0m should check snapshot fields, check restore correctly works, check deletion (ephemeral) [90mtest/e2e/storage/testsuites/snapshottable.go:177[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)","total":28,"completed":1,"skipped":60,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow][0m [1mshould access to two volumes with different volume mode and retain data across pod recreation on the same node[0m [37mtest/e2e/storage/testsuites/multivolume.go:209[0m ... skipping 200 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with different volume mode and retain data across pod recreation on the same node [90mtest/e2e/storage/testsuites/multivolume.go:209[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node","total":38,"completed":1,"skipped":42,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource][0m [0mvolume snapshot controller[0m [90m[0m [1mshould check snapshot fields, check restore correctly works, check deletion (ephemeral)[0m [37mtest/e2e/storage/testsuites/snapshottable.go:177[0m ... skipping 13 lines ... [It] should check snapshot fields, check restore correctly works, check deletion (ephemeral) test/e2e/storage/testsuites/snapshottable.go:177 May 13 08:20:04.714: INFO: Creating resource for dynamic PV May 13 08:20:04.714: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(test.csi.azure.com) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass snapshotting-1442-e2e-scv4t6g [1mSTEP[0m: [init] starting a pod to use the claim May 13 08:20:04.945: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-t6cqf" in namespace "snapshotting-1442" to be "Succeeded or Failed" May 13 08:20:05.052: INFO: Pod "pvc-snapshottable-tester-t6cqf": Phase="Pending", Reason="", readiness=false. Elapsed: 106.997551ms May 13 08:20:07.164: INFO: Pod "pvc-snapshottable-tester-t6cqf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219460327s May 13 08:20:09.276: INFO: Pod "pvc-snapshottable-tester-t6cqf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331125496s May 13 08:20:11.384: INFO: Pod "pvc-snapshottable-tester-t6cqf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.439551069s May 13 08:20:13.492: INFO: Pod "pvc-snapshottable-tester-t6cqf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.547573503s May 13 08:20:15.601: INFO: Pod "pvc-snapshottable-tester-t6cqf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.656001484s May 13 08:20:17.709: INFO: Pod "pvc-snapshottable-tester-t6cqf": Phase="Pending", Reason="", readiness=false. Elapsed: 12.764174995s May 13 08:20:19.818: INFO: Pod "pvc-snapshottable-tester-t6cqf": Phase="Pending", Reason="", readiness=false. Elapsed: 14.872798612s May 13 08:20:21.928: INFO: Pod "pvc-snapshottable-tester-t6cqf": Phase="Pending", Reason="", readiness=false. Elapsed: 16.982705872s May 13 08:20:24.036: INFO: Pod "pvc-snapshottable-tester-t6cqf": Phase="Pending", Reason="", readiness=false. Elapsed: 19.091164815s May 13 08:20:26.145: INFO: Pod "pvc-snapshottable-tester-t6cqf": Phase="Pending", Reason="", readiness=false. Elapsed: 21.200578549s May 13 08:20:28.254: INFO: Pod "pvc-snapshottable-tester-t6cqf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 23.309626199s [1mSTEP[0m: Saw pod success May 13 08:20:28.255: INFO: Pod "pvc-snapshottable-tester-t6cqf" satisfied condition "Succeeded or Failed" [1mSTEP[0m: [init] checking the claim [1mSTEP[0m: creating a SnapshotClass [1mSTEP[0m: creating a dynamic VolumeSnapshot May 13 08:20:28.690: INFO: Waiting up to 5m0s for VolumeSnapshot snapshot-kdk9j to become ready May 13 08:20:28.800: INFO: VolumeSnapshot snapshot-kdk9j found but is not ready. May 13 08:20:30.909: INFO: VolumeSnapshot snapshot-kdk9j found but is not ready. ... skipping 40 lines ... May 13 08:21:56.822: INFO: volumesnapshotcontents snapcontent-ab986354-0962-4646-a61f-b66e57446c04 has been found and is not deleted May 13 08:21:57.930: INFO: volumesnapshotcontents snapcontent-ab986354-0962-4646-a61f-b66e57446c04 has been found and is not deleted May 13 08:21:59.038: INFO: volumesnapshotcontents snapcontent-ab986354-0962-4646-a61f-b66e57446c04 has been found and is not deleted May 13 08:22:00.147: INFO: volumesnapshotcontents snapcontent-ab986354-0962-4646-a61f-b66e57446c04 has been found and is not deleted May 13 08:22:01.255: INFO: volumesnapshotcontents snapcontent-ab986354-0962-4646-a61f-b66e57446c04 has been found and is not deleted May 13 08:22:02.363: INFO: volumesnapshotcontents snapcontent-ab986354-0962-4646-a61f-b66e57446c04 has been found and is not deleted May 13 08:22:03.363: INFO: WaitUntil failed after reaching the timeout 30s [AfterEach] volume snapshot controller test/e2e/storage/testsuites/snapshottable.go:172 May 13 08:22:03.495: INFO: Pod restored-pvc-tester-hpdrx has the following logs: May 13 08:22:03.495: INFO: Deleting pod "restored-pvc-tester-hpdrx" in namespace "snapshotting-1442" May 13 08:22:03.604: INFO: Wait up to 5m0s for pod "restored-pvc-tester-hpdrx" to be fully deleted May 13 08:22:35.819: INFO: deleting snapshot "snapshotting-1442"/"snapshot-kdk9j" ... skipping 26 lines ... [90mtest/e2e/storage/testsuites/snapshottable.go:113[0m [90mtest/e2e/storage/testsuites/snapshottable.go:176[0m should check snapshot fields, check restore correctly works, check deletion (ephemeral) [90mtest/e2e/storage/testsuites/snapshottable.go:177[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)","total":32,"completed":1,"skipped":34,"failed":0} [BeforeEach] [Testpattern: Inline-volume (xfs)][Slow] volumes test/e2e/storage/framework/testsuite.go:51 May 13 08:22:43.682: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping [AfterEach] [Testpattern: Inline-volume (xfs)][Slow] volumes test/e2e/framework/framework.go:188 ... skipping 156 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext4)] volumes [90mtest/e2e/storage/framework/testsuite.go:50[0m should store data [90mtest/e2e/storage/testsuites/volumes.go:161[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] volumes should store data","total":34,"completed":1,"skipped":161,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes test/e2e/storage/framework/testsuite.go:51 May 13 08:24:26.346: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 13 lines ... test/e2e/storage/external/external.go:262 [90m------------------------------[0m [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] subPath[0m [1mshould fail if subpath directory is outside the volume [Slow][LinuxOnly][0m [37mtest/e2e/storage/testsuites/subpath.go:242[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client May 13 08:22:39.844: INFO: >>> kubeConfig: /root/tmp4042645124/kubeconfig/kubeconfig.westeurope.json [1mSTEP[0m: Building a namespace api object, basename provisioning [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail if subpath directory is outside the volume [Slow][LinuxOnly] test/e2e/storage/testsuites/subpath.go:242 May 13 08:22:40.600: INFO: Creating resource for dynamic PV May 13 08:22:40.600: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(test.csi.azure.com) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass provisioning-5900-e2e-sclb2zq [1mSTEP[0m: creating a claim May 13 08:22:40.708: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil May 13 08:22:40.818: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comdsnlj] to have phase Bound May 13 08:22:40.926: INFO: PersistentVolumeClaim test.csi.azure.comdsnlj found but phase is Pending instead of Bound. May 13 08:22:43.037: INFO: PersistentVolumeClaim test.csi.azure.comdsnlj found but phase is Pending instead of Bound. 
May 13 08:22:45.169: INFO: PersistentVolumeClaim test.csi.azure.comdsnlj found and phase=Bound (4.350584731s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-xxtx [1mSTEP[0m: Checking for subpath error in container status May 13 08:23:51.714: INFO: Deleting pod "pod-subpath-test-dynamicpv-xxtx" in namespace "provisioning-5900" May 13 08:23:51.824: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-xxtx" to be fully deleted [1mSTEP[0m: Deleting pod May 13 08:23:54.042: INFO: Deleting pod "pod-subpath-test-dynamicpv-xxtx" in namespace "provisioning-5900" [1mSTEP[0m: Deleting pvc May 13 08:23:54.150: INFO: Deleting PersistentVolumeClaim "test.csi.azure.comdsnlj" ... skipping 22 lines ... [32m• [SLOW TEST:146.602 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should fail if subpath directory is outside the volume [Slow][LinuxOnly] [90mtest/e2e/storage/testsuites/subpath.go:242[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]","total":38,"completed":2,"skipped":78,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning ... skipping 155 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS] [90mtest/e2e/storage/testsuites/multivolume.go:323[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]","total":38,"completed":1,"skipped":79,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes test/e2e/storage/framework/testsuite.go:51 May 13 08:25:08.252: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 141 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] provisioning [90mtest/e2e/storage/framework/testsuite.go:50[0m should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource] [90mtest/e2e/storage/testsuites/provisioning.go:208[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]","total":28,"completed":2,"skipped":101,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] ... skipping 240 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Inline-volume (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:269[0m [36mDriver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping[0m test/e2e/storage/external/external.go:262 [90m------------------------------[0m ... skipping 112 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral [90mtest/e2e/storage/framework/testsuite.go:50[0m should support two pods which have the same volume definition [90mtest/e2e/storage/testsuites/ephemeral.go:216[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition","total":34,"completed":2,"skipped":64,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-stress test/e2e/storage/framework/testsuite.go:51 May 13 08:26:29.489: INFO: Driver test.csi.azure.com doesn't specify stress test options -- skipping ... skipping 92 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should support restarting containers using file as subpath [Slow][LinuxOnly] [90mtest/e2e/storage/testsuites/subpath.go:333[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]","total":32,"completed":2,"skipped":115,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 May 13 08:26:41.535: INFO: Distro debian doesn't support ntfs -- skipping ... skipping 24 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:280[0m [36mDriver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping[0m test/e2e/storage/external/external.go:262 [90m------------------------------[0m ... skipping 213 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with the same volume mode and retain data across pod recreation on the same node [90mtest/e2e/storage/testsuites/multivolume.go:138[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node","total":34,"completed":2,"skipped":214,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy[0m [1m(OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents[0m [37mtest/e2e/storage/testsuites/fsgroupchangepolicy.go:216[0m ... skipping 97 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy [90mtest/e2e/storage/framework/testsuite.go:50[0m (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents [90mtest/e2e/storage/testsuites/fsgroupchangepolicy.go:216[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents","total":38,"completed":2,"skipped":135,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes test/e2e/storage/framework/testsuite.go:51 May 13 08:28:25.248: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping ... skipping 220 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the single volume from pods on the same node [90mtest/e2e/storage/testsuites/multivolume.go:298[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node","total":28,"completed":3,"skipped":721,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning test/e2e/storage/framework/testsuite.go:51 May 13 08:28:50.604: INFO: Distro debian doesn't support ntfs -- skipping ... skipping 24 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Inline-volume (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:258[0m [36mDriver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping[0m test/e2e/storage/external/external.go:262 [90m------------------------------[0m ... skipping 63 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits [90mtest/e2e/storage/framework/testsuite.go:50[0m should support volume limits [Serial] [90mtest/e2e/storage/testsuites/volumelimits.go:127[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]","total":33,"completed":3,"skipped":253,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath test/e2e/storage/framework/testsuite.go:51 May 13 08:29:02.902: INFO: Distro debian doesn't support ntfs -- skipping ... skipping 101 lines ... May 13 08:28:06.336: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.commc8lg] to have phase Bound May 13 08:28:06.444: INFO: PersistentVolumeClaim test.csi.azure.commc8lg found but phase is Pending instead of Bound. 
May 13 08:28:08.553: INFO: PersistentVolumeClaim test.csi.azure.commc8lg found but phase is Pending instead of Bound. May 13 08:28:10.661: INFO: PersistentVolumeClaim test.csi.azure.commc8lg found and phase=Bound (4.325384474s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-2z96 [1mSTEP[0m: Creating a pod to test subpath May 13 08:28:10.999: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-2z96" in namespace "provisioning-8317" to be "Succeeded or Failed" May 13 08:28:11.107: INFO: Pod "pod-subpath-test-dynamicpv-2z96": Phase="Pending", Reason="", readiness=false. Elapsed: 108.161989ms May 13 08:28:13.215: INFO: Pod "pod-subpath-test-dynamicpv-2z96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216595834s May 13 08:28:15.332: INFO: Pod "pod-subpath-test-dynamicpv-2z96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.332981898s May 13 08:28:17.441: INFO: Pod "pod-subpath-test-dynamicpv-2z96": Phase="Pending", Reason="", readiness=false. Elapsed: 6.44253601s May 13 08:28:19.551: INFO: Pod "pod-subpath-test-dynamicpv-2z96": Phase="Pending", Reason="", readiness=false. Elapsed: 8.552483776s May 13 08:28:21.662: INFO: Pod "pod-subpath-test-dynamicpv-2z96": Phase="Pending", Reason="", readiness=false. Elapsed: 10.663466848s ... skipping 16 lines ... May 13 08:28:57.540: INFO: Pod "pod-subpath-test-dynamicpv-2z96": Phase="Pending", Reason="", readiness=false. Elapsed: 46.541367272s May 13 08:28:59.650: INFO: Pod "pod-subpath-test-dynamicpv-2z96": Phase="Pending", Reason="", readiness=false. Elapsed: 48.651328745s May 13 08:29:01.759: INFO: Pod "pod-subpath-test-dynamicpv-2z96": Phase="Pending", Reason="", readiness=false. Elapsed: 50.760682784s May 13 08:29:03.874: INFO: Pod "pod-subpath-test-dynamicpv-2z96": Phase="Pending", Reason="", readiness=false. Elapsed: 52.875756516s May 13 08:29:05.984: INFO: Pod "pod-subpath-test-dynamicpv-2z96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 54.985822415s [1mSTEP[0m: Saw pod success May 13 08:29:05.984: INFO: Pod "pod-subpath-test-dynamicpv-2z96" satisfied condition "Succeeded or Failed" May 13 08:29:06.094: INFO: Trying to get logs from node k8s-agentpool1-42137015-vmss000002 pod pod-subpath-test-dynamicpv-2z96 container test-container-volume-dynamicpv-2z96: <nil> [1mSTEP[0m: delete the pod May 13 08:29:06.361: INFO: Waiting for pod pod-subpath-test-dynamicpv-2z96 to disappear May 13 08:29:06.469: INFO: Pod pod-subpath-test-dynamicpv-2z96 no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-2z96 May 13 08:29:06.469: INFO: Deleting pod "pod-subpath-test-dynamicpv-2z96" in namespace "provisioning-8317" ... skipping 23 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should support non-existent path [90mtest/e2e/storage/testsuites/subpath.go:196[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":34,"completed":3,"skipped":243,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 May 13 08:29:48.337: INFO: Distro debian doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] test/e2e/framework/framework.go:188 ... skipping 226 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with the same volume mode and retain data across pod recreation on different node [90mtest/e2e/storage/testsuites/multivolume.go:168[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node","total":38,"completed":3,"skipped":128,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes test/e2e/storage/framework/testsuite.go:51 May 13 08:30:17.383: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping ... skipping 65 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral [90mtest/e2e/storage/framework/testsuite.go:50[0m should support multiple inline ephemeral volumes [90mtest/e2e/storage/testsuites/ephemeral.go:254[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":38,"completed":3,"skipped":314,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (xfs)][Slow] volumes[0m [1mshould allow exec of files on the volume[0m [37mtest/e2e/storage/testsuites/volumes.go:198[0m ... skipping 17 lines ... May 13 08:29:49.358: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.commbq85] to have phase Bound May 13 08:29:49.467: INFO: PersistentVolumeClaim test.csi.azure.commbq85 found but phase is Pending instead of Bound. May 13 08:29:51.577: INFO: PersistentVolumeClaim test.csi.azure.commbq85 found but phase is Pending instead of Bound. May 13 08:29:53.687: INFO: PersistentVolumeClaim test.csi.azure.commbq85 found and phase=Bound (4.329487154s) [1mSTEP[0m: Creating pod exec-volume-test-dynamicpv-5nmd [1mSTEP[0m: Creating a pod to test exec-volume-test May 13 08:29:54.016: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-5nmd" in namespace "volume-9091" to be "Succeeded or Failed" May 13 08:29:54.127: INFO: Pod "exec-volume-test-dynamicpv-5nmd": Phase="Pending", Reason="", readiness=false. Elapsed: 110.781741ms May 13 08:29:56.237: INFO: Pod "exec-volume-test-dynamicpv-5nmd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22080024s May 13 08:29:58.348: INFO: Pod "exec-volume-test-dynamicpv-5nmd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.332265383s May 13 08:30:00.458: INFO: Pod "exec-volume-test-dynamicpv-5nmd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.442288041s May 13 08:30:02.569: INFO: Pod "exec-volume-test-dynamicpv-5nmd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.55323853s May 13 08:30:04.679: INFO: Pod "exec-volume-test-dynamicpv-5nmd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.662947631s May 13 08:30:06.790: INFO: Pod "exec-volume-test-dynamicpv-5nmd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.773717965s May 13 08:30:08.900: INFO: Pod "exec-volume-test-dynamicpv-5nmd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.88360254s May 13 08:30:11.010: INFO: Pod "exec-volume-test-dynamicpv-5nmd": Phase="Pending", Reason="", readiness=false. Elapsed: 16.993652747s May 13 08:30:13.121: INFO: Pod "exec-volume-test-dynamicpv-5nmd": Phase="Pending", Reason="", readiness=false. Elapsed: 19.105017095s May 13 08:30:15.231: INFO: Pod "exec-volume-test-dynamicpv-5nmd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.214871604s [1mSTEP[0m: Saw pod success May 13 08:30:15.231: INFO: Pod "exec-volume-test-dynamicpv-5nmd" satisfied condition "Succeeded or Failed" May 13 08:30:15.340: INFO: Trying to get logs from node k8s-agentpool1-42137015-vmss000001 pod exec-volume-test-dynamicpv-5nmd container exec-container-dynamicpv-5nmd: <nil> [1mSTEP[0m: delete the pod May 13 08:30:15.571: INFO: Waiting for pod exec-volume-test-dynamicpv-5nmd to disappear May 13 08:30:15.680: INFO: Pod exec-volume-test-dynamicpv-5nmd no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-dynamicpv-5nmd May 13 08:30:15.680: INFO: Deleting pod "exec-volume-test-dynamicpv-5nmd" in namespace "volume-9091" ... skipping 27 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (xfs)][Slow] volumes [90mtest/e2e/storage/framework/testsuite.go:50[0m should allow exec of files on the volume [90mtest/e2e/storage/testsuites/volumes.go:198[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume","total":34,"completed":4,"skipped":282,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client May 13 08:31:28.130: INFO: >>> kubeConfig: /root/tmp4042645124/kubeconfig/kubeconfig.westeurope.json [1mSTEP[0m: Building a namespace api object, basename topology [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies test/e2e/storage/testsuites/topology.go:194 May 13 08:31:28.895: INFO: Driver didn't provide topology keys -- skipping [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology test/e2e/framework/framework.go:188 May 13 08:31:28.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "topology-7728" for this suite. 
[36m[1mS [SKIPPING] [0.990 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (immediate binding)] topology [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [Measurement][0m [90mtest/e2e/storage/testsuites/topology.go:194[0m [36mDriver didn't provide topology keys -- skipping[0m test/e2e/storage/testsuites/topology.go:126 [90m------------------------------[0m ... skipping 86 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS] [90mtest/e2e/storage/testsuites/multivolume.go:378[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]","total":32,"completed":3,"skipped":212,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 May 13 08:31:33.953: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 402 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] provisioning [90mtest/e2e/storage/framework/testsuite.go:50[0m should provision storage with pvc data source in parallel [Slow] [90mtest/e2e/storage/testsuites/provisioning.go:459[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]","total":34,"completed":3,"skipped":73,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] ... skipping 116 lines ... 
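The subPath and exec-volume specs above all hinge on the same wait: the harness polls the test pod's status.phase roughly every two seconds until it reaches the terminal "Succeeded or Failed" condition, printing the Elapsed lines seen in the log. A minimal sketch of such a loop, assuming client-go (names are illustrative, not the framework's own code):

```go
// Hypothetical sketch only; not the e2e framework's real wait helper.
package e2esketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodSucceededOrFailed reads status.phase until the pod is terminal and
// returns the final phase so the caller can assert on "Succeeded".
func waitForPodSucceededOrFailed(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) (corev1.PodPhase, error) {
	var phase corev1.PodPhase
	start := time.Now()
	err := wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		phase = pod.Status.Phase
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %v\n", name, phase, time.Since(start))
		return phase == corev1.PodSucceeded || phase == corev1.PodFailed, nil
	})
	return phase, err
}
```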
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral [90mtest/e2e/storage/framework/testsuite.go:50[0m should support multiple inline ephemeral volumes [90mtest/e2e/storage/testsuites/ephemeral.go:254[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":33,"completed":4,"skipped":308,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] ... skipping 400 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] provisioning [90mtest/e2e/storage/framework/testsuite.go:50[0m should provision storage with pvc data source in parallel [Slow] [90mtest/e2e/storage/testsuites/provisioning.go:459[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]","total":28,"completed":4,"skipped":823,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (block volmode)] volumeMode[0m [1mshould not mount / map unused volumes in a pod [LinuxOnly][0m [37mtest/e2e/storage/testsuites/volumemode.go:354[0m ... skipping 81 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] volumeMode [90mtest/e2e/storage/framework/testsuite.go:50[0m should not mount / map unused volumes in a pod [LinuxOnly] [90mtest/e2e/storage/testsuites/volumemode.go:354[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":32,"completed":4,"skipped":275,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 May 13 08:34:03.214: INFO: Distro debian doesn't support ntfs -- skipping ... skipping 38 lines ... May 13 08:31:38.249: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comdjnkp] to have phase Bound May 13 08:31:38.356: INFO: PersistentVolumeClaim test.csi.azure.comdjnkp found but phase is Pending instead of Bound. May 13 08:31:40.464: INFO: PersistentVolumeClaim test.csi.azure.comdjnkp found but phase is Pending instead of Bound. May 13 08:31:42.571: INFO: PersistentVolumeClaim test.csi.azure.comdjnkp found and phase=Bound (4.321295352s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-rjcj [1mSTEP[0m: Creating a pod to test subpath May 13 08:31:42.896: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-rjcj" in namespace "provisioning-1193" to be "Succeeded or Failed" May 13 08:31:43.002: INFO: Pod "pod-subpath-test-dynamicpv-rjcj": Phase="Pending", Reason="", readiness=false. 
Elapsed: 106.687708ms May 13 08:31:45.110: INFO: Pod "pod-subpath-test-dynamicpv-rjcj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214823166s May 13 08:31:47.218: INFO: Pod "pod-subpath-test-dynamicpv-rjcj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32248551s May 13 08:31:49.328: INFO: Pod "pod-subpath-test-dynamicpv-rjcj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.432002585s May 13 08:31:51.436: INFO: Pod "pod-subpath-test-dynamicpv-rjcj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.540458561s May 13 08:31:53.544: INFO: Pod "pod-subpath-test-dynamicpv-rjcj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.648227554s ... skipping 9 lines ... May 13 08:32:14.636: INFO: Pod "pod-subpath-test-dynamicpv-rjcj": Phase="Pending", Reason="", readiness=false. Elapsed: 31.740224605s May 13 08:32:16.744: INFO: Pod "pod-subpath-test-dynamicpv-rjcj": Phase="Pending", Reason="", readiness=false. Elapsed: 33.848695485s May 13 08:32:18.851: INFO: Pod "pod-subpath-test-dynamicpv-rjcj": Phase="Pending", Reason="", readiness=false. Elapsed: 35.955923802s May 13 08:32:20.964: INFO: Pod "pod-subpath-test-dynamicpv-rjcj": Phase="Pending", Reason="", readiness=false. Elapsed: 38.068010049s May 13 08:32:23.073: INFO: Pod "pod-subpath-test-dynamicpv-rjcj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.177134797s [1mSTEP[0m: Saw pod success May 13 08:32:23.073: INFO: Pod "pod-subpath-test-dynamicpv-rjcj" satisfied condition "Succeeded or Failed" May 13 08:32:23.180: INFO: Trying to get logs from node k8s-agentpool1-42137015-vmss000001 pod pod-subpath-test-dynamicpv-rjcj container test-container-volume-dynamicpv-rjcj: <nil> [1mSTEP[0m: delete the pod May 13 08:32:23.451: INFO: Waiting for pod pod-subpath-test-dynamicpv-rjcj to disappear May 13 08:32:23.558: INFO: Pod pod-subpath-test-dynamicpv-rjcj no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-rjcj May 13 08:32:23.558: INFO: Deleting pod "pod-subpath-test-dynamicpv-rjcj" in namespace "provisioning-1193" ... skipping 41 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should support existing directory [90mtest/e2e/storage/testsuites/subpath.go:207[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":34,"completed":4,"skipped":157,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 May 13 08:34:37.355: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 87 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Inline-volume (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:280[0m [36mDriver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping[0m test/e2e/storage/external/external.go:262 [90m------------------------------[0m ... skipping 131 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy [90mtest/e2e/storage/framework/testsuite.go:50[0m (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents [90mtest/e2e/storage/testsuites/fsgroupchangepolicy.go:216[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents","total":33,"completed":5,"skipped":324,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO test/e2e/storage/framework/testsuite.go:51 May 13 08:36:23.565: INFO: Distro debian doesn't support ntfs -- skipping ... skipping 215 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with different volume mode and retain data across pod recreation on different node [90mtest/e2e/storage/testsuites/multivolume.go:248[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node","total":38,"completed":4,"skipped":357,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral[0m [1mshould create read/write inline ephemeral volume[0m [37mtest/e2e/storage/testsuites/ephemeral.go:196[0m ... skipping 52 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral [90mtest/e2e/storage/framework/testsuite.go:50[0m should create read/write inline ephemeral volume [90mtest/e2e/storage/testsuites/ephemeral.go:196[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume","total":28,"completed":5,"skipped":828,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (ext4)] multiVolume [Slow][0m [1mshould concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS][0m [37mtest/e2e/storage/testsuites/multivolume.go:323[0m ... skipping 108 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS] [90mtest/e2e/storage/testsuites/multivolume.go:323[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]","total":38,"completed":4,"skipped":211,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumeIO test/e2e/storage/framework/testsuite.go:51 May 13 08:36:43.602: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 126 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] provisioning [90mtest/e2e/storage/framework/testsuite.go:50[0m should provision storage with pvc data source [90mtest/e2e/storage/testsuites/provisioning.go:421[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source","total":32,"completed":5,"skipped":358,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow][0m [1mshould access to two volumes with the same volume mode and retain data across pod recreation on the same node[0m [37mtest/e2e/storage/testsuites/multivolume.go:138[0m ... skipping 224 lines ... May 13 08:37:54.122: INFO: Deleting PersistentVolumeClaim "test.csi.azure.commpljd" May 13 08:37:54.233: INFO: Waiting up to 5m0s for PersistentVolume pvc-56b0b78b-d5da-400e-9089-8852f4254994 to get deleted May 13 08:37:54.340: INFO: PersistentVolume pvc-56b0b78b-d5da-400e-9089-8852f4254994 found and phase=Released (107.632884ms) May 13 08:37:59.452: INFO: PersistentVolume pvc-56b0b78b-d5da-400e-9089-8852f4254994 found and phase=Released (5.219257144s) May 13 08:38:04.560: INFO: PersistentVolume pvc-56b0b78b-d5da-400e-9089-8852f4254994 was removed [1mSTEP[0m: Deleting sc May 13 08:38:04.669: FAIL: while cleanup resource Unexpected error: <errors.aggregate | len:1, cap:1>: [ [ { msg: "persistent Volume pvc-3454e852-3c34-4a81-a7f6-b0cedb788239 not deleted by dynamic provisioner: PersistentVolume pvc-3454e852-3c34-4a81-a7f6-b0cedb788239 still exists within 5m0s", err: { s: "PersistentVolume pvc-3454e852-3c34-4a81-a7f6-b0cedb788239 still exists within 5m0s", ... skipping 38 lines ... 
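The failure above happens in cleanup, not in the test body: after the pods, PVCs, and StorageClass are deleted, the harness waits up to 5m0s for each dynamically provisioned PersistentVolume to be removed, and pvc-3454e852-3c34-4a81-a7f6-b0cedb788239 never disappears. A hedged sketch of that deletion wait, assuming client-go (the helper name and poll interval are illustrative only):

```go
// Hypothetical sketch only; the real cleanup path lives in the e2e framework.
package e2esketch

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPVDeleted returns nil once the PersistentVolume object is gone. If the
// volume still exists when the timeout expires, the wait error is what surfaces
// as the "not deleted by dynamic provisioner ... still exists within 5m0s" failure.
func waitForPVDeleted(ctx context.Context, cs kubernetes.Interface, pvName string, timeout time.Duration) error {
	return wait.PollImmediate(5*time.Second, timeout, func() (bool, error) {
		pv, err := cs.CoreV1().PersistentVolumes().Get(ctx, pvName, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // volume removed, cleanup can proceed
		}
		if err != nil {
			return false, err
		}
		fmt.Printf("PersistentVolume %s found and phase=%s\n", pvName, pv.Status.Phase)
		return false, nil
	})
}
```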
May 13 08:38:04.885: INFO: At 2022-05-13 08:32:25 +0000 UTC - event for pod-fd13b4a4-69f1-424f-a0e1-1405c8c29a96: {kubelet k8s-agentpool1-42137015-vmss000002} Killing: Stopping container write-pod May 13 08:38:04.885: INFO: At 2022-05-13 08:32:33 +0000 UTC - event for pod-f72e62a6-1738-41d2-a7ca-1b9b599971f4: {kubelet k8s-agentpool1-42137015-vmss000002} Created: Created container write-pod May 13 08:38:04.885: INFO: At 2022-05-13 08:32:33 +0000 UTC - event for pod-f72e62a6-1738-41d2-a7ca-1b9b599971f4: {kubelet k8s-agentpool1-42137015-vmss000002} Started: Started container write-pod May 13 08:38:04.885: INFO: At 2022-05-13 08:32:33 +0000 UTC - event for pod-f72e62a6-1738-41d2-a7ca-1b9b599971f4: {kubelet k8s-agentpool1-42137015-vmss000002} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" in 335.046885ms May 13 08:38:04.885: INFO: At 2022-05-13 08:32:33 +0000 UTC - event for pod-f72e62a6-1738-41d2-a7ca-1b9b599971f4: {kubelet k8s-agentpool1-42137015-vmss000002} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" May 13 08:38:04.885: INFO: At 2022-05-13 08:32:50 +0000 UTC - event for pod-f72e62a6-1738-41d2-a7ca-1b9b599971f4: {kubelet k8s-agentpool1-42137015-vmss000002} Killing: Stopping container write-pod May 13 08:38:04.885: INFO: At 2022-05-13 08:32:52 +0000 UTC - event for pod-f72e62a6-1738-41d2-a7ca-1b9b599971f4: {kubelet k8s-agentpool1-42137015-vmss000002} FailedKillPod: error killing pod: failed to "KillContainer" for "write-pod" with KillContainerError: "rpc error: code = Unknown desc = Error response from daemon: No such container: 9f2bab3f0d8589a0ac561f6fbcd9f54aaaf30c1a6aa59d7df4d733ca27858a3a" May 13 08:38:04.996: INFO: POD NODE PHASE GRACE CONDITIONS May 13 08:38:04.996: INFO: May 13 08:38:05.211: INFO: Logging node info for node k8s-agentpool1-42137015-vmss000000 May 13 08:38:05.319: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool1-42137015-vmss000000 c03aa51e-dc68-4a00-a6cc-ffa926612b37 11809 0 2022-05-13 08:08:11 +0000 UTC <nil> <nil> map[agentpool:agentpool1 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D8s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westeurope failure-domain.beta.kubernetes.io/zone:0 kubernetes.azure.com/cluster:kubetest-mfxpbga4 kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool1-42137015-vmss000000 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: node.kubernetes.io/instance-type:Standard_D8s_v3 storageprofile:managed storagetier:Premium_LRS topology.kubernetes.io/region:westeurope topology.kubernetes.io/zone:0 topology.test.csi.azure.com/zone:] map[csi.volume.kubernetes.io/nodeid:{"test.csi.azure.com":"k8s-agentpool1-42137015-vmss000000"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-05-13 08:08:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:agentpool":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.azure.com/cluster":{},"f:kubernetes.azure.com/role":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:storageprofile":{},"f:storagetier":{}}}} } {kubectl-label Update v1 2022-05-13 08:08:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/agent":{}}}} } {kube-controller-manager Update v1 2022-05-13 08:08:28 
+0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {cloud-controller-manager Update v1 2022-05-13 08:09:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {cloud-node-manager Update v1 2022-05-13 08:09:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-05-13 08:37:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.test.csi.azure.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-42137015-vmss/virtualMachines/0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{31036686336 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{33672699904 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{27933017657 0} {<nil>} 27933017657 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{32886267904 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 08:09:33 +0000 UTC,LastTransitionTime:2022-05-13 08:09:33 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 08:37:59 +0000 UTC,LastTransitionTime:2022-05-13 08:08:11 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 08:37:59 +0000 UTC,LastTransitionTime:2022-05-13 08:08:11 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 08:37:59 +0000 UTC,LastTransitionTime:2022-05-13 08:08:11 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 08:37:59 +0000 UTC,LastTransitionTime:2022-05-13 08:08:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.4,},NodeAddress{Type:Hostname,Address:k8s-agentpool1-42137015-vmss000000,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e87296a86ffe4d84931214c2f6c7f313,SystemUUID:60d69116-6d57-4c4d-9109-e24087d01ef8,BootID:f71395b9-539b-455a-97ed-db5b08355b9d,KernelVersion:5.4.0-1074-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:docker://20.10.11+azure-3,KubeletVersion:v1.23.6,KubeProxyVersion:v1.23.6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:253346057,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi@sha256:423eb6cf602c064c8b2deefead5ceadd6324ed41b3d995dab5d0f6f0f4d4710f mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.10.0],SizeBytes:245959792,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/azurefile-csi@sha256:9e2ecabcf9dd9943e6600eb9fb460f45b4dc61af7cabe95d115082a029db2aaf mcr.microsoft.com/oss/kubernetes-csi/azurefile-csi:v1.9.0],SizeBytes:230470852,},ContainerImage{Names:[k8sprow.azurecr.io/azuredisk-csi@sha256:ff2d389a206cbae509b50d968a9f033a880028e550b78f6d3d0434e4ba63de64 k8sprow.azurecr.io/azuredisk-csi:v1.19.0-9480cc27b0ee3e0de9a15e6967f197e793523987],SizeBytes:220746716,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/azure-npm@sha256:106f669f48e5e80c4ec0afb49858ead72cf4b901cd8664e7bf81f8d789e56e12 mcr.microsoft.com/containernetworking/azure-npm:v1.2.2_hotfix],SizeBytes:175230380,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/secrets-store/driver@sha256:c0d040a1c4fbfceb65663e31c09ea40f4f78e356437610cbc3fbb4bb409bd6f1 mcr.microsoft.com/oss/kubernetes-csi/secrets-store/driver:v0.0.19],SizeBytes:123229697,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/kube-proxy@sha256:f1ed19f70c6ce21088706fd879ae59f3651806dacb8c2fe971a8f717e13118d6 mcr.microsoft.com/oss/kubernetes/kube-proxy:v1.23.6],SizeBytes:112316864,},ContainerImage{Names:[mcr.microsoft.com/oss/azure/secrets-store/provider-azure@sha256:6f67f3d0c7cdde5702f8ce7f101b6519daa0237f0c34fecb7c058b6af8c22ad1 mcr.microsoft.com/oss/azure/secrets-store/provider-azure:0.0.12],SizeBytes:101061355,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/autoscaler/cluster-autoscaler@sha256:6f0c680d375c62e74351f8ff3ed6ddb9b72ca759e0645c329b95f64264654a6d mcr.microsoft.com/oss/kubernetes/autoscaler/cluster-autoscaler:v1.22.1],SizeBytes:99962810,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/kube-addon-manager@sha256:32e2836018c96e73533bd4642fe438e465b81dcbfa8b7b61935a6f4d0246c7ae mcr.microsoft.com/oss/kubernetes/kube-addon-manager:v9.1.3],SizeBytes:86832059,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/kube-addon-manager@sha256:92c2c5aad9012ee32d2a43a74966cc0adc6ccb1705ad15abb10485ecf406d88b mcr.microsoft.com/oss/kubernetes/kube-addon-manager:v9.1.5],SizeBytes:84094027,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/metrics-server@sha256:1ef9d57ce41ffcc328b92494c3bfafe401e0b9a1694a295301a1385337d52815 mcr.microsoft.com/oss/kubernetes/metrics-server:v0.5.2],SizeBytes:64327621,},ContainerImage{Names:[mcr.microsoft.com/oss/nvidia/k8s-device-plugin@sha256:0f5b52bf28239234e831697d96db63ac03cde70fe68058f964504ab7564ee810 
mcr.microsoft.com/oss/nvidia/k8s-device-plugin:1.0.0-beta6],SizeBytes:64160241,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner@sha256:e9ddadc44ba87a4a27f67e54760a14f9986885b534b3dff170a14eae1e35d213 mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.0.0],SizeBytes:56881280,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-resizer@sha256:c5bb71ceaac60b1a4b58739fa07b709f6248c452ff6272a384d2f7648895a750 mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.3.0],SizeBytes:54313772,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter@sha256:61849a026511cf332c87d73d0a7aed803b510c3ede197ec755389686d490de72 mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v4.2.1],SizeBytes:54210936,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-attacher@sha256:6b41e7153ebdfdc1501aa65184624bc15fd33a52d93f88ec3a758d0f8c9b8c10 mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.3.0],SizeBytes:53842561,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/snapshot-controller@sha256:be5a8dc1f990828f653c77bd0a0f1bbd13197c3019f6e1d99d590389bac36705 mcr.microsoft.com/oss/kubernetes-csi/snapshot-controller:v4.2.1],SizeBytes:51575245,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/azure-cloud-controller-manager@sha256:0ad67f9919522a07318034641ae09bf2079b417e9944c65914410594ce645468 mcr.microsoft.com/oss/kubernetes/azure-cloud-controller-manager:v1.1.4],SizeBytes:51478397,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/azure-cloud-controller-manager@sha256:31ec4f7daccd3e7a8504e12657d7830651ecacbe4a487daca1b1b7695a64b070 mcr.microsoft.com/oss/kubernetes/azure-cloud-controller-manager:v1.23.1],SizeBytes:51249021,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager@sha256:011712ed90fb8efcf27928b0a47ed04b98baebb31cb1b2d8ab676977ec18eedc mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager:v1.1.4],SizeBytes:50868093,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager@sha256:075ea1f8270312350f1396ab6677251e803e61a523822d5abfa5e6acd180cfab mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager:v1.23.11],SizeBytes:50806891,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager@sha256:0f9a8fbaed65192ed7dd795be4f9c1dc48ebdef0a241fb62d456f4bed40d9875 mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager:v1.23.1],SizeBytes:50679677,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/ip-masq-agent@sha256:1244155f2ed3f33ff154cc343b8ad285f3391d95afd7d4b1c6dcc420bc0ba3cf mcr.microsoft.com/oss/kubernetes/ip-masq-agent:v2.5.0],SizeBytes:50146762,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/azure-cloud-controller-manager@sha256:7c907ff70b90a0bdf8fae63bd744018469dd9839cde1fd0515b93e0bbd14b34e mcr.microsoft.com/oss/kubernetes/azure-cloud-controller-manager:v1.0.8],SizeBytes:48963453,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager@sha256:3987d7a8c6922ce1952ee19c5cb6ea75aac7b7c1b07aa79277ad038c69fb7a31 mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager:v1.0.8],SizeBytes:48349053,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/coredns@sha256:f873bf7f0928461efe10697fa76cf0ad7a1ae3041c5b57b50dd3d0b72d273f8c mcr.microsoft.com/oss/kubernetes/coredns:1.8.6],SizeBytes:46804601,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/azure-cloud-controller-manager@sha256:8073113a20882642a980b338635cdc5945e5673a18aef192090e6fde2b89a75c 
mcr.microsoft.com/oss/kubernetes/azure-cloud-controller-manager:v0.6.0],SizeBytes:45909032,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager@sha256:6a32329628bdea3c6d75e98aad6155b65d2e2b98ca616eb33f9ac562912804c6 mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager:v0.6.0],SizeBytes:45229096,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/azure-cloud-controller-manager@sha256:ef6c4ba564b4d11d270f7d1563c50fbeb30ccc3b94146e5059228c49f95875f5 mcr.microsoft.com/oss/kubernetes/azure-cloud-controller-manager:v0.7.11],SizeBytes:44916605,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager@sha256:dbcc384758ba5ca6d249596d471292ed3785e31cdb854d48b84d70794b669b4c mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager:v0.7.11],SizeBytes:43679613,},ContainerImage{Names:[mcr.microsoft.com/oss/etcd-io/etcd@sha256:cf587862e3f1b6fa4d9a2565520a34f164bdf72c50f37af8c3c668160593246e mcr.microsoft.com/oss/etcd-io/etcd:v3.3.25],SizeBytes:41832119,},ContainerImage{Names:[mcr.microsoft.com/k8s/aad-pod-identity/mic@sha256:bd9465be94966b9a917e1e3904fa5e63dd91772ccadf304e18ffd8e4ad8ccedd mcr.microsoft.com/k8s/aad-pod-identity/mic:1.6.1],SizeBytes:41374894,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/autoscaler/cluster-proportional-autoscaler@sha256:c849d75d61943ce7f51b4c049f1a79d19a08253966c8f49c4cfb6414cc33db8b mcr.microsoft.com/oss/kubernetes/autoscaler/cluster-proportional-autoscaler:1.8.5],SizeBytes:40661903,},ContainerImage{Names:[mcr.microsoft.com/k8s/aad-pod-identity/nmi@sha256:02128fefcdb7593ac53fc342e2c53a0fc6fabd813036bf60457bf43cc2940116 mcr.microsoft.com/k8s/aad-pod-identity/nmi:1.6.1],SizeBytes:38007982,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:921f301c44dda06a325164accf22e78ecc570b5c7d9d6ee4c66bd8cbb2b60b9a mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.16],SizeBytes:26970670,},ContainerImage{Names:[mcr.microsoft.com/k8s/kms/keyvault@sha256:1a27e175f8c125209e32d2957b5509fe20757bd8cb309ff9da598799b56326fb mcr.microsoft.com/k8s/kms/keyvault:v0.0.10],SizeBytes:23077387,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar@sha256:348b2d4eebc8da38687755a69b6c21035be232325a6bcde54e5ec4e04689fd93 mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.5.0],SizeBytes:19581025,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar@sha256:dbec3a8166686b09b242176ab5b99e993da4126438bbce68147c3fd654f35662 mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.4.0],SizeBytes:19547289,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/livenessprobe@sha256:e01f5dae19d7e1be536606fe5deb893417429486b628b816d80ffa0e441eeae8 mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.6.0],SizeBytes:17614587,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/livenessprobe@sha256:c96a6255c42766f6b8bb1a7cda02b0060ab1b20b2e2dafcc64ec09e7646745a6 mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.5.0],SizeBytes:17573341,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16028126,},ContainerImage{Names:[mcr.microsoft.com/oss/busybox/busybox@sha256:582a641242b49809af3a1a522f9aae8c3f047d1c6ca1dd9d8cdabd349e45b1a9 
mcr.microsoft.com/oss/busybox/busybox:1.33.1],SizeBytes:1235829,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes/pause@sha256:e3b8c20681593c21b344ad801fbb8abaf564427ee3a57a9fcfa3b455f917ce46 mcr.microsoft.com/oss/kubernetes/pause:3.4.1],SizeBytes:682696,},},VolumesInUse:[kubernetes.io/csi/test.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-12bd4656-21a3-40df-a306-91d7aa68b065 kubernetes.io/csi/test.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-169e9d04-e666-4658-8be9-2500f57af672 kubernetes.io/csi/test.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-8e918b0c-8264-49a6-8642-58349961be6c],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 13 08:38:05.320: INFO: ... skipping 112 lines ... [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m [91m[1mshould access to two volumes with the same volume mode and retain data across pod recreation on the same node [Measurement][0m [90mtest/e2e/storage/testsuites/multivolume.go:138[0m [91mMay 13 08:38:04.669: while cleanup resource Unexpected error: <errors.aggregate | len:1, cap:1>: [ [ { msg: "persistent Volume pvc-3454e852-3c34-4a81-a7f6-b0cedb788239 not deleted by dynamic provisioner: PersistentVolume pvc-3454e852-3c34-4a81-a7f6-b0cedb788239 still exists within 5m0s", err: { s: "PersistentVolume pvc-3454e852-3c34-4a81-a7f6-b0cedb788239 still exists within 5m0s", ... skipping 3 lines ... 
] persistent Volume pvc-3454e852-3c34-4a81-a7f6-b0cedb788239 not deleted by dynamic provisioner: PersistentVolume pvc-3454e852-3c34-4a81-a7f6-b0cedb788239 still exists within 5m0s occurred[0m test/e2e/storage/testsuites/multivolume.go:129 [90m------------------------------[0m {"msg":"FAILED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node","total":34,"completed":4,"skipped":303,"failed":1,"failures":["External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node"]} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 May 13 08:38:13.910: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping ... skipping 102 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the single read-only volume from pods on the same node [90mtest/e2e/storage/testsuites/multivolume.go:423[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node","total":38,"completed":5,"skipped":435,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 May 13 08:38:19.103: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 239 lines ... 
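Each spec also emits a one-line JSON progress record (the {"msg":..., "total":..., "failed":...} lines, including the FAILED entry just above). For post-processing a run like this one and pulling out only the failing specs, a small decoder along the following lines would work; the struct fields are inferred from those lines, and this is a log-analysis sketch, not part of the test suite:

```go
// Hypothetical log-analysis sketch; field names inferred from the JSON lines in this log.
package e2esketch

import "encoding/json"

// ginkgoResult mirrors lines such as
// {"msg":"FAILED ...","total":34,"completed":4,"skipped":303,"failed":1,"failures":["..."]}.
type ginkgoResult struct {
	Msg       string   `json:"msg"`
	Total     int      `json:"total"`
	Completed int      `json:"completed"`
	Skipped   int      `json:"skipped"`
	Failed    int      `json:"failed"`
	Failures  []string `json:"failures,omitempty"`
}

// parseResultLine decodes one progress line; callers can keep only entries with
// Failed > 0 to list the failing specs of a run.
func parseResultLine(line string) (ginkgoResult, error) {
	var r ginkgoResult
	err := json.Unmarshal([]byte(line), &r)
	return r, err
}
```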
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the single volume from pods on the same node [90mtest/e2e/storage/testsuites/multivolume.go:298[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node","total":32,"completed":6,"skipped":367,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 May 13 08:40:08.949: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 3 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:242[0m [36mDriver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping[0m test/e2e/storage/external/external.go:262 [90m------------------------------[0m ... skipping 8 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:269[0m [36mDriver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping[0m test/e2e/storage/external/external.go:262 [90m------------------------------[0m ... skipping 266 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral [90mtest/e2e/storage/framework/testsuite.go:50[0m should create read-only inline ephemeral volume [90mtest/e2e/storage/testsuites/ephemeral.go:175[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume","total":33,"completed":6,"skipped":433,"failed":0} [36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] subPath[0m [1mshould support file as subpath [LinuxOnly][0m [37mtest/e2e/storage/testsuites/subpath.go:232[0m ... skipping 17 lines ... May 13 08:36:44.590: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.com6lbz8] to have phase Bound May 13 08:36:44.697: INFO: PersistentVolumeClaim test.csi.azure.com6lbz8 found but phase is Pending instead of Bound. May 13 08:36:46.805: INFO: PersistentVolumeClaim test.csi.azure.com6lbz8 found but phase is Pending instead of Bound. 
May 13 08:36:48.913: INFO: PersistentVolumeClaim test.csi.azure.com6lbz8 found and phase=Bound (4.323483637s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-rr9m [1mSTEP[0m: Creating a pod to test atomic-volume-subpath May 13 08:36:49.239: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-rr9m" in namespace "provisioning-3884" to be "Succeeded or Failed" May 13 08:36:49.347: INFO: Pod "pod-subpath-test-dynamicpv-rr9m": Phase="Pending", Reason="", readiness=false. Elapsed: 108.08536ms May 13 08:36:51.456: INFO: Pod "pod-subpath-test-dynamicpv-rr9m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21666189s May 13 08:36:53.565: INFO: Pod "pod-subpath-test-dynamicpv-rr9m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.325532455s May 13 08:36:55.674: INFO: Pod "pod-subpath-test-dynamicpv-rr9m": Phase="Pending", Reason="", readiness=false. Elapsed: 6.435225045s May 13 08:36:57.783: INFO: Pod "pod-subpath-test-dynamicpv-rr9m": Phase="Pending", Reason="", readiness=false. Elapsed: 8.544157227s May 13 08:36:59.892: INFO: Pod "pod-subpath-test-dynamicpv-rr9m": Phase="Pending", Reason="", readiness=false. Elapsed: 10.652586766s ... skipping 26 lines ... May 13 08:37:56.855: INFO: Pod "pod-subpath-test-dynamicpv-rr9m": Phase="Running", Reason="", readiness=true. Elapsed: 1m7.615394274s May 13 08:37:58.963: INFO: Pod "pod-subpath-test-dynamicpv-rr9m": Phase="Running", Reason="", readiness=true. Elapsed: 1m9.723590835s May 13 08:38:01.076: INFO: Pod "pod-subpath-test-dynamicpv-rr9m": Phase="Running", Reason="", readiness=true. Elapsed: 1m11.836553389s May 13 08:38:03.185: INFO: Pod "pod-subpath-test-dynamicpv-rr9m": Phase="Running", Reason="", readiness=false. Elapsed: 1m13.94594892s May 13 08:38:05.293: INFO: Pod "pod-subpath-test-dynamicpv-rr9m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m16.053800461s [1mSTEP[0m: Saw pod success May 13 08:38:05.293: INFO: Pod "pod-subpath-test-dynamicpv-rr9m" satisfied condition "Succeeded or Failed" May 13 08:38:05.401: INFO: Trying to get logs from node k8s-agentpool1-42137015-vmss000001 pod pod-subpath-test-dynamicpv-rr9m container test-container-subpath-dynamicpv-rr9m: <nil> [1mSTEP[0m: delete the pod May 13 08:38:05.628: INFO: Waiting for pod pod-subpath-test-dynamicpv-rr9m to disappear May 13 08:38:05.734: INFO: Pod pod-subpath-test-dynamicpv-rr9m no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-rr9m May 13 08:38:05.734: INFO: Deleting pod "pod-subpath-test-dynamicpv-rr9m" in namespace "provisioning-3884" ... skipping 42 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should support file as subpath [LinuxOnly] [90mtest/e2e/storage/testsuites/subpath.go:232[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":38,"completed":5,"skipped":329,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] volumes test/e2e/storage/framework/testsuite.go:51 May 13 08:40:24.650: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping ... skipping 117 lines ... 
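Once a test pod reaches "Succeeded or Failed", the log shows the same teardown every time: fetch the test container's logs, delete the pod, and wait for the pod object to disappear. A possible sketch of that sequence with client-go (the function name is hypothetical, not the framework's helper, and the 5-minute disappearance timeout is an assumption based on the timings above):

```go
// Hypothetical sketch only; mirrors the "get logs / delete the pod / wait to
// disappear" steps visible in the log, not the framework's actual code.
package e2esketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// fetchLogsAndDeletePod reads the container log for debugging, deletes the pod,
// and polls until the pod no longer exists.
func fetchLogsAndDeletePod(ctx context.Context, cs kubernetes.Interface, ns, podName, container string) (string, error) {
	raw, err := cs.CoreV1().Pods(ns).GetLogs(podName, &corev1.PodLogOptions{Container: container}).DoRaw(ctx)
	if err != nil {
		return "", err
	}
	if err := cs.CoreV1().Pods(ns).Delete(ctx, podName, metav1.DeleteOptions{}); err != nil {
		return string(raw), err
	}
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		_, getErr := cs.CoreV1().Pods(ns).Get(ctx, podName, metav1.GetOptions{})
		if apierrors.IsNotFound(getErr) {
			return true, nil // pod no longer exists
		}
		return false, getErr
	})
	return string(raw), err
}
```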
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits [90mtest/e2e/storage/framework/testsuite.go:50[0m should support volume limits [Serial] [90mtest/e2e/storage/testsuites/volumelimits.go:127[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]","total":34,"completed":5,"skipped":280,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] ... skipping 160 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] provisioning [90mtest/e2e/storage/framework/testsuite.go:50[0m should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource] [90mtest/e2e/storage/testsuites/provisioning.go:208[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]","total":38,"completed":6,"skipped":516,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (ext4)] volumes[0m [1mshould allow exec of files on the volume[0m [37mtest/e2e/storage/testsuites/volumes.go:198[0m ... skipping 17 lines ... May 13 08:40:11.290: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comz594n] to have phase Bound May 13 08:40:11.397: INFO: PersistentVolumeClaim test.csi.azure.comz594n found but phase is Pending instead of Bound. May 13 08:40:13.506: INFO: PersistentVolumeClaim test.csi.azure.comz594n found but phase is Pending instead of Bound. May 13 08:40:15.615: INFO: PersistentVolumeClaim test.csi.azure.comz594n found and phase=Bound (4.324898541s) [1mSTEP[0m: Creating pod exec-volume-test-dynamicpv-tv99 [1mSTEP[0m: Creating a pod to test exec-volume-test May 13 08:40:15.941: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-tv99" in namespace "volume-3326" to be "Succeeded or Failed" May 13 08:40:16.048: INFO: Pod "exec-volume-test-dynamicpv-tv99": Phase="Pending", Reason="", readiness=false. Elapsed: 107.379044ms May 13 08:40:18.157: INFO: Pod "exec-volume-test-dynamicpv-tv99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216004444s May 13 08:40:20.267: INFO: Pod "exec-volume-test-dynamicpv-tv99": Phase="Pending", Reason="", readiness=false. Elapsed: 4.325989386s May 13 08:40:22.375: INFO: Pod "exec-volume-test-dynamicpv-tv99": Phase="Pending", Reason="", readiness=false. Elapsed: 6.434233624s May 13 08:40:24.484: INFO: Pod "exec-volume-test-dynamicpv-tv99": Phase="Pending", Reason="", readiness=false. Elapsed: 8.543155274s May 13 08:40:26.592: INFO: Pod "exec-volume-test-dynamicpv-tv99": Phase="Pending", Reason="", readiness=false. Elapsed: 10.651391665s May 13 08:40:28.700: INFO: Pod "exec-volume-test-dynamicpv-tv99": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.759379762s May 13 08:40:30.811: INFO: Pod "exec-volume-test-dynamicpv-tv99": Phase="Pending", Reason="", readiness=false. Elapsed: 14.87020851s May 13 08:40:32.919: INFO: Pod "exec-volume-test-dynamicpv-tv99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.978536876s [1mSTEP[0m: Saw pod success May 13 08:40:32.919: INFO: Pod "exec-volume-test-dynamicpv-tv99" satisfied condition "Succeeded or Failed" May 13 08:40:33.027: INFO: Trying to get logs from node k8s-agentpool1-42137015-vmss000001 pod exec-volume-test-dynamicpv-tv99 container exec-container-dynamicpv-tv99: <nil> [1mSTEP[0m: delete the pod May 13 08:40:33.276: INFO: Waiting for pod exec-volume-test-dynamicpv-tv99 to disappear May 13 08:40:33.384: INFO: Pod exec-volume-test-dynamicpv-tv99 no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-dynamicpv-tv99 May 13 08:40:33.384: INFO: Deleting pod "exec-volume-test-dynamicpv-tv99" in namespace "volume-3326" ... skipping 39 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext4)] volumes [90mtest/e2e/storage/framework/testsuite.go:50[0m should allow exec of files on the volume [90mtest/e2e/storage/testsuites/volumes.go:198[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume","total":32,"completed":7,"skipped":819,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-stress test/e2e/storage/framework/testsuite.go:51 May 13 08:42:47.149: INFO: Driver test.csi.azure.com doesn't specify stress test options -- skipping ... skipping 271 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with the same volume mode and retain data across pod recreation on different node [90mtest/e2e/storage/testsuites/multivolume.go:168[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node","total":28,"completed":6,"skipped":841,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] test/e2e/storage/framework/testsuite.go:51 May 13 08:42:53.205: INFO: Driver test.csi.azure.com doesn't specify snapshot stress test options -- skipping ... skipping 34 lines ... 
test/e2e/storage/framework/testsuite.go:127 [90m------------------------------[0m [36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] subPath[0m [1mshould fail if subpath with backstepping is outside the volume [Slow][LinuxOnly][0m [37mtest/e2e/storage/testsuites/subpath.go:280[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client May 13 08:40:33.766: INFO: >>> kubeConfig: /root/tmp4042645124/kubeconfig/kubeconfig.westeurope.json [1mSTEP[0m: Building a namespace api object, basename provisioning [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] test/e2e/storage/testsuites/subpath.go:280 May 13 08:40:34.518: INFO: Creating resource for dynamic PV May 13 08:40:34.518: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(test.csi.azure.com) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass provisioning-2433-e2e-scmxwjx [1mSTEP[0m: creating a claim May 13 08:40:34.625: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil May 13 08:40:34.734: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comv6hp6] to have phase Bound May 13 08:40:34.842: INFO: PersistentVolumeClaim test.csi.azure.comv6hp6 found but phase is Pending instead of Bound. May 13 08:40:36.950: INFO: PersistentVolumeClaim test.csi.azure.comv6hp6 found but phase is Pending instead of Bound. May 13 08:40:39.057: INFO: PersistentVolumeClaim test.csi.azure.comv6hp6 found and phase=Bound (4.322882137s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-4jxb [1mSTEP[0m: Checking for subpath error in container status May 13 08:41:45.598: INFO: Deleting pod "pod-subpath-test-dynamicpv-4jxb" in namespace "provisioning-2433" May 13 08:41:45.709: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-4jxb" to be fully deleted [1mSTEP[0m: Deleting pod May 13 08:41:47.924: INFO: Deleting pod "pod-subpath-test-dynamicpv-4jxb" in namespace "provisioning-2433" [1mSTEP[0m: Deleting pvc May 13 08:41:48.032: INFO: Deleting PersistentVolumeClaim "test.csi.azure.comv6hp6" ... skipping 22 lines ... 
[32m• [SLOW TEST:146.546 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [90mtest/e2e/storage/testsuites/subpath.go:280[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]","total":34,"completed":6,"skipped":320,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (block volmode)] multiVolume [Slow][0m [1mshould access to two volumes with the same volume mode and retain data across pod recreation on the same node[0m [37mtest/e2e/storage/testsuites/multivolume.go:138[0m ... skipping 206 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with the same volume mode and retain data across pod recreation on the same node [90mtest/e2e/storage/testsuites/multivolume.go:138[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node","total":34,"completed":5,"skipped":372,"failed":1,"failures":["External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node"]} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] subPath[0m [1mshould support readOnly directory specified in the volumeMount[0m [37mtest/e2e/storage/testsuites/subpath.go:367[0m ... skipping 17 lines ... May 13 08:42:30.294: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comf66br] to have phase Bound May 13 08:42:30.401: INFO: PersistentVolumeClaim test.csi.azure.comf66br found but phase is Pending instead of Bound. May 13 08:42:32.510: INFO: PersistentVolumeClaim test.csi.azure.comf66br found but phase is Pending instead of Bound. May 13 08:42:34.618: INFO: PersistentVolumeClaim test.csi.azure.comf66br found and phase=Bound (4.3242097s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-jgxq [1mSTEP[0m: Creating a pod to test subpath May 13 08:42:34.943: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-jgxq" in namespace "provisioning-7560" to be "Succeeded or Failed" May 13 08:42:35.050: INFO: Pod "pod-subpath-test-dynamicpv-jgxq": Phase="Pending", Reason="", readiness=false. Elapsed: 107.272476ms May 13 08:42:37.159: INFO: Pod "pod-subpath-test-dynamicpv-jgxq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215864864s May 13 08:42:39.270: INFO: Pod "pod-subpath-test-dynamicpv-jgxq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327294066s May 13 08:42:41.379: INFO: Pod "pod-subpath-test-dynamicpv-jgxq": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.435651146s May 13 08:42:43.487: INFO: Pod "pod-subpath-test-dynamicpv-jgxq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.544456924s May 13 08:42:45.596: INFO: Pod "pod-subpath-test-dynamicpv-jgxq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.65298146s ... skipping 25 lines ... May 13 08:43:40.426: INFO: Pod "pod-subpath-test-dynamicpv-jgxq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m5.483209159s May 13 08:43:42.534: INFO: Pod "pod-subpath-test-dynamicpv-jgxq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m7.59151571s May 13 08:43:44.643: INFO: Pod "pod-subpath-test-dynamicpv-jgxq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m9.699667285s May 13 08:43:46.751: INFO: Pod "pod-subpath-test-dynamicpv-jgxq": Phase="Pending", Reason="", readiness=false. Elapsed: 1m11.808209646s May 13 08:43:48.859: INFO: Pod "pod-subpath-test-dynamicpv-jgxq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m13.916295983s [1mSTEP[0m: Saw pod success May 13 08:43:48.859: INFO: Pod "pod-subpath-test-dynamicpv-jgxq" satisfied condition "Succeeded or Failed" May 13 08:43:48.967: INFO: Trying to get logs from node k8s-agentpool1-42137015-vmss000001 pod pod-subpath-test-dynamicpv-jgxq container test-container-subpath-dynamicpv-jgxq: <nil> [1mSTEP[0m: delete the pod May 13 08:43:49.398: INFO: Waiting for pod pod-subpath-test-dynamicpv-jgxq to disappear May 13 08:43:49.505: INFO: Pod pod-subpath-test-dynamicpv-jgxq no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-jgxq May 13 08:43:49.505: INFO: Deleting pod "pod-subpath-test-dynamicpv-jgxq" in namespace "provisioning-7560" ... skipping 23 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should support readOnly directory specified in the volumeMount [90mtest/e2e/storage/testsuites/subpath.go:367[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":38,"completed":7,"skipped":538,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning test/e2e/storage/framework/testsuite.go:51 May 13 08:44:31.412: INFO: Distro debian doesn't support ntfs -- skipping ... skipping 111 lines ... 
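The pod that just reported success for "should support readOnly directory specified in the volumeMount" mounts one directory of the provisioned volume read-only via subPath. A minimal sketch of that shape, assuming a generic busybox-style container (image, paths, and names below are placeholders, not the framework's generated objects):

apiVersion: v1
kind: Pod
metadata:
  name: example-subpath-pod             # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                      # placeholder image
    command: ["sh", "-c", "cat /test-volume/test-file"]
    volumeMounts:
    - name: data
      mountPath: /test-volume
      subPath: provisioning             # mount only this sub-directory of the volume
      readOnly: true                    # the property this spec exercises
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: example-claim          # the dynamically provisioned claim

The framework then waits for such a pod to reach Succeeded or Failed, which is the long Phase="Pending" polling sequence seen above.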
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (filesystem volmode)] volumeMode [90mtest/e2e/storage/framework/testsuite.go:50[0m should not mount / map unused volumes in a pod [LinuxOnly] [90mtest/e2e/storage/testsuites/volumemode.go:354[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":34,"completed":6,"skipped":379,"failed":1,"failures":["External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node"]} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow][0m [1mshould concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS][0m [37mtest/e2e/storage/testsuites/multivolume.go:323[0m ... skipping 108 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS] [90mtest/e2e/storage/testsuites/multivolume.go:323[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]","total":38,"completed":6,"skipped":368,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral ... skipping 228 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with the same volume mode and retain data across pod recreation on the same node [90mtest/e2e/storage/testsuites/multivolume.go:138[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node","total":33,"completed":7,"skipped":434,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes test/e2e/storage/framework/testsuite.go:51 May 13 08:45:28.568: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping ... skipping 161 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy [90mtest/e2e/storage/framework/testsuite.go:50[0m (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents [90mtest/e2e/storage/testsuites/fsgroupchangepolicy.go:216[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents","total":28,"completed":7,"skipped":945,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 May 13 08:46:29.540: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 59 lines ... May 13 08:44:32.404: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.com54tg7] to have phase Bound May 13 08:44:32.512: INFO: PersistentVolumeClaim test.csi.azure.com54tg7 found but phase is Pending instead of Bound. May 13 08:44:34.621: INFO: PersistentVolumeClaim test.csi.azure.com54tg7 found but phase is Pending instead of Bound. May 13 08:44:36.729: INFO: PersistentVolumeClaim test.csi.azure.com54tg7 found and phase=Bound (4.32478476s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-xcc6 [1mSTEP[0m: Creating a pod to test subpath May 13 08:44:37.057: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-xcc6" in namespace "provisioning-8993" to be "Succeeded or Failed" May 13 08:44:37.166: INFO: Pod "pod-subpath-test-dynamicpv-xcc6": Phase="Pending", Reason="", readiness=false. Elapsed: 108.77258ms May 13 08:44:39.275: INFO: Pod "pod-subpath-test-dynamicpv-xcc6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.217931473s May 13 08:44:41.386: INFO: Pod "pod-subpath-test-dynamicpv-xcc6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328340486s May 13 08:44:43.495: INFO: Pod "pod-subpath-test-dynamicpv-xcc6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.437614886s May 13 08:44:45.605: INFO: Pod "pod-subpath-test-dynamicpv-xcc6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.547807103s May 13 08:44:47.714: INFO: Pod "pod-subpath-test-dynamicpv-xcc6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.656974527s ... skipping 20 lines ... May 13 08:45:31.997: INFO: Pod "pod-subpath-test-dynamicpv-xcc6": Phase="Pending", Reason="", readiness=false. Elapsed: 54.939600346s May 13 08:45:34.106: INFO: Pod "pod-subpath-test-dynamicpv-xcc6": Phase="Pending", Reason="", readiness=false. Elapsed: 57.048472087s May 13 08:45:36.213: INFO: Pod "pod-subpath-test-dynamicpv-xcc6": Phase="Pending", Reason="", readiness=false. Elapsed: 59.156022059s May 13 08:45:38.326: INFO: Pod "pod-subpath-test-dynamicpv-xcc6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m1.268508545s May 13 08:45:40.434: INFO: Pod "pod-subpath-test-dynamicpv-xcc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m3.376959601s [1mSTEP[0m: Saw pod success May 13 08:45:40.434: INFO: Pod "pod-subpath-test-dynamicpv-xcc6" satisfied condition "Succeeded or Failed" May 13 08:45:40.542: INFO: Trying to get logs from node k8s-agentpool1-42137015-vmss000001 pod pod-subpath-test-dynamicpv-xcc6 container test-container-subpath-dynamicpv-xcc6: <nil> [1mSTEP[0m: delete the pod May 13 08:45:40.788: INFO: Waiting for pod pod-subpath-test-dynamicpv-xcc6 to disappear May 13 08:45:40.894: INFO: Pod pod-subpath-test-dynamicpv-xcc6 no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-xcc6 May 13 08:45:40.895: INFO: Deleting pod "pod-subpath-test-dynamicpv-xcc6" in namespace "provisioning-8993" [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-xcc6 [1mSTEP[0m: Creating a pod to test subpath May 13 08:45:41.114: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-xcc6" in namespace "provisioning-8993" to be "Succeeded or Failed" May 13 08:45:41.221: INFO: Pod "pod-subpath-test-dynamicpv-xcc6": Phase="Pending", Reason="", readiness=false. Elapsed: 107.174529ms May 13 08:45:43.329: INFO: Pod "pod-subpath-test-dynamicpv-xcc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215612177s May 13 08:45:45.438: INFO: Pod "pod-subpath-test-dynamicpv-xcc6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323798022s May 13 08:45:47.546: INFO: Pod "pod-subpath-test-dynamicpv-xcc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.432254407s [1mSTEP[0m: Saw pod success May 13 08:45:47.546: INFO: Pod "pod-subpath-test-dynamicpv-xcc6" satisfied condition "Succeeded or Failed" May 13 08:45:47.653: INFO: Trying to get logs from node k8s-agentpool1-42137015-vmss000001 pod pod-subpath-test-dynamicpv-xcc6 container test-container-subpath-dynamicpv-xcc6: <nil> [1mSTEP[0m: delete the pod May 13 08:45:47.878: INFO: Waiting for pod pod-subpath-test-dynamicpv-xcc6 to disappear May 13 08:45:47.985: INFO: Pod pod-subpath-test-dynamicpv-xcc6 no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-xcc6 May 13 08:45:47.985: INFO: Deleting pod "pod-subpath-test-dynamicpv-xcc6" in namespace "provisioning-8993" ... skipping 23 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should support existing directories when readOnly specified in the volumeSource [90mtest/e2e/storage/testsuites/subpath.go:397[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":38,"completed":8,"skipped":629,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 May 13 08:46:29.859: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 145 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] provisioning [90mtest/e2e/storage/framework/testsuite.go:50[0m should provision storage with pvc data source [90mtest/e2e/storage/testsuites/provisioning.go:421[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source","total":32,"completed":8,"skipped":911,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes test/e2e/storage/framework/testsuite.go:51 May 13 08:46:38.408: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 187 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext3)] volumes [90mtest/e2e/storage/framework/testsuite.go:50[0m should store data [90mtest/e2e/storage/testsuites/volumes.go:161[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext3)] volumes should store data","total":34,"completed":7,"skipped":332,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy[0m [1m(Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents[0m [37mtest/e2e/storage/testsuites/fsgroupchangepolicy.go:216[0m ... skipping 119 lines ... 
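One of the specs that passed in this stretch, "should provision storage with pvc data source", exercises volume cloning: a new claim whose dataSource refers to an existing claim of the same storage class. A hedged sketch, assuming a block-mode source as in that test pattern (all names are placeholders):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloned-claim                    # illustrative
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: example-azuredisk-sc
  volumeMode: Block                     # the test pattern above provisions block-mode volumes
  dataSource:
    kind: PersistentVolumeClaim         # clone an existing claim instead of starting empty
    name: source-claim                  # illustrative source claim
  resources:
    requests:
      storage: 5Gi

The CSI driver is asked to populate the new volume from the source claim, after which binding proceeds exactly as for an ordinary dynamically provisioned claim.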
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy [90mtest/e2e/storage/framework/testsuite.go:50[0m (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents [90mtest/e2e/storage/testsuites/fsgroupchangepolicy.go:216[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents","total":34,"completed":7,"skipped":408,"failed":1,"failures":["External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node"]} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] subPath[0m [1mshould support existing single file [LinuxOnly][0m [37mtest/e2e/storage/testsuites/subpath.go:221[0m ... skipping 17 lines ... May 13 08:45:29.597: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comqc9rd] to have phase Bound May 13 08:45:29.718: INFO: PersistentVolumeClaim test.csi.azure.comqc9rd found but phase is Pending instead of Bound. May 13 08:45:31.827: INFO: PersistentVolumeClaim test.csi.azure.comqc9rd found but phase is Pending instead of Bound. May 13 08:45:33.936: INFO: PersistentVolumeClaim test.csi.azure.comqc9rd found and phase=Bound (4.338737143s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-cd5q [1mSTEP[0m: Creating a pod to test subpath May 13 08:45:34.261: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-cd5q" in namespace "provisioning-7995" to be "Succeeded or Failed" May 13 08:45:34.368: INFO: Pod "pod-subpath-test-dynamicpv-cd5q": Phase="Pending", Reason="", readiness=false. Elapsed: 107.844399ms May 13 08:45:36.478: INFO: Pod "pod-subpath-test-dynamicpv-cd5q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217199849s May 13 08:45:38.586: INFO: Pod "pod-subpath-test-dynamicpv-cd5q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32499256s May 13 08:45:40.694: INFO: Pod "pod-subpath-test-dynamicpv-cd5q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.433071029s May 13 08:45:42.802: INFO: Pod "pod-subpath-test-dynamicpv-cd5q": Phase="Pending", Reason="", readiness=false. Elapsed: 8.541489071s May 13 08:45:44.911: INFO: Pod "pod-subpath-test-dynamicpv-cd5q": Phase="Pending", Reason="", readiness=false. Elapsed: 10.650930271s ... skipping 27 lines ... May 13 08:46:43.967: INFO: Pod "pod-subpath-test-dynamicpv-cd5q": Phase="Pending", Reason="", readiness=false. Elapsed: 1m9.706057878s May 13 08:46:46.075: INFO: Pod "pod-subpath-test-dynamicpv-cd5q": Phase="Pending", Reason="", readiness=false. Elapsed: 1m11.814823186s May 13 08:46:48.185: INFO: Pod "pod-subpath-test-dynamicpv-cd5q": Phase="Pending", Reason="", readiness=false. Elapsed: 1m13.924676726s May 13 08:46:50.293: INFO: Pod "pod-subpath-test-dynamicpv-cd5q": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m16.0328577s May 13 08:46:52.402: INFO: Pod "pod-subpath-test-dynamicpv-cd5q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m18.141288198s [1mSTEP[0m: Saw pod success May 13 08:46:52.402: INFO: Pod "pod-subpath-test-dynamicpv-cd5q" satisfied condition "Succeeded or Failed" May 13 08:46:52.511: INFO: Trying to get logs from node k8s-agentpool1-42137015-vmss000002 pod pod-subpath-test-dynamicpv-cd5q container test-container-subpath-dynamicpv-cd5q: <nil> [1mSTEP[0m: delete the pod May 13 08:46:52.794: INFO: Waiting for pod pod-subpath-test-dynamicpv-cd5q to disappear May 13 08:46:52.902: INFO: Pod pod-subpath-test-dynamicpv-cd5q no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-cd5q May 13 08:46:52.902: INFO: Deleting pod "pod-subpath-test-dynamicpv-cd5q" in namespace "provisioning-7995" ... skipping 29 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should support existing single file [LinuxOnly] [90mtest/e2e/storage/testsuites/subpath.go:221[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":33,"completed":8,"skipped":533,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] volumes[0m [1mshould store data[0m [37mtest/e2e/storage/testsuites/volumes.go:161[0m ... skipping 104 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] volumes [90mtest/e2e/storage/framework/testsuite.go:50[0m should store data [90mtest/e2e/storage/testsuites/volumes.go:161[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":34,"completed":8,"skipped":431,"failed":1,"failures":["External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node"]} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath test/e2e/storage/framework/testsuite.go:51 May 13 08:49:49.547: INFO: Distro debian doesn't support ntfs -- skipping ... skipping 3 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:242[0m [36mDistro debian doesn't support ntfs -- skipping[0m test/e2e/storage/framework/testsuite.go:127 [90m------------------------------[0m ... skipping 124 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy [90mtest/e2e/storage/framework/testsuite.go:50[0m (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents [90mtest/e2e/storage/testsuites/fsgroupchangepolicy.go:216[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents","total":28,"completed":8,"skipped":999,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m May 13 08:50:03.709: INFO: Running AfterSuite actions on all nodes May 13 08:50:03.709: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 May 13 08:50:03.709: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 ... skipping 106 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the single read-only volume from pods on the same node [90mtest/e2e/storage/testsuites/multivolume.go:423[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node","total":38,"completed":7,"skipped":384,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode test/e2e/storage/framework/testsuite.go:51 May 13 08:50:06.581: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 3 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Pre-provisioned PV (block volmode)] volumeMode [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to use a volume in a pod with mismatched mode [Slow] [BeforeEach][0m [90mtest/e2e/storage/testsuites/volumemode.go:299[0m [36mDriver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping[0m test/e2e/storage/external/external.go:262 [90m------------------------------[0m ... skipping 105 lines ... May 13 08:47:05.081: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil May 13 08:47:05.190: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comkkz86] to have phase Bound May 13 08:47:05.298: INFO: PersistentVolumeClaim test.csi.azure.comkkz86 found but phase is Pending instead of Bound. May 13 08:47:07.414: INFO: PersistentVolumeClaim test.csi.azure.comkkz86 found but phase is Pending instead of Bound. 
May 13 08:47:09.522: INFO: PersistentVolumeClaim test.csi.azure.comkkz86 found and phase=Bound (4.331650016s) [1mSTEP[0m: Creating pod to format volume volume-prep-provisioning-2492 May 13 08:47:09.846: INFO: Waiting up to 5m0s for pod "volume-prep-provisioning-2492" in namespace "provisioning-2492" to be "Succeeded or Failed" May 13 08:47:09.954: INFO: Pod "volume-prep-provisioning-2492": Phase="Pending", Reason="", readiness=false. Elapsed: 107.694508ms May 13 08:47:12.062: INFO: Pod "volume-prep-provisioning-2492": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216096449s May 13 08:47:14.170: INFO: Pod "volume-prep-provisioning-2492": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323980802s May 13 08:47:16.277: INFO: Pod "volume-prep-provisioning-2492": Phase="Pending", Reason="", readiness=false. Elapsed: 6.431435361s May 13 08:47:18.387: INFO: Pod "volume-prep-provisioning-2492": Phase="Pending", Reason="", readiness=false. Elapsed: 8.541312407s May 13 08:47:20.496: INFO: Pod "volume-prep-provisioning-2492": Phase="Pending", Reason="", readiness=false. Elapsed: 10.649854258s ... skipping 31 lines ... May 13 08:48:27.967: INFO: Pod "volume-prep-provisioning-2492": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.121291138s May 13 08:48:30.075: INFO: Pod "volume-prep-provisioning-2492": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.228953214s May 13 08:48:32.184: INFO: Pod "volume-prep-provisioning-2492": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.337847111s May 13 08:48:34.292: INFO: Pod "volume-prep-provisioning-2492": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.446061227s May 13 08:48:36.401: INFO: Pod "volume-prep-provisioning-2492": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m26.554999351s [1mSTEP[0m: Saw pod success May 13 08:48:36.401: INFO: Pod "volume-prep-provisioning-2492" satisfied condition "Succeeded or Failed" May 13 08:48:36.401: INFO: Deleting pod "volume-prep-provisioning-2492" in namespace "provisioning-2492" May 13 08:48:36.514: INFO: Wait up to 5m0s for pod "volume-prep-provisioning-2492" to be fully deleted [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-d2dv [1mSTEP[0m: Checking for subpath error in container status May 13 08:49:54.949: INFO: Deleting pod "pod-subpath-test-dynamicpv-d2dv" in namespace "provisioning-2492" May 13 08:49:55.057: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-d2dv" to be fully deleted [1mSTEP[0m: Deleting pod May 13 08:49:57.284: INFO: Deleting pod "pod-subpath-test-dynamicpv-d2dv" in namespace "provisioning-2492" [1mSTEP[0m: Deleting pvc May 13 08:49:57.397: INFO: Deleting PersistentVolumeClaim "test.csi.azure.comkkz86" ... skipping 19 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should verify container cannot write to subpath readonly volumes [Slow] [90mtest/e2e/storage/testsuites/subpath.go:425[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]","total":34,"completed":8,"skipped":359,"failed":0} [36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (block volmode)] multiVolume [Slow][0m [1mshould access to two volumes with the same volume mode and retain data across pod recreation on different node[0m [37mtest/e2e/storage/testsuites/multivolume.go:168[0m ... skipping 188 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with the same volume mode and retain data across pod recreation on different node [90mtest/e2e/storage/testsuites/multivolume.go:168[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node","total":38,"completed":9,"skipped":674,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (xfs)][Slow] volumes[0m [1mshould store data[0m [37mtest/e2e/storage/testsuites/volumes.go:161[0m ... skipping 114 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (xfs)][Slow] volumes [90mtest/e2e/storage/framework/testsuite.go:50[0m should store data [90mtest/e2e/storage/testsuites/volumes.go:161[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data","total":33,"completed":9,"skipped":537,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] ... skipping 98 lines ... 
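The block-volmode multiVolume specs in this stretch attach raw block volumes to pods rather than mounted filesystems. Sketched minimally, with placeholder names and image, the claim and the consuming pod differ from the filesystem case only in volumeMode and volumeDevices:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-block-claim             # illustrative
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: example-azuredisk-sc
  volumeMode: Block                     # raw block device instead of a formatted filesystem
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: example-block-pod               # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                      # placeholder image
    command: ["sh", "-c", "dd if=/dev/example-device of=/dev/null bs=1M count=1"]
    volumeDevices:                      # block volumes are exposed as devices, not mounts
    - name: data
      devicePath: /dev/example-device
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: example-block-claim

Data written through the device path survives pod deletion, which is what the "retain data across pod recreation" specs verify by recreating the pod on the same or a different node.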
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral [90mtest/e2e/storage/framework/testsuite.go:50[0m should create read-only inline ephemeral volume [90mtest/e2e/storage/testsuites/ephemeral.go:175[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume","total":34,"completed":9,"skipped":360,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (filesystem volmode)] volumeMode[0m [1mshould fail to use a volume in a pod with mismatched mode [Slow][0m [37mtest/e2e/storage/testsuites/volumemode.go:299[0m [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client May 13 08:50:57.231: INFO: >>> kubeConfig: /root/tmp4042645124/kubeconfig/kubeconfig.westeurope.json [1mSTEP[0m: Building a namespace api object, basename volumemode [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail to use a volume in a pod with mismatched mode [Slow] test/e2e/storage/testsuites/volumemode.go:299 May 13 08:50:58.000: INFO: Creating resource for dynamic PV May 13 08:50:58.000: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(test.csi.azure.com) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass volumemode-8576-e2e-sc9wz7b [1mSTEP[0m: creating a claim May 13 08:50:58.286: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comdf9nl] to have phase Bound May 13 08:50:58.395: INFO: PersistentVolumeClaim test.csi.azure.comdf9nl found but phase is Pending instead of Bound. May 13 08:51:00.510: INFO: PersistentVolumeClaim test.csi.azure.comdf9nl found but phase is Pending instead of Bound. May 13 08:51:02.620: INFO: PersistentVolumeClaim test.csi.azure.comdf9nl found and phase=Bound (4.333704429s) [1mSTEP[0m: Creating pod [1mSTEP[0m: Waiting for the pod to fail May 13 08:51:05.276: INFO: Deleting pod "pod-9b9a2dd3-2678-41c8-a39c-b3916df084ef" in namespace "volumemode-8576" May 13 08:51:05.388: INFO: Wait up to 5m0s for pod "pod-9b9a2dd3-2678-41c8-a39c-b3916df084ef" to be fully deleted [1mSTEP[0m: Deleting pvc May 13 08:51:07.605: INFO: Deleting PersistentVolumeClaim "test.csi.azure.comdf9nl" May 13 08:51:07.715: INFO: Waiting up to 5m0s for PersistentVolume pvc-17967c8f-3b1e-47ad-9f1e-76fa1e7d5368 to get deleted May 13 08:51:07.824: INFO: PersistentVolume pvc-17967c8f-3b1e-47ad-9f1e-76fa1e7d5368 found and phase=Released (108.715924ms) ... skipping 20 lines ... 
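The "mismatched mode" spec above provisions a Filesystem-mode claim and then waits for the consuming pod to fail, because the pod deliberately uses the claim the wrong way. A sketch of the mismatch, under the assumption that the filesystem claim is attached as a raw device (the opposite combination is equally invalid; names and image are placeholders):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fs-claim                        # illustrative; volumeMode defaults to Filesystem
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: example-azuredisk-sc
  volumeMode: Filesystem
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mismatched-pod                  # illustrative
spec:
  containers:
  - name: test-container
    image: busybox                      # placeholder image
    command: ["sleep", "3600"]
    volumeDevices:                      # invalid: a Filesystem claim cannot be consumed as a block device
    - name: data
      devicePath: /dev/example-device
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: fs-claim

Because the modes disagree, the kubelet refuses to set the volume up and the pod never starts, which is the failure the spec waits for before cleaning up the claim.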
[32m• [SLOW TEST:82.719 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (filesystem volmode)] volumeMode [90mtest/e2e/storage/framework/testsuite.go:50[0m should fail to use a volume in a pod with mismatched mode [Slow] [90mtest/e2e/storage/testsuites/volumemode.go:299[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]","total":38,"completed":10,"skipped":677,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral[0m [1mshould create read/write inline ephemeral volume[0m [37mtest/e2e/storage/testsuites/ephemeral.go:196[0m ... skipping 52 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral [90mtest/e2e/storage/framework/testsuite.go:50[0m should create read/write inline ephemeral volume [90mtest/e2e/storage/testsuites/ephemeral.go:196[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume","total":34,"completed":9,"skipped":491,"failed":1,"failures":["External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node"]} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 May 13 08:52:23.398: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping ... skipping 164 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the single read-only volume from pods on the same node [90mtest/e2e/storage/testsuites/multivolume.go:423[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node","total":38,"completed":8,"skipped":463,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] volumeIO test/e2e/storage/framework/testsuite.go:51 May 13 08:52:26.034: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping ... skipping 13 lines ... 
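The inline ephemeral specs in this stretch ("should create read-only inline ephemeral volume", "should create read/write inline ephemeral volume") use generic ephemeral volumes, where a per-pod claim is created from a template and deleted with the pod. A minimal sketch with placeholder names and image:

apiVersion: v1
kind: Pod
metadata:
  name: example-ephemeral-pod           # illustrative
spec:
  containers:
  - name: test-container
    image: busybox                      # placeholder image
    command: ["sh", "-c", "echo data > /mnt/test/file && sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    ephemeral:                          # generic ephemeral volume: a PVC created with, and deleted with, the pod
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: example-azuredisk-sc
          resources:
            requests:
              storage: 1Gi

The read-only variant additionally marks the mount readOnly; in both cases the generated claim is owned by the pod and garbage-collected when the pod goes away.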
test/e2e/storage/external/external.go:262 [90m------------------------------[0m [36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] subPath[0m [1mshould fail if non-existent subpath is outside the volume [Slow][LinuxOnly][0m [37mtest/e2e/storage/testsuites/subpath.go:269[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client May 13 08:52:26.037: INFO: >>> kubeConfig: /root/tmp4042645124/kubeconfig/kubeconfig.westeurope.json [1mSTEP[0m: Building a namespace api object, basename provisioning [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] test/e2e/storage/testsuites/subpath.go:269 May 13 08:52:26.772: INFO: Creating resource for dynamic PV May 13 08:52:26.772: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(test.csi.azure.com) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass provisioning-4524-e2e-sccwcwg [1mSTEP[0m: creating a claim May 13 08:52:26.880: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil May 13 08:52:26.987: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comf2p5w] to have phase Bound May 13 08:52:27.094: INFO: PersistentVolumeClaim test.csi.azure.comf2p5w found but phase is Pending instead of Bound. May 13 08:52:29.200: INFO: PersistentVolumeClaim test.csi.azure.comf2p5w found but phase is Pending instead of Bound. May 13 08:52:31.306: INFO: PersistentVolumeClaim test.csi.azure.comf2p5w found and phase=Bound (4.318904687s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-jz9b [1mSTEP[0m: Checking for subpath error in container status May 13 08:53:13.834: INFO: Deleting pod "pod-subpath-test-dynamicpv-jz9b" in namespace "provisioning-4524" May 13 08:53:13.942: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-jz9b" to be fully deleted [1mSTEP[0m: Deleting pod May 13 08:53:16.153: INFO: Deleting pod "pod-subpath-test-dynamicpv-jz9b" in namespace "provisioning-4524" [1mSTEP[0m: Deleting pvc May 13 08:53:16.258: INFO: Deleting PersistentVolumeClaim "test.csi.azure.comf2p5w" ... skipping 16 lines ... 
[32m• [SLOW TEST:91.822 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [90mtest/e2e/storage/testsuites/subpath.go:269[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]","total":38,"completed":9,"skipped":499,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (ext4)] multiVolume [Slow][0m [1mshould access to two volumes with different volume mode and retain data across pod recreation on different node[0m [37mtest/e2e/storage/testsuites/multivolume.go:248[0m ... skipping 207 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with different volume mode and retain data across pod recreation on different node [90mtest/e2e/storage/testsuites/multivolume.go:248[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node","total":33,"completed":10,"skipped":622,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource][0m [0mvolume snapshot controller[0m [90m[0m [1mshould check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)[0m [37mtest/e2e/storage/testsuites/snapshottable.go:278[0m ... skipping 17 lines ... May 13 08:52:24.429: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comhwxlc] to have phase Bound May 13 08:52:24.536: INFO: PersistentVolumeClaim test.csi.azure.comhwxlc found but phase is Pending instead of Bound. May 13 08:52:26.644: INFO: PersistentVolumeClaim test.csi.azure.comhwxlc found but phase is Pending instead of Bound. 
May 13 08:52:28.753: INFO: PersistentVolumeClaim test.csi.azure.comhwxlc found and phase=Bound (4.323488523s) [1mSTEP[0m: [init] starting a pod to use the claim [1mSTEP[0m: [init] check pod success May 13 08:52:29.190: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-fn9l7" in namespace "snapshotting-1951" to be "Succeeded or Failed" May 13 08:52:29.297: INFO: Pod "pvc-snapshottable-tester-fn9l7": Phase="Pending", Reason="", readiness=false. Elapsed: 106.973806ms May 13 08:52:31.406: INFO: Pod "pvc-snapshottable-tester-fn9l7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215972579s May 13 08:52:33.514: INFO: Pod "pvc-snapshottable-tester-fn9l7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324053052s May 13 08:52:35.622: INFO: Pod "pvc-snapshottable-tester-fn9l7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.431443868s May 13 08:52:37.730: INFO: Pod "pvc-snapshottable-tester-fn9l7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.539581217s May 13 08:52:39.836: INFO: Pod "pvc-snapshottable-tester-fn9l7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.645928789s ... skipping 4 lines ... May 13 08:52:50.378: INFO: Pod "pvc-snapshottable-tester-fn9l7": Phase="Pending", Reason="", readiness=false. Elapsed: 21.18818219s May 13 08:52:52.487: INFO: Pod "pvc-snapshottable-tester-fn9l7": Phase="Pending", Reason="", readiness=false. Elapsed: 23.297163339s May 13 08:52:54.596: INFO: Pod "pvc-snapshottable-tester-fn9l7": Phase="Pending", Reason="", readiness=false. Elapsed: 25.406347387s May 13 08:52:56.705: INFO: Pod "pvc-snapshottable-tester-fn9l7": Phase="Pending", Reason="", readiness=false. Elapsed: 27.514958338s May 13 08:52:58.812: INFO: Pod "pvc-snapshottable-tester-fn9l7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.622341185s [1mSTEP[0m: Saw pod success May 13 08:52:58.813: INFO: Pod "pvc-snapshottable-tester-fn9l7" satisfied condition "Succeeded or Failed" [1mSTEP[0m: [init] checking the claim May 13 08:52:58.919: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comhwxlc] to have phase Bound May 13 08:52:59.026: INFO: PersistentVolumeClaim test.csi.azure.comhwxlc found and phase=Bound (107.243329ms) [1mSTEP[0m: [init] checking the PV [1mSTEP[0m: [init] deleting the pod May 13 08:52:59.367: INFO: Pod pvc-snapshottable-tester-fn9l7 has the following logs: ... skipping 13 lines ... May 13 08:53:06.753: INFO: received snapshotStatus map[boundVolumeSnapshotContentName:snapcontent-21114d0a-0327-486b-aa4f-e0fbc0cf5216 creationTime:2022-05-13T08:53:02Z readyToUse:true restoreSize:5Gi] May 13 08:53:06.753: INFO: snapshotContentName snapcontent-21114d0a-0327-486b-aa4f-e0fbc0cf5216 [1mSTEP[0m: checking the snapshot [1mSTEP[0m: checking the SnapshotContent [1mSTEP[0m: Modifying source data test [1mSTEP[0m: modifying the data in the source PVC May 13 08:53:07.181: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-zgtz8" in namespace "snapshotting-1951" to be "Succeeded or Failed" May 13 08:53:07.287: INFO: Pod "pvc-snapshottable-data-tester-zgtz8": Phase="Pending", Reason="", readiness=false. Elapsed: 105.606159ms May 13 08:53:09.394: INFO: Pod "pvc-snapshottable-data-tester-zgtz8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212945274s May 13 08:53:11.507: INFO: Pod "pvc-snapshottable-data-tester-zgtz8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.326333858s May 13 08:53:13.614: INFO: Pod "pvc-snapshottable-data-tester-zgtz8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.43246462s May 13 08:53:15.729: INFO: Pod "pvc-snapshottable-data-tester-zgtz8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.54811132s May 13 08:53:17.836: INFO: Pod "pvc-snapshottable-data-tester-zgtz8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.655064684s ... skipping 18 lines ... May 13 08:53:57.884: INFO: Pod "pvc-snapshottable-data-tester-zgtz8": Phase="Pending", Reason="", readiness=false. Elapsed: 50.702480464s May 13 08:53:59.991: INFO: Pod "pvc-snapshottable-data-tester-zgtz8": Phase="Pending", Reason="", readiness=false. Elapsed: 52.809774075s May 13 08:54:02.097: INFO: Pod "pvc-snapshottable-data-tester-zgtz8": Phase="Pending", Reason="", readiness=false. Elapsed: 54.916335224s May 13 08:54:04.212: INFO: Pod "pvc-snapshottable-data-tester-zgtz8": Phase="Pending", Reason="", readiness=false. Elapsed: 57.030863178s May 13 08:54:06.319: INFO: Pod "pvc-snapshottable-data-tester-zgtz8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 59.137484769s [1mSTEP[0m: Saw pod success May 13 08:54:06.319: INFO: Pod "pvc-snapshottable-data-tester-zgtz8" satisfied condition "Succeeded or Failed" May 13 08:54:06.536: INFO: Pod pvc-snapshottable-data-tester-zgtz8 has the following logs: May 13 08:54:06.536: INFO: Deleting pod "pvc-snapshottable-data-tester-zgtz8" in namespace "snapshotting-1951" May 13 08:54:06.648: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-zgtz8" to be fully deleted [1mSTEP[0m: creating a pvc from the snapshot [1mSTEP[0m: starting a pod to use the snapshot May 13 08:55:53.191: INFO: Running '/usr/local/bin/kubectl --server=https://kubetest-mfxpbga4.westeurope.cloudapp.azure.com --kubeconfig=/root/tmp4042645124/kubeconfig/kubeconfig.westeurope.json --namespace=snapshotting-1951 exec restored-pvc-tester-2l5zv --namespace=snapshotting-1951 -- cat /mnt/test/data' ... skipping 47 lines ... 
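The restore step recorded above creates a new claim whose dataSource points at the snapshot taken earlier, then starts a pod and reads /mnt/test/data back with kubectl exec. A hedged sketch of the snapshot and the restored claim (names are placeholders; the suite generates its own, and the snapshot class actually used by the job is not shown in this excerpt):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: example-snapshot                # illustrative
spec:
  volumeSnapshotClassName: example-vsclass   # illustrative class name
  source:
    persistentVolumeClaimName: source-claim  # the claim whose data was just written
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-claim                  # illustrative
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: example-azuredisk-sc
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: example-snapshot              # restore from the snapshot above
  resources:
    requests:
      storage: 5Gi                      # at least the restoreSize reported in the snapshot status

The readyToUse and boundVolumeSnapshotContentName fields polled in the log come from the VolumeSnapshot status once the external snapshotter has cut the snapshot and bound it to a VolumeSnapshotContent.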
[90mtest/e2e/storage/testsuites/snapshottable.go:113[0m [90mtest/e2e/storage/testsuites/snapshottable.go:176[0m should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent) [90mtest/e2e/storage/testsuites/snapshottable.go:278[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)","total":34,"completed":10,"skipped":574,"failed":1,"failures":["External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node"]} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 May 13 08:56:39.301: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 24 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Inline-volume (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:242[0m [36mDriver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping[0m test/e2e/storage/external/external.go:262 [90m------------------------------[0m ... skipping 213 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with the same volume mode and retain data across pod recreation on different node [90mtest/e2e/storage/testsuites/multivolume.go:168[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node","total":38,"completed":11,"skipped":692,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 May 13 08:56:56.981: INFO: Distro debian doesn't support ntfs -- skipping ... skipping 107 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS] [90mtest/e2e/storage/testsuites/multivolume.go:378[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]","total":34,"completed":10,"skipped":375,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode test/e2e/storage/framework/testsuite.go:51 May 13 08:57:09.370: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 3 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to use a volume in a pod with mismatched mode [Slow] [BeforeEach][0m [90mtest/e2e/storage/testsuites/volumemode.go:299[0m [36mDriver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping[0m test/e2e/storage/external/external.go:262 [90m------------------------------[0m ... skipping 20 lines ... May 13 08:53:58.847: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.compwm4v] to have phase Bound May 13 08:53:58.955: INFO: PersistentVolumeClaim test.csi.azure.compwm4v found but phase is Pending instead of Bound. May 13 08:54:01.061: INFO: PersistentVolumeClaim test.csi.azure.compwm4v found but phase is Pending instead of Bound. May 13 08:54:03.167: INFO: PersistentVolumeClaim test.csi.azure.compwm4v found and phase=Bound (4.319966877s) [1mSTEP[0m: [init] starting a pod to use the claim [1mSTEP[0m: [init] check pod success May 13 08:54:03.590: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-twk4d" in namespace "snapshotting-8692" to be "Succeeded or Failed" May 13 08:54:03.695: INFO: Pod "pvc-snapshottable-tester-twk4d": Phase="Pending", Reason="", readiness=false. Elapsed: 104.955055ms May 13 08:54:05.802: INFO: Pod "pvc-snapshottable-tester-twk4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211345153s May 13 08:54:07.908: INFO: Pod "pvc-snapshottable-tester-twk4d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.317571909s May 13 08:54:10.017: INFO: Pod "pvc-snapshottable-tester-twk4d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.426247479s May 13 08:54:12.123: INFO: Pod "pvc-snapshottable-tester-twk4d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.532637731s May 13 08:54:14.230: INFO: Pod "pvc-snapshottable-tester-twk4d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.639621528s ... skipping 20 lines ... May 13 08:54:58.467: INFO: Pod "pvc-snapshottable-tester-twk4d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 54.876592563s May 13 08:55:00.573: INFO: Pod "pvc-snapshottable-tester-twk4d": Phase="Pending", Reason="", readiness=false. Elapsed: 56.982419402s May 13 08:55:02.680: INFO: Pod "pvc-snapshottable-tester-twk4d": Phase="Pending", Reason="", readiness=false. Elapsed: 59.089362519s May 13 08:55:04.787: INFO: Pod "pvc-snapshottable-tester-twk4d": Phase="Pending", Reason="", readiness=false. Elapsed: 1m1.196730614s May 13 08:55:06.894: INFO: Pod "pvc-snapshottable-tester-twk4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m3.303147821s [1mSTEP[0m: Saw pod success May 13 08:55:06.894: INFO: Pod "pvc-snapshottable-tester-twk4d" satisfied condition "Succeeded or Failed" [1mSTEP[0m: [init] checking the claim May 13 08:55:07.000: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.compwm4v] to have phase Bound May 13 08:55:07.105: INFO: PersistentVolumeClaim test.csi.azure.compwm4v found and phase=Bound (105.063091ms) [1mSTEP[0m: [init] checking the PV [1mSTEP[0m: [init] deleting the pod May 13 08:55:07.459: INFO: Pod pvc-snapshottable-tester-twk4d has the following logs: ... skipping 14 lines ... May 13 08:55:17.052: INFO: received snapshotStatus map[boundVolumeSnapshotContentName:snapcontent-a866a563-c780-4daa-af16-0e6b825ae2b1 creationTime:2022-05-13T08:55:13Z readyToUse:true restoreSize:5Gi] May 13 08:55:17.052: INFO: snapshotContentName snapcontent-a866a563-c780-4daa-af16-0e6b825ae2b1 [1mSTEP[0m: checking the snapshot [1mSTEP[0m: checking the SnapshotContent [1mSTEP[0m: Modifying source data test [1mSTEP[0m: modifying the data in the source PVC May 13 08:55:17.476: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-7gdtg" in namespace "snapshotting-8692" to be "Succeeded or Failed" May 13 08:55:17.582: INFO: Pod "pvc-snapshottable-data-tester-7gdtg": Phase="Pending", Reason="", readiness=false. Elapsed: 105.398815ms May 13 08:55:19.688: INFO: Pod "pvc-snapshottable-data-tester-7gdtg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211862501s May 13 08:55:21.795: INFO: Pod "pvc-snapshottable-data-tester-7gdtg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.318835674s May 13 08:55:23.901: INFO: Pod "pvc-snapshottable-data-tester-7gdtg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.424556748s May 13 08:55:26.008: INFO: Pod "pvc-snapshottable-data-tester-7gdtg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.53211812s May 13 08:55:28.114: INFO: Pod "pvc-snapshottable-data-tester-7gdtg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.637409165s ... skipping 5 lines ... May 13 08:55:40.761: INFO: Pod "pvc-snapshottable-data-tester-7gdtg": Phase="Pending", Reason="", readiness=false. Elapsed: 23.285220984s May 13 08:55:42.868: INFO: Pod "pvc-snapshottable-data-tester-7gdtg": Phase="Pending", Reason="", readiness=false. Elapsed: 25.391489706s May 13 08:55:44.973: INFO: Pod "pvc-snapshottable-data-tester-7gdtg": Phase="Pending", Reason="", readiness=false. Elapsed: 27.497063238s May 13 08:55:47.080: INFO: Pod "pvc-snapshottable-data-tester-7gdtg": Phase="Pending", Reason="", readiness=false. Elapsed: 29.604097051s May 13 08:55:49.188: INFO: Pod "pvc-snapshottable-data-tester-7gdtg": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 31.7117317s [1mSTEP[0m: Saw pod success May 13 08:55:49.188: INFO: Pod "pvc-snapshottable-data-tester-7gdtg" satisfied condition "Succeeded or Failed" May 13 08:55:49.403: INFO: Pod pvc-snapshottable-data-tester-7gdtg has the following logs: May 13 08:55:49.403: INFO: Deleting pod "pvc-snapshottable-data-tester-7gdtg" in namespace "snapshotting-8692" May 13 08:55:49.512: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-7gdtg" to be fully deleted [1mSTEP[0m: creating a pvc from the snapshot [1mSTEP[0m: starting a pod to use the snapshot May 13 08:56:50.056: INFO: Running '/usr/local/bin/kubectl --server=https://kubetest-mfxpbga4.westeurope.cloudapp.azure.com --kubeconfig=/root/tmp4042645124/kubeconfig/kubeconfig.westeurope.json --namespace=snapshotting-8692 exec restored-pvc-tester-299qz --namespace=snapshotting-8692 -- cat /mnt/test/data' ... skipping 33 lines ... May 13 08:57:16.233: INFO: volumesnapshotcontents snapcontent-a866a563-c780-4daa-af16-0e6b825ae2b1 has been found and is not deleted May 13 08:57:17.339: INFO: volumesnapshotcontents snapcontent-a866a563-c780-4daa-af16-0e6b825ae2b1 has been found and is not deleted May 13 08:57:18.446: INFO: volumesnapshotcontents snapcontent-a866a563-c780-4daa-af16-0e6b825ae2b1 has been found and is not deleted May 13 08:57:19.557: INFO: volumesnapshotcontents snapcontent-a866a563-c780-4daa-af16-0e6b825ae2b1 has been found and is not deleted May 13 08:57:20.668: INFO: volumesnapshotcontents snapcontent-a866a563-c780-4daa-af16-0e6b825ae2b1 has been found and is not deleted May 13 08:57:21.776: INFO: volumesnapshotcontents snapcontent-a866a563-c780-4daa-af16-0e6b825ae2b1 has been found and is not deleted May 13 08:57:22.777: INFO: WaitUntil failed after reaching the timeout 30s [AfterEach] volume snapshot controller test/e2e/storage/testsuites/snapshottable.go:172 May 13 08:57:22.884: INFO: Error getting logs for pod restored-pvc-tester-299qz: the server could not find the requested resource (get pods restored-pvc-tester-299qz) May 13 08:57:22.884: INFO: Deleting pod "restored-pvc-tester-299qz" in namespace "snapshotting-8692" May 13 08:57:22.989: INFO: deleting claim "snapshotting-8692"/"pvc-vkgk2" May 13 08:57:23.095: INFO: deleting snapshot "snapshotting-8692"/"snapshot-7hvkl" May 13 08:57:23.200: INFO: deleting snapshot content "snapcontent-a866a563-c780-4daa-af16-0e6b825ae2b1" May 13 08:57:23.521: INFO: Waiting up to 5m0s for volumesnapshotcontents snapcontent-a866a563-c780-4daa-af16-0e6b825ae2b1 to be deleted May 13 08:57:23.628: INFO: volumesnapshotcontents snapcontent-a866a563-c780-4daa-af16-0e6b825ae2b1 has been found and is not deleted ... skipping 27 lines ... 
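Annotation: the teardown above polls "volumesnapshotcontents snapcontent-a866a563-..." roughly once per second and gives up with "WaitUntil failed after reaching the timeout 30s". Below is a minimal sketch of that deletion wait, assuming a dynamic client (the snapshot API is CRD-backed); the package name e2eutil, the helper name, and the 1s/30s values mirror the log but are illustrative, not the e2e framework's own code.

package e2eutil

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/dynamic"
)

// GVR of the cluster-scoped VolumeSnapshotContent resource served by the CSI snapshot CRDs.
var volumeSnapshotContentGVR = schema.GroupVersionResource{
	Group:    "snapshot.storage.k8s.io",
	Version:  "v1",
	Resource: "volumesnapshotcontents",
}

// waitForSnapshotContentDeleted polls until the named VolumeSnapshotContent is
// gone or the timeout expires, matching the "has been found and is not deleted"
// loop seen in the log.
func waitForSnapshotContentDeleted(ctx context.Context, dc dynamic.Interface, name string, timeout time.Duration) error {
	err := wait.PollImmediate(time.Second, timeout, func() (bool, error) {
		_, err := dc.Resource(volumeSnapshotContentGVR).Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // deleted
		}
		if err != nil {
			return false, err // unexpected API error, stop polling
		}
		return false, nil // still present, keep polling
	})
	if err != nil {
		return fmt.Errorf("volumesnapshotcontent %s was not deleted: %w", name, err)
	}
	return nil
}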
[90mtest/e2e/storage/testsuites/snapshottable.go:113[0m [90mtest/e2e/storage/testsuites/snapshottable.go:176[0m should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent) [90mtest/e2e/storage/testsuites/snapshottable.go:278[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)","total":38,"completed":10,"skipped":531,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] volumes[0m [1mshould allow exec of files on the volume[0m [37mtest/e2e/storage/testsuites/volumes.go:198[0m ... skipping 17 lines ... May 13 08:56:57.936: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.com8bjcr] to have phase Bound May 13 08:56:58.041: INFO: PersistentVolumeClaim test.csi.azure.com8bjcr found but phase is Pending instead of Bound. May 13 08:57:00.146: INFO: PersistentVolumeClaim test.csi.azure.com8bjcr found but phase is Pending instead of Bound. May 13 08:57:02.253: INFO: PersistentVolumeClaim test.csi.azure.com8bjcr found and phase=Bound (4.317259996s) [1mSTEP[0m: Creating pod exec-volume-test-dynamicpv-wfmx [1mSTEP[0m: Creating a pod to test exec-volume-test May 13 08:57:02.570: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-wfmx" in namespace "volume-1804" to be "Succeeded or Failed" May 13 08:57:02.675: INFO: Pod "exec-volume-test-dynamicpv-wfmx": Phase="Pending", Reason="", readiness=false. Elapsed: 104.657201ms May 13 08:57:04.780: INFO: Pod "exec-volume-test-dynamicpv-wfmx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209893524s May 13 08:57:06.887: INFO: Pod "exec-volume-test-dynamicpv-wfmx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.316265628s May 13 08:57:08.993: INFO: Pod "exec-volume-test-dynamicpv-wfmx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.422893656s May 13 08:57:11.099: INFO: Pod "exec-volume-test-dynamicpv-wfmx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.528719281s May 13 08:57:13.206: INFO: Pod "exec-volume-test-dynamicpv-wfmx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.635420525s May 13 08:57:15.312: INFO: Pod "exec-volume-test-dynamicpv-wfmx": Phase="Pending", Reason="", readiness=false. Elapsed: 12.741260186s May 13 08:57:17.417: INFO: Pod "exec-volume-test-dynamicpv-wfmx": Phase="Pending", Reason="", readiness=false. Elapsed: 14.846295808s May 13 08:57:19.523: INFO: Pod "exec-volume-test-dynamicpv-wfmx": Phase="Pending", Reason="", readiness=false. Elapsed: 16.952329642s May 13 08:57:21.630: INFO: Pod "exec-volume-test-dynamicpv-wfmx": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 19.059603802s [1mSTEP[0m: Saw pod success May 13 08:57:21.630: INFO: Pod "exec-volume-test-dynamicpv-wfmx" satisfied condition "Succeeded or Failed" May 13 08:57:21.736: INFO: Trying to get logs from node k8s-agentpool1-42137015-vmss000000 pod exec-volume-test-dynamicpv-wfmx container exec-container-dynamicpv-wfmx: <nil> [1mSTEP[0m: delete the pod May 13 08:57:21.955: INFO: Waiting for pod exec-volume-test-dynamicpv-wfmx to disappear May 13 08:57:22.060: INFO: Pod exec-volume-test-dynamicpv-wfmx no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-dynamicpv-wfmx May 13 08:57:22.061: INFO: Deleting pod "exec-volume-test-dynamicpv-wfmx" in namespace "volume-1804" ... skipping 21 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] volumes [90mtest/e2e/storage/framework/testsuite.go:50[0m should allow exec of files on the volume [90mtest/e2e/storage/testsuites/volumes.go:198[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":38,"completed":12,"skipped":702,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath test/e2e/storage/framework/testsuite.go:51 May 13 08:58:03.795: INFO: Distro debian doesn't support ntfs -- skipping ... skipping 140 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:280[0m [36mDistro debian doesn't support ntfs -- skipping[0m test/e2e/storage/framework/testsuite.go:127 [90m------------------------------[0m ... skipping 5 lines ... test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client May 13 08:58:05.910: INFO: >>> kubeConfig: /root/tmp4042645124/kubeconfig/kubeconfig.westeurope.json [1mSTEP[0m: Building a namespace api object, basename topology [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies test/e2e/storage/testsuites/topology.go:194 May 13 08:58:06.645: INFO: Driver didn't provide topology keys -- skipping [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology test/e2e/framework/framework.go:188 May 13 08:58:06.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "topology-6981" for this suite. 
[36m[1mS [SKIPPING] [0.948 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (delayed binding)] topology [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [Measurement][0m [90mtest/e2e/storage/testsuites/topology.go:194[0m [36mDriver didn't provide topology keys -- skipping[0m test/e2e/storage/testsuites/topology.go:126 [90m------------------------------[0m ... skipping 50 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral [90mtest/e2e/storage/framework/testsuite.go:50[0m should create read/write inline ephemeral volume [90mtest/e2e/storage/testsuites/ephemeral.go:196[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume","total":34,"completed":11,"skipped":735,"failed":1,"failures":["External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node"]} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] ... skipping 76 lines ... May 13 08:57:10.327: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comjwnsj] to have phase Bound May 13 08:57:10.433: INFO: PersistentVolumeClaim test.csi.azure.comjwnsj found but phase is Pending instead of Bound. May 13 08:57:12.540: INFO: PersistentVolumeClaim test.csi.azure.comjwnsj found but phase is Pending instead of Bound. May 13 08:57:14.645: INFO: PersistentVolumeClaim test.csi.azure.comjwnsj found and phase=Bound (4.317869955s) [1mSTEP[0m: [init] starting a pod to use the claim [1mSTEP[0m: [init] check pod success May 13 08:57:15.070: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-gjxkp" in namespace "snapshotting-7038" to be "Succeeded or Failed" May 13 08:57:15.177: INFO: Pod "pvc-snapshottable-tester-gjxkp": Phase="Pending", Reason="", readiness=false. Elapsed: 106.638602ms May 13 08:57:17.283: INFO: Pod "pvc-snapshottable-tester-gjxkp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213294161s May 13 08:57:19.389: INFO: Pod "pvc-snapshottable-tester-gjxkp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319215073s May 13 08:57:21.497: INFO: Pod "pvc-snapshottable-tester-gjxkp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.426908934s May 13 08:57:23.604: INFO: Pod "pvc-snapshottable-tester-gjxkp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.533827944s May 13 08:57:25.711: INFO: Pod "pvc-snapshottable-tester-gjxkp": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.641070961s May 13 08:57:27.817: INFO: Pod "pvc-snapshottable-tester-gjxkp": Phase="Pending", Reason="", readiness=false. Elapsed: 12.747243838s May 13 08:57:29.925: INFO: Pod "pvc-snapshottable-tester-gjxkp": Phase="Pending", Reason="", readiness=false. Elapsed: 14.854759749s May 13 08:57:32.033: INFO: Pod "pvc-snapshottable-tester-gjxkp": Phase="Pending", Reason="", readiness=false. Elapsed: 16.962519357s May 13 08:57:34.138: INFO: Pod "pvc-snapshottable-tester-gjxkp": Phase="Pending", Reason="", readiness=false. Elapsed: 19.068185283s May 13 08:57:36.246: INFO: Pod "pvc-snapshottable-tester-gjxkp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.175455412s [1mSTEP[0m: Saw pod success May 13 08:57:36.246: INFO: Pod "pvc-snapshottable-tester-gjxkp" satisfied condition "Succeeded or Failed" [1mSTEP[0m: [init] checking the claim May 13 08:57:36.352: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comjwnsj] to have phase Bound May 13 08:57:36.457: INFO: PersistentVolumeClaim test.csi.azure.comjwnsj found and phase=Bound (105.029831ms) [1mSTEP[0m: [init] checking the PV [1mSTEP[0m: [init] deleting the pod May 13 08:57:36.798: INFO: Pod pvc-snapshottable-tester-gjxkp has the following logs: ... skipping 35 lines ... May 13 08:57:49.564: INFO: WaitUntil finished successfully after 106.029314ms [1mSTEP[0m: getting the snapshot and snapshot content [1mSTEP[0m: checking the snapshot [1mSTEP[0m: checking the SnapshotContent [1mSTEP[0m: Modifying source data test [1mSTEP[0m: modifying the data in the source PVC May 13 08:57:50.095: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-5954r" in namespace "snapshotting-7038" to be "Succeeded or Failed" May 13 08:57:50.200: INFO: Pod "pvc-snapshottable-data-tester-5954r": Phase="Pending", Reason="", readiness=false. Elapsed: 105.477527ms May 13 08:57:52.307: INFO: Pod "pvc-snapshottable-data-tester-5954r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212213872s May 13 08:57:54.413: INFO: Pod "pvc-snapshottable-data-tester-5954r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.318370674s May 13 08:57:56.519: INFO: Pod "pvc-snapshottable-data-tester-5954r": Phase="Pending", Reason="", readiness=false. Elapsed: 6.424410128s May 13 08:57:58.625: INFO: Pod "pvc-snapshottable-data-tester-5954r": Phase="Pending", Reason="", readiness=false. Elapsed: 8.530306878s May 13 08:58:00.731: INFO: Pod "pvc-snapshottable-data-tester-5954r": Phase="Pending", Reason="", readiness=false. Elapsed: 10.635904462s ... skipping 6 lines ... May 13 08:58:15.478: INFO: Pod "pvc-snapshottable-data-tester-5954r": Phase="Pending", Reason="", readiness=false. Elapsed: 25.382749496s May 13 08:58:17.585: INFO: Pod "pvc-snapshottable-data-tester-5954r": Phase="Pending", Reason="", readiness=false. Elapsed: 27.490067488s May 13 08:58:19.691: INFO: Pod "pvc-snapshottable-data-tester-5954r": Phase="Pending", Reason="", readiness=false. Elapsed: 29.596240385s May 13 08:58:21.798: INFO: Pod "pvc-snapshottable-data-tester-5954r": Phase="Running", Reason="", readiness=false. Elapsed: 31.70267441s May 13 08:58:23.904: INFO: Pod "pvc-snapshottable-data-tester-5954r": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 33.808497907s [1mSTEP[0m: Saw pod success May 13 08:58:23.904: INFO: Pod "pvc-snapshottable-data-tester-5954r" satisfied condition "Succeeded or Failed" May 13 08:58:24.119: INFO: Pod pvc-snapshottable-data-tester-5954r has the following logs: May 13 08:58:24.119: INFO: Deleting pod "pvc-snapshottable-data-tester-5954r" in namespace "snapshotting-7038" May 13 08:58:24.232: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-5954r" to be fully deleted [1mSTEP[0m: creating a pvc from the snapshot [1mSTEP[0m: starting a pod to use the snapshot May 13 08:59:40.764: INFO: Running '/usr/local/bin/kubectl --server=https://kubetest-mfxpbga4.westeurope.cloudapp.azure.com --kubeconfig=/root/tmp4042645124/kubeconfig/kubeconfig.westeurope.json --namespace=snapshotting-7038 exec restored-pvc-tester-szh46 --namespace=snapshotting-7038 -- cat /mnt/test/data' ... skipping 33 lines ... May 13 09:00:07.037: INFO: volumesnapshotcontents pre-provisioned-snapcontent-cf92990c-3a40-49ca-a5d8-e2a0229ab058 has been found and is not deleted May 13 09:00:08.144: INFO: volumesnapshotcontents pre-provisioned-snapcontent-cf92990c-3a40-49ca-a5d8-e2a0229ab058 has been found and is not deleted May 13 09:00:09.251: INFO: volumesnapshotcontents pre-provisioned-snapcontent-cf92990c-3a40-49ca-a5d8-e2a0229ab058 has been found and is not deleted May 13 09:00:10.358: INFO: volumesnapshotcontents pre-provisioned-snapcontent-cf92990c-3a40-49ca-a5d8-e2a0229ab058 has been found and is not deleted May 13 09:00:11.464: INFO: volumesnapshotcontents pre-provisioned-snapcontent-cf92990c-3a40-49ca-a5d8-e2a0229ab058 has been found and is not deleted May 13 09:00:12.570: INFO: volumesnapshotcontents pre-provisioned-snapcontent-cf92990c-3a40-49ca-a5d8-e2a0229ab058 has been found and is not deleted May 13 09:00:13.571: INFO: WaitUntil failed after reaching the timeout 30s [AfterEach] volume snapshot controller test/e2e/storage/testsuites/snapshottable.go:172 May 13 09:00:13.676: INFO: Error getting logs for pod restored-pvc-tester-szh46: the server could not find the requested resource (get pods restored-pvc-tester-szh46) May 13 09:00:13.676: INFO: Deleting pod "restored-pvc-tester-szh46" in namespace "snapshotting-7038" May 13 09:00:13.781: INFO: deleting claim "snapshotting-7038"/"pvc-c68x7" May 13 09:00:13.887: INFO: deleting snapshot "snapshotting-7038"/"pre-provisioned-snapshot-cf92990c-3a40-49ca-a5d8-e2a0229ab058" May 13 09:00:13.994: INFO: deleting snapshot content "pre-provisioned-snapcontent-cf92990c-3a40-49ca-a5d8-e2a0229ab058" May 13 09:00:14.317: INFO: Waiting up to 5m0s for volumesnapshotcontents pre-provisioned-snapcontent-cf92990c-3a40-49ca-a5d8-e2a0229ab058 to be deleted May 13 09:00:14.423: INFO: volumesnapshotcontents pre-provisioned-snapcontent-cf92990c-3a40-49ca-a5d8-e2a0229ab058 has been found and is not deleted ... skipping 27 lines ... 
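Annotation: each 'Waiting up to 15m0s for pod ... to be "Succeeded or Failed"' block above is the same two-second polling loop against the pod's phase. A rough, self-contained sketch of that pattern with client-go follows; the clientset wiring, the e2eutil package name, and the helper name are assumptions, not the e2e framework's actual API.

package e2eutil

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodSuccess polls the pod about every 2s (as in the log) until it
// reaches Succeeded, and fails early if it reaches Failed or the timeout expires.
func waitForPodSuccess(ctx context.Context, cs kubernetes.Interface, ns, pod string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		p, err := cs.CoreV1().Pods(ns).Get(ctx, pod, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch p.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %s/%s failed: %s", ns, pod, p.Status.Reason)
		default:
			return false, nil // Pending or Running: keep waiting
		}
	})
}

A watch-based wait would avoid the fixed 2s interval, but simple polling is what produces the "Phase=... Elapsed: ..." cadence recorded in this log.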
[90mtest/e2e/storage/testsuites/snapshottable.go:113[0m [90mtest/e2e/storage/testsuites/snapshottable.go:176[0m should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent) [90mtest/e2e/storage/testsuites/snapshottable.go:278[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)","total":34,"completed":11,"skipped":397,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode test/e2e/storage/framework/testsuite.go:51 May 13 09:00:32.039: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 164 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the single read-only volume from pods on the same node [90mtest/e2e/storage/testsuites/multivolume.go:423[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node","total":38,"completed":11,"skipped":545,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 May 13 09:00:50.668: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 123 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral [90mtest/e2e/storage/framework/testsuite.go:50[0m should support two pods which have the same volume definition [90mtest/e2e/storage/testsuites/ephemeral.go:216[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition","total":33,"completed":11,"skipped":692,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning ... skipping 231 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with different volume mode and retain data across pod recreation on different node [90mtest/e2e/storage/testsuites/multivolume.go:248[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node","total":38,"completed":13,"skipped":871,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 May 13 09:02:57.306: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 167 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS] [90mtest/e2e/storage/testsuites/multivolume.go:378[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]","total":34,"completed":12,"skipped":794,"failed":1,"failures":["External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node"]} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] ... skipping 176 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should be able to unmount after the subpath directory is deleted [LinuxOnly] [90mtest/e2e/storage/testsuites/subpath.go:447[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":34,"completed":12,"skipped":632,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] subPath[0m [1mshould support readOnly file specified in the volumeMount [LinuxOnly][0m [37mtest/e2e/storage/testsuites/subpath.go:382[0m ... skipping 19 lines ... May 13 09:01:33.947: INFO: PersistentVolumeClaim test.csi.azure.comznxr7 found but phase is Pending instead of Bound. May 13 09:01:36.054: INFO: PersistentVolumeClaim test.csi.azure.comznxr7 found but phase is Pending instead of Bound. May 13 09:01:38.160: INFO: PersistentVolumeClaim test.csi.azure.comznxr7 found but phase is Pending instead of Bound. May 13 09:01:40.266: INFO: PersistentVolumeClaim test.csi.azure.comznxr7 found and phase=Bound (8.530338879s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-7lrn [1mSTEP[0m: Creating a pod to test subpath May 13 09:01:40.584: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-7lrn" in namespace "provisioning-1870" to be "Succeeded or Failed" May 13 09:01:40.689: INFO: Pod "pod-subpath-test-dynamicpv-7lrn": Phase="Pending", Reason="", readiness=false. Elapsed: 105.530617ms May 13 09:01:42.795: INFO: Pod "pod-subpath-test-dynamicpv-7lrn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211401s May 13 09:01:44.902: INFO: Pod "pod-subpath-test-dynamicpv-7lrn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.318220113s May 13 09:01:47.008: INFO: Pod "pod-subpath-test-dynamicpv-7lrn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.42416204s May 13 09:01:49.115: INFO: Pod "pod-subpath-test-dynamicpv-7lrn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.531384313s May 13 09:01:51.221: INFO: Pod "pod-subpath-test-dynamicpv-7lrn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.637115345s ... skipping 14 lines ... May 13 09:02:22.833: INFO: Pod "pod-subpath-test-dynamicpv-7lrn": Phase="Pending", Reason="", readiness=false. Elapsed: 42.249098424s May 13 09:02:24.939: INFO: Pod "pod-subpath-test-dynamicpv-7lrn": Phase="Pending", Reason="", readiness=false. Elapsed: 44.35500545s May 13 09:02:27.047: INFO: Pod "pod-subpath-test-dynamicpv-7lrn": Phase="Pending", Reason="", readiness=false. Elapsed: 46.462882492s May 13 09:02:29.153: INFO: Pod "pod-subpath-test-dynamicpv-7lrn": Phase="Pending", Reason="", readiness=false. Elapsed: 48.569263884s May 13 09:02:31.259: INFO: Pod "pod-subpath-test-dynamicpv-7lrn": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 50.675024858s [1mSTEP[0m: Saw pod success May 13 09:02:31.259: INFO: Pod "pod-subpath-test-dynamicpv-7lrn" satisfied condition "Succeeded or Failed" May 13 09:02:31.364: INFO: Trying to get logs from node k8s-agentpool1-42137015-vmss000001 pod pod-subpath-test-dynamicpv-7lrn container test-container-subpath-dynamicpv-7lrn: <nil> [1mSTEP[0m: delete the pod May 13 09:02:31.609: INFO: Waiting for pod pod-subpath-test-dynamicpv-7lrn to disappear May 13 09:02:31.713: INFO: Pod pod-subpath-test-dynamicpv-7lrn no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-7lrn May 13 09:02:31.713: INFO: Deleting pod "pod-subpath-test-dynamicpv-7lrn" in namespace "provisioning-1870" ... skipping 33 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should support readOnly file specified in the volumeMount [LinuxOnly] [90mtest/e2e/storage/testsuites/subpath.go:382[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":33,"completed":12,"skipped":720,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 May 13 09:04:04.706: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 45 lines ... 
test/e2e/storage/testsuites/topology.go:126 [90m------------------------------[0m [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] subPath[0m [1mshould fail if subpath file is outside the volume [Slow][LinuxOnly][0m [37mtest/e2e/storage/testsuites/subpath.go:258[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client May 13 09:04:05.734: INFO: >>> kubeConfig: /root/tmp4042645124/kubeconfig/kubeconfig.westeurope.json [1mSTEP[0m: Building a namespace api object, basename provisioning [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail if subpath file is outside the volume [Slow][LinuxOnly] test/e2e/storage/testsuites/subpath.go:258 May 13 09:04:06.483: INFO: Creating resource for dynamic PV May 13 09:04:06.483: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(test.csi.azure.com) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass provisioning-8960-e2e-sclj64g [1mSTEP[0m: creating a claim May 13 09:04:06.593: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil May 13 09:04:06.703: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comt999n] to have phase Bound May 13 09:04:06.810: INFO: PersistentVolumeClaim test.csi.azure.comt999n found but phase is Pending instead of Bound. May 13 09:04:08.918: INFO: PersistentVolumeClaim test.csi.azure.comt999n found but phase is Pending instead of Bound. May 13 09:04:11.027: INFO: PersistentVolumeClaim test.csi.azure.comt999n found but phase is Pending instead of Bound. May 13 09:04:13.136: INFO: PersistentVolumeClaim test.csi.azure.comt999n found and phase=Bound (6.432843323s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-8cqj [1mSTEP[0m: Checking for subpath error in container status May 13 09:04:29.674: INFO: Deleting pod "pod-subpath-test-dynamicpv-8cqj" in namespace "provisioning-8960" May 13 09:04:29.786: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-8cqj" to be fully deleted [1mSTEP[0m: Deleting pod May 13 09:04:32.001: INFO: Deleting pod "pod-subpath-test-dynamicpv-8cqj" in namespace "provisioning-8960" [1mSTEP[0m: Deleting pvc May 13 09:04:32.108: INFO: Deleting PersistentVolumeClaim "test.csi.azure.comt999n" ... skipping 12 lines ... 
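Annotation: the "Checking for subpath error in container status" step in the run above inspects the test pod's container statuses rather than its logs. One plausible stand-alone version of that check is sketched below; the waiting-state reason and message it matches are assumptions, not the framework's exact condition.

package e2eutil

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hasSubpathContainerError reports whether any container in the pod is stuck in
// a waiting state whose reason or message suggests a subPath mount failure.
func hasSubpathContainerError(ctx context.Context, cs kubernetes.Interface, ns, pod string) (bool, error) {
	p, err := cs.CoreV1().Pods(ns).Get(ctx, pod, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, s := range p.Status.ContainerStatuses {
		w := s.State.Waiting
		if w == nil {
			continue // container not in a waiting state
		}
		if strings.Contains(w.Reason, "CreateContainerConfigError") ||
			strings.Contains(strings.ToLower(w.Message), "subpath") {
			return true, nil
		}
	}
	return false, nil
}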
[32m• [SLOW TEST:47.565 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should fail if subpath file is outside the volume [Slow][LinuxOnly] [90mtest/e2e/storage/testsuites/subpath.go:258[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]","total":33,"completed":13,"skipped":922,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 May 13 09:04:53.309: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 135 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS] [90mtest/e2e/storage/testsuites/multivolume.go:378[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]","total":38,"completed":14,"skipped":1014,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 May 13 09:06:20.548: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 195 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the single volume from pods on the same node [90mtest/e2e/storage/testsuites/multivolume.go:298[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node","total":34,"completed":13,"skipped":968,"failed":1,"failures":["External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node"]} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m May 13 09:06:27.700: INFO: Running AfterSuite actions on all nodes May 13 09:06:27.700: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 May 13 09:06:27.700: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 ... 
skipping 103 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy [90mtest/e2e/storage/framework/testsuite.go:50[0m (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents [90mtest/e2e/storage/testsuites/fsgroupchangepolicy.go:216[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents","total":33,"completed":14,"skipped":946,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 May 13 09:07:00.339: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 228 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with different volume mode and retain data across pod recreation on the same node [90mtest/e2e/storage/testsuites/multivolume.go:209[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node","total":34,"completed":13,"skipped":677,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] test/e2e/storage/framework/testsuite.go:51 May 13 09:07:00.891: INFO: Driver test.csi.azure.com doesn't specify snapshot stress test options -- skipping ... skipping 110 lines ... May 13 09:03:25.695: INFO: PersistentVolumeClaim test.csi.azure.commzgt4 found but phase is Pending instead of Bound. May 13 09:03:27.805: INFO: PersistentVolumeClaim test.csi.azure.commzgt4 found but phase is Pending instead of Bound. May 13 09:03:29.912: INFO: PersistentVolumeClaim test.csi.azure.commzgt4 found but phase is Pending instead of Bound. May 13 09:03:32.021: INFO: PersistentVolumeClaim test.csi.azure.commzgt4 found and phase=Bound (2m40.33521738s) [1mSTEP[0m: [init] starting a pod to use the claim [1mSTEP[0m: [init] check pod success May 13 09:03:32.453: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-482ft" in namespace "snapshotting-7330" to be "Succeeded or Failed" May 13 09:03:32.561: INFO: Pod "pvc-snapshottable-tester-482ft": Phase="Pending", Reason="", readiness=false. Elapsed: 107.739637ms May 13 09:03:34.670: INFO: Pod "pvc-snapshottable-tester-482ft": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216726562s May 13 09:03:36.779: INFO: Pod "pvc-snapshottable-tester-482ft": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.326032063s May 13 09:03:38.888: INFO: Pod "pvc-snapshottable-tester-482ft": Phase="Pending", Reason="", readiness=false. Elapsed: 6.434205646s May 13 09:03:40.997: INFO: Pod "pvc-snapshottable-tester-482ft": Phase="Pending", Reason="", readiness=false. Elapsed: 8.543100547s May 13 09:03:43.108: INFO: Pod "pvc-snapshottable-tester-482ft": Phase="Pending", Reason="", readiness=false. Elapsed: 10.655000352s May 13 09:03:45.216: INFO: Pod "pvc-snapshottable-tester-482ft": Phase="Pending", Reason="", readiness=false. Elapsed: 12.76275262s May 13 09:03:47.333: INFO: Pod "pvc-snapshottable-tester-482ft": Phase="Pending", Reason="", readiness=false. Elapsed: 14.880016815s May 13 09:03:49.442: INFO: Pod "pvc-snapshottable-tester-482ft": Phase="Pending", Reason="", readiness=false. Elapsed: 16.988142226s May 13 09:03:51.551: INFO: Pod "pvc-snapshottable-tester-482ft": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.097464066s [1mSTEP[0m: Saw pod success May 13 09:03:51.551: INFO: Pod "pvc-snapshottable-tester-482ft" satisfied condition "Succeeded or Failed" [1mSTEP[0m: [init] checking the claim May 13 09:03:51.659: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.commzgt4] to have phase Bound May 13 09:03:51.766: INFO: PersistentVolumeClaim test.csi.azure.commzgt4 found and phase=Bound (107.351553ms) [1mSTEP[0m: [init] checking the PV [1mSTEP[0m: [init] deleting the pod May 13 09:03:52.139: INFO: Pod pvc-snapshottable-tester-482ft has the following logs: ... skipping 41 lines ... May 13 09:04:17.293: INFO: WaitUntil finished successfully after 107.968486ms [1mSTEP[0m: getting the snapshot and snapshot content [1mSTEP[0m: checking the snapshot [1mSTEP[0m: checking the SnapshotContent [1mSTEP[0m: Modifying source data test [1mSTEP[0m: modifying the data in the source PVC May 13 09:04:17.840: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-m88kf" in namespace "snapshotting-7330" to be "Succeeded or Failed" May 13 09:04:17.947: INFO: Pod "pvc-snapshottable-data-tester-m88kf": Phase="Pending", Reason="", readiness=false. Elapsed: 107.30061ms May 13 09:04:20.055: INFO: Pod "pvc-snapshottable-data-tester-m88kf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214700315s May 13 09:04:22.163: INFO: Pod "pvc-snapshottable-data-tester-m88kf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323055316s May 13 09:04:24.272: INFO: Pod "pvc-snapshottable-data-tester-m88kf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.431590094s May 13 09:04:26.379: INFO: Pod "pvc-snapshottable-data-tester-m88kf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.53918355s May 13 09:04:28.486: INFO: Pod "pvc-snapshottable-data-tester-m88kf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.6464702s ... skipping 8 lines ... May 13 09:04:47.466: INFO: Pod "pvc-snapshottable-data-tester-m88kf": Phase="Pending", Reason="", readiness=false. Elapsed: 29.625596153s May 13 09:04:49.573: INFO: Pod "pvc-snapshottable-data-tester-m88kf": Phase="Pending", Reason="", readiness=false. Elapsed: 31.732969445s May 13 09:04:51.681: INFO: Pod "pvc-snapshottable-data-tester-m88kf": Phase="Pending", Reason="", readiness=false. Elapsed: 33.840833026s May 13 09:04:53.788: INFO: Pod "pvc-snapshottable-data-tester-m88kf": Phase="Pending", Reason="", readiness=false. Elapsed: 35.948551602s May 13 09:04:55.897: INFO: Pod "pvc-snapshottable-data-tester-m88kf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 38.057038134s [1mSTEP[0m: Saw pod success May 13 09:04:55.897: INFO: Pod "pvc-snapshottable-data-tester-m88kf" satisfied condition "Succeeded or Failed" May 13 09:04:56.351: INFO: Pod pvc-snapshottable-data-tester-m88kf has the following logs: May 13 09:04:56.351: INFO: Deleting pod "pvc-snapshottable-data-tester-m88kf" in namespace "snapshotting-7330" May 13 09:04:56.465: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-m88kf" to be fully deleted [1mSTEP[0m: creating a pvc from the snapshot [1mSTEP[0m: starting a pod to use the snapshot May 13 09:06:15.009: INFO: Running '/usr/local/bin/kubectl --server=https://kubetest-mfxpbga4.westeurope.cloudapp.azure.com --kubeconfig=/root/tmp4042645124/kubeconfig/kubeconfig.westeurope.json --namespace=snapshotting-7330 exec restored-pvc-tester-x8kzk --namespace=snapshotting-7330 -- cat /mnt/test/data' ... skipping 47 lines ... [90mtest/e2e/storage/testsuites/snapshottable.go:113[0m [90mtest/e2e/storage/testsuites/snapshottable.go:176[0m should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent) [90mtest/e2e/storage/testsuites/snapshottable.go:278[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)","total":38,"completed":12,"skipped":657,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 May 13 09:07:01.151: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 3 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:258[0m [36mDriver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping[0m test/e2e/storage/external/external.go:262 [90m------------------------------[0m ... skipping 8 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:258[0m [36mDistro debian doesn't support ntfs -- skipping[0m test/e2e/storage/framework/testsuite.go:127 [90m------------------------------[0m ... skipping 52 lines ... 
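Annotation: the restore verification above shells out to kubectl (with the --server, --kubeconfig and --namespace flags shown in the log) and cats /mnt/test/data from the pod that mounted the PVC created from the snapshot. A hedged equivalent using os/exec follows; the expected file content passed as want is an assumption.

package e2eutil

import (
	"fmt"
	"os/exec"
	"strings"
)

// verifyRestoredData runs `kubectl exec <pod> -- cat /mnt/test/data` against the
// given kubeconfig and namespace and compares the output with want.
func verifyRestoredData(kubeconfig, namespace, pod, want string) error {
	out, err := exec.Command("kubectl",
		"--kubeconfig", kubeconfig,
		"--namespace", namespace,
		"exec", pod, "--",
		"cat", "/mnt/test/data",
	).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl exec failed: %v\n%s", err, out)
	}
	if got := strings.TrimSpace(string(out)); got != want {
		return fmt.Errorf("restored volume has %q, want %q", got, want)
	}
	return nil
}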
[1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should check snapshot fields, check restore correctly works, check deletion (ephemeral) test/e2e/storage/testsuites/snapshottable.go:177 May 13 09:07:01.661: INFO: volume type "DynamicPV" is not ephemeral [AfterEach] volume snapshot controller test/e2e/storage/testsuites/snapshottable.go:172 May 13 09:07:01.769: INFO: Error getting logs for pod restored-pvc-tester-szh46: the server could not find the requested resource (get pods restored-pvc-tester-szh46) May 13 09:07:01.769: INFO: Deleting pod "restored-pvc-tester-szh46" in namespace "snapshotting-7038" May 13 09:07:01.876: INFO: deleting claim "snapshotting-7038"/"pvc-c68x7" May 13 09:07:01.983: INFO: deleting snapshot content "pre-provisioned-snapcontent-cf92990c-3a40-49ca-a5d8-e2a0229ab058" May 13 09:07:02.090: INFO: deleting snapshot class "snapshotting-7038x7cmc" May 13 09:07:02.197: INFO: Waiting up to 5m0s for volumesnapshotclasses snapshotting-7038x7cmc to be deleted May 13 09:07:02.304: INFO: volumesnapshotclasses snapshotting-7038x7cmc is not found and has been deleted ... skipping 80 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] volumeIO [90mtest/e2e/storage/framework/testsuite.go:50[0m should write files of various sizes, verify size, validate content [Slow] [90mtest/e2e/storage/testsuites/volume_io.go:149[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]","total":34,"completed":14,"skipped":783,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 May 13 09:08:21.098: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping ... skipping 128 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should support restarting containers using directory as subpath [Slow] [90mtest/e2e/storage/testsuites/subpath.go:322[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]","total":38,"completed":13,"skipped":827,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (ext4)] multiVolume [Slow][0m [1mshould concurrently access the single volume from pods on the same node[0m [37mtest/e2e/storage/testsuites/multivolume.go:298[0m ... skipping 148 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the single volume from pods on the same node [90mtest/e2e/storage/testsuites/multivolume.go:298[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node","total":34,"completed":15,"skipped":953,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (block volmode)] volumeMode[0m [1mshould fail to use a volume in a pod with mismatched mode [Slow][0m [37mtest/e2e/storage/testsuites/volumemode.go:299[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client May 13 09:10:01.067: INFO: >>> kubeConfig: /root/tmp4042645124/kubeconfig/kubeconfig.westeurope.json [1mSTEP[0m: Building a namespace api object, basename volumemode [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail to use a volume in a pod with mismatched mode [Slow] test/e2e/storage/testsuites/volumemode.go:299 May 13 09:10:01.816: INFO: Creating resource for dynamic PV May 13 09:10:01.816: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(test.csi.azure.com) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass volumemode-6563-e2e-scz8g6r [1mSTEP[0m: creating a claim May 13 09:10:02.033: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.coml4wq9] to have phase Bound May 13 09:10:02.141: INFO: PersistentVolumeClaim test.csi.azure.coml4wq9 found but phase is Pending instead of Bound. May 13 09:10:04.251: INFO: PersistentVolumeClaim test.csi.azure.coml4wq9 found but phase is Pending instead of Bound. May 13 09:10:06.360: INFO: PersistentVolumeClaim test.csi.azure.coml4wq9 found and phase=Bound (4.326300559s) [1mSTEP[0m: Creating pod [1mSTEP[0m: Waiting for the pod to fail May 13 09:10:09.011: INFO: Deleting pod "pod-1f03bef6-d52b-4db6-b557-3e204433c3e1" in namespace "volumemode-6563" May 13 09:10:09.121: INFO: Wait up to 5m0s for pod "pod-1f03bef6-d52b-4db6-b557-3e204433c3e1" to be fully deleted [1mSTEP[0m: Deleting pvc May 13 09:10:11.335: INFO: Deleting PersistentVolumeClaim "test.csi.azure.coml4wq9" May 13 09:10:11.444: INFO: Waiting up to 5m0s for PersistentVolume pvc-e467c742-e267-43dc-b55d-6227bb601224 to get deleted May 13 09:10:11.551: INFO: PersistentVolume pvc-e467c742-e267-43dc-b55d-6227bb601224 found and phase=Released (107.529756ms) ... skipping 20 lines ... 
[32m• [SLOW TEST:82.547 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] volumeMode [90mtest/e2e/storage/framework/testsuite.go:50[0m should fail to use a volume in a pod with mismatched mode [Slow] [90mtest/e2e/storage/testsuites/volumemode.go:299[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]","total":34,"completed":16,"skipped":993,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral[0m [1mshould support two pods which have the same volume definition[0m [37mtest/e2e/storage/testsuites/ephemeral.go:216[0m ... skipping 63 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral [90mtest/e2e/storage/framework/testsuite.go:50[0m should support two pods which have the same volume definition [90mtest/e2e/storage/testsuites/ephemeral.go:216[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition","total":38,"completed":14,"skipped":849,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 May 13 09:13:06.331: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 216 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with different volume mode and retain data across pod recreation on the same node [90mtest/e2e/storage/testsuites/multivolume.go:209[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node","total":38,"completed":15,"skipped":905,"failed":0} [36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits[0m [1mshould verify that all csinodes have volume limits[0m [37mtest/e2e/storage/testsuites/volumelimits.go:249[0m ... skipping 16 lines ... test/e2e/framework/framework.go:188 May 13 09:15:47.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "volumelimits-2383" for this suite. 
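Note: the volumeLimits case above only checks that every CSINode advertises an allocatable volume count for the driver under test. A rough client-go equivalent, assuming the same imports as the sketch after the volumeMode test and the driver name test.csi.azure.com from the log; a sketch, not the test suite's implementation.

// allCSINodesHaveLimits returns true only if every CSINode entry for the
// driver publishes a positive attach limit.
func allCSINodesHaveLimits(ctx context.Context, cs kubernetes.Interface, driverName string) (bool, error) {
	nodes, err := cs.StorageV1().CSINodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	for _, n := range nodes.Items {
		for _, d := range n.Spec.Drivers {
			if d.Name != driverName {
				continue
			}
			// Allocatable.Count is the per-node volume limit published by the plugin.
			if d.Allocatable == nil || d.Allocatable.Count == nil || *d.Allocatable.Count <= 0 {
				return false, nil
			}
		}
	}
	return true, nil
}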
[32m•[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits","total":38,"completed":16,"skipped":906,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 May 13 09:15:47.436: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping ... skipping 116 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] volumes [90mtest/e2e/storage/framework/testsuite.go:50[0m should store data [90mtest/e2e/storage/testsuites/volumes.go:161[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":38,"completed":17,"skipped":1000,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m May 13 09:17:21.424: INFO: Running AfterSuite actions on all nodes May 13 09:17:21.424: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 May 13 09:17:21.424: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 ... skipping 15 lines ... May 13 09:17:21.483: INFO: Running AfterSuite actions on node 1 [91m[1mSummarizing 1 Failure:[0m [91m[1m[Fail] [0m[90mExternal Storage [Driver: test.csi.azure.com] [0m[0m[Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] [0m[91m[1m[Measurement] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [0m [37mtest/e2e/storage/testsuites/multivolume.go:129[0m [1m[91mRan 91 of 7227 Specs in 3439.947 seconds[0m [1m[91mFAIL![0m -- [32m[1m90 Passed[0m | [91m[1m1 Failed[0m | [33m[1m0 Pending[0m | [36m[1m7136 Skipped[0m Ginkgo ran 1 suite in 57m23.146653288s Test Suite Failed + print_logs + sed -i s/disk.csi.azure.com/test.csi.azure.com/g deploy/example/storageclass-azuredisk-csi.yaml + bash ./hack/verify-examples.sh linux azurepubliccloud ephemeral test begin to create deployment examples ... storageclass.storage.k8s.io/managed-csi created Applying config "deploy/example/deployment.yaml" ... skipping 80 lines ... 
Platform: linux/amd64 Topology Key: topology.test.csi.azure.com/zone Streaming logs below: I0513 08:19:54.876253 1 azuredisk.go:171] driver userAgent: test.csi.azure.com/v1.19.0-9480cc27b0ee3e0de9a15e6967f197e793523987 gc/go1.18.1 (amd64-linux) e2e-test I0513 08:19:54.876716 1 azure_disk_utils.go:159] reading cloud config from secret kube-system/azure-cloud-provider W0513 08:19:54.899604 1 azure_disk_utils.go:166] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0513 08:19:54.899631 1 azure_disk_utils.go:171] could not read cloud config from secret kube-system/azure-cloud-provider I0513 08:19:54.899641 1 azure_disk_utils.go:181] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0513 08:19:54.899674 1 azure_disk_utils.go:189] read cloud config from file: /etc/kubernetes/azure.json successfully I0513 08:19:54.901365 1 azure_auth.go:245] Using AzurePublicCloud environment I0513 08:19:54.901392 1 azure_auth.go:96] azure: using managed identity extension to retrieve access token I0513 08:19:54.901398 1 azure_auth.go:102] azure: using User Assigned MSI ID to retrieve access token I0513 08:19:54.901455 1 azure_auth.go:113] azure: User Assigned MSI ID is client ID. Resource ID parsing error: %+vparsing failed for 0000414c-5950-4a10-a61f-5d202a75cd00. Invalid resource Id format I0513 08:19:54.901509 1 azure.go:763] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 I0513 08:19:54.901564 1 azure_interfaceclient.go:70] Azure InterfacesClient (read ops) using rate limit config: QPS=6, bucket=20 I0513 08:19:54.901577 1 azure_interfaceclient.go:73] Azure InterfacesClient (write ops) using rate limit config: QPS=100, bucket=1000 I0513 08:19:54.901592 1 azure_vmsizeclient.go:68] Azure VirtualMachineSizesClient (read ops) using rate limit config: QPS=6, bucket=20 I0513 08:19:54.901603 1 azure_vmsizeclient.go:71] Azure VirtualMachineSizesClient (write ops) using rate limit config: QPS=100, bucket=1000 I0513 08:19:54.901627 1 azure_storageaccountclient.go:69] Azure StorageAccountClient (read ops) using rate limit config: QPS=6, bucket=20 ... skipping 136 lines ... I0513 08:20:08.350893 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-603f1bc4-bdda-4cf9-a7ed-8c403f1b36bb to node k8s-agentpool1-42137015-vmss000001. 
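Note: the startup lines above show the credential lookup order: the driver first tries the kube-system/azure-cloud-provider secret, and when that is missing it falls back to the file named by AZURE_CREDENTIAL_FILE, defaulting to /etc/kubernetes/azure.json. A minimal sketch of that fallback, assuming the client-go imports from the earlier sketch plus "os", and assuming the config sits under a cloud-config key in the secret; the real driver then parses the bytes into its Azure config type, which is omitted here.

// loadCloudConfig mirrors the fallback order in the log: secret first, then file.
func loadCloudConfig(ctx context.Context, cs kubernetes.Interface) ([]byte, error) {
	if sec, err := cs.CoreV1().Secrets("kube-system").Get(ctx, "azure-cloud-provider", metav1.GetOptions{}); err == nil {
		if cfg, ok := sec.Data["cloud-config"]; ok { // key name is an assumption
			return cfg, nil
		}
	}
	path := os.Getenv("AZURE_CREDENTIAL_FILE")
	if path == "" {
		path = "/etc/kubernetes/azure.json" // default seen in the log
	}
	return os.ReadFile(path)
}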
I0513 08:20:08.512565 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-603f1bc4-bdda-4cf9-a7ed-8c403f1b36bb to node k8s-agentpool1-42137015-vmss000001 I0513 08:20:08.512619 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-603f1bc4-bdda-4cf9-a7ed-8c403f1b36bb lun 0 to node k8s-agentpool1-42137015-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-603f1bc4-bdda-4cf9-a7ed-8c403f1b36bb:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-603f1bc4-bdda-4cf9-a7ed-8c403f1b36bb false 0})] I0513 08:20:08.512642 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-603f1bc4-bdda-4cf9-a7ed-8c403f1b36bb:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-603f1bc4-bdda-4cf9-a7ed-8c403f1b36bb false 0})]) I0513 08:20:09.307929 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 08:20:09.307965 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000002","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-56626290-8cda-4038-adc8-2930bc6711bc","csi.storage.k8s.io/pvc/name":"test.csi.azure.comrb2lm","csi.storage.k8s.io/pvc/namespace":"provisioning-8916","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-56626290-8cda-4038-adc8-2930bc6711bc"} I0513 08:20:09.308581 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-603f1bc4-bdda-4cf9-a7ed-8c403f1b36bb:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-603f1bc4-bdda-4cf9-a7ed-8c403f1b36bb false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 08:20:09.335172 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-56626290-8cda-4038-adc8-2930bc6711bc to node k8s-agentpool1-42137015-vmss000002. 
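Note: the attach calls above batch the disks queued for one VM into a single VMSS update, keyed by disk URI, with per-disk options printed as &{ReadOnly <diskName> false <lun>}. A simplified illustration of that bookkeeping; the struct below is a stand-in whose field names are inferred from the printed form, not the provider's actual AttachDiskOptions definition.

// attachDiskOptions is a simplified stand-in for the options printed in the
// "attach disk list(map[...])" lines above.
type attachDiskOptions struct {
	CachingMode             string
	DiskName                string
	WriteAcceleratorEnabled bool
	Lun                     int32
}

// One batched attach per VM: disk URI -> options, as in the diskMap above.
var diskMap = map[string]*attachDiskOptions{
	"/subscriptions/<sub>/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-603f1bc4-bdda-4cf9-a7ed-8c403f1b36bb": {
		CachingMode: "ReadOnly",
		DiskName:    "pvc-603f1bc4-bdda-4cf9-a7ed-8c403f1b36bb",
		Lun:         0,
	},
}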
I0513 08:20:09.335216 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-56626290-8cda-4038-adc8-2930bc6711bc to node k8s-agentpool1-42137015-vmss000002 I0513 08:20:09.335257 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-56626290-8cda-4038-adc8-2930bc6711bc lun 0 to node k8s-agentpool1-42137015-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-56626290-8cda-4038-adc8-2930bc6711bc:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-56626290-8cda-4038-adc8-2930bc6711bc false 0})] I0513 08:20:09.335295 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-56626290-8cda-4038-adc8-2930bc6711bc:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-56626290-8cda-4038-adc8-2930bc6711bc false 0})]) I0513 08:20:09.551823 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-56626290-8cda-4038-adc8-2930bc6711bc:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-56626290-8cda-4038-adc8-2930bc6711bc false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 08:20:09.595825 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 08:20:09.595849 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000000","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext3"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-6c55fd70-d1db-4014-a018-947e225d35b7","csi.storage.k8s.io/pvc/name":"test.csi.azure.com8pl68","csi.storage.k8s.io/pvc/namespace":"volume-4983","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-6c55fd70-d1db-4014-a018-947e225d35b7"} I0513 08:20:09.645449 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-6c55fd70-d1db-4014-a018-947e225d35b7 to node k8s-agentpool1-42137015-vmss000000. 
I0513 08:20:09.645504 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-6c55fd70-d1db-4014-a018-947e225d35b7 to node k8s-agentpool1-42137015-vmss000000 I0513 08:20:09.645531 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-6c55fd70-d1db-4014-a018-947e225d35b7 lun 0 to node k8s-agentpool1-42137015-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-6c55fd70-d1db-4014-a018-947e225d35b7:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-6c55fd70-d1db-4014-a018-947e225d35b7 false 0})] I0513 08:20:09.645556 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-6c55fd70-d1db-4014-a018-947e225d35b7:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-6c55fd70-d1db-4014-a018-947e225d35b7 false 0})]) ... skipping 7 lines ... I0513 08:20:09.753180 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-f49521e7-371f-4fff-ae6c-e282f69f5889 to node k8s-agentpool1-42137015-vmss000000 I0513 08:20:09.798969 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 08:20:09.798995 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000001","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-c912beda-4a12-445a-8289-efed36ad2787","csi.storage.k8s.io/pvc/name":"test.csi.azure.com6zp8x","csi.storage.k8s.io/pvc/namespace":"multivolume-4786","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-c912beda-4a12-445a-8289-efed36ad2787"} I0513 08:20:09.885731 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-c912beda-4a12-445a-8289-efed36ad2787 to node k8s-agentpool1-42137015-vmss000001. 
I0513 08:20:09.885788 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000001, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 08:20:09.941955 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-c912beda-4a12-445a-8289-efed36ad2787 to node k8s-agentpool1-42137015-vmss000001 I0513 08:20:09.946189 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-6c55fd70-d1db-4014-a018-947e225d35b7:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-6c55fd70-d1db-4014-a018-947e225d35b7 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 08:20:11.524959 1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-db18adc0-ff17-40b6-9dc7-3e4789d12e13 StorageAccountType:StandardSSD_LRS Size:5 I0513 08:20:11.525016 1 controllerserver.go:258] create azure disk(pvc-db18adc0-ff17-40b6-9dc7-3e4789d12e13) account type(StandardSSD_LRS) rg(kubetest-mfxpbga4) location(westeurope) size(5) tags(map[kubernetes.io-created-for-pv-name:pvc-db18adc0-ff17-40b6-9dc7-3e4789d12e13 kubernetes.io-created-for-pvc-name:pvc-snapshottable-tester-4f2dg-my-volume kubernetes.io-created-for-pvc-namespace:snapshotting-2132]) successfully I0513 08:20:11.525058 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=6.566058158 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-db18adc0-ff17-40b6-9dc7-3e4789d12e13" result_code="succeeded" I0513 08:20:11.525074 1 utils.go:84] GRPC response: {"volume":{"accessible_topology":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"capacity_bytes":5368709120,"content_source":{"Type":null},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-db18adc0-ff17-40b6-9dc7-3e4789d12e13","csi.storage.k8s.io/pvc/name":"pvc-snapshottable-tester-4f2dg-my-volume","csi.storage.k8s.io/pvc/namespace":"snapshotting-2132","requestedsizegib":"5"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-db18adc0-ff17-40b6-9dc7-3e4789d12e13"}} I0513 08:20:13.229521 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 08:20:13.229555 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000001","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-db18adc0-ff17-40b6-9dc7-3e4789d12e13","csi.storage.k8s.io/pvc/name":"pvc-snapshottable-tester-4f2dg-my-volume","csi.storage.k8s.io/pvc/namespace":"snapshotting-2132","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-db18adc0-ff17-40b6-9dc7-3e4789d12e13"} ... skipping 20 lines ... 
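Note: each dynamically provisioned disk above is tagged with the PV and PVC it was created for, which is how a leftover Azure disk can later be traced back to its Kubernetes objects. A stdlib-only sketch of assembling those tags from the CreateVolume parameters seen in the GRPC requests; the parameter and tag keys are the ones that appear in the log, the helper name is illustrative.

// createdForTags builds the kubernetes.io-created-for-* tags visible in the
// "create azure disk ... tags(map[...])" line above from CSI volume parameters.
func createdForTags(params map[string]string) map[string]string {
	return map[string]string{
		"kubernetes.io-created-for-pv-name":       params["csi.storage.k8s.io/pv/name"],
		"kubernetes.io-created-for-pvc-name":      params["csi.storage.k8s.io/pvc/name"],
		"kubernetes.io-created-for-pvc-namespace": params["csi.storage.k8s.io/pvc/namespace"],
	}
}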
I0513 08:20:19.716166 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-56626290-8cda-4038-adc8-2930bc6711bc attached to node k8s-agentpool1-42137015-vmss000002. I0513 08:20:19.716203 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-56626290-8cda-4038-adc8-2930bc6711bc to node k8s-agentpool1-42137015-vmss000002 successfully I0513 08:20:19.716243 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=10.38104995 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-56626290-8cda-4038-adc8-2930bc6711bc" node="k8s-agentpool1-42137015-vmss000002" result_code="succeeded" I0513 08:20:19.716262 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0513 08:20:19.716409 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e4a4d0c8-4bb2-4d6f-bb3b-f0a7ebe2baf4 lun 1 to node k8s-agentpool1-42137015-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-c034309c-f1c2-4edb-ba79-0e5faf360268:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c034309c-f1c2-4edb-ba79-0e5faf360268 false 2}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-e4a4d0c8-4bb2-4d6f-bb3b-f0a7ebe2baf4:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e4a4d0c8-4bb2-4d6f-bb3b-f0a7ebe2baf4 false 1})] I0513 08:20:19.716472 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-c034309c-f1c2-4edb-ba79-0e5faf360268:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c034309c-f1c2-4edb-ba79-0e5faf360268 false 2}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-e4a4d0c8-4bb2-4d6f-bb3b-f0a7ebe2baf4:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e4a4d0c8-4bb2-4d6f-bb3b-f0a7ebe2baf4 false 1})]) I0513 08:20:19.719327 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-c912beda-4a12-445a-8289-efed36ad2787:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c912beda-4a12-445a-8289-efed36ad2787 false 1}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-db18adc0-ff17-40b6-9dc7-3e4789d12e13:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-db18adc0-ff17-40b6-9dc7-3e4789d12e13 false 2})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 08:20:19.931101 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - attach disk 
list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-c034309c-f1c2-4edb-ba79-0e5faf360268:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c034309c-f1c2-4edb-ba79-0e5faf360268 false 2}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-e4a4d0c8-4bb2-4d6f-bb3b-f0a7ebe2baf4:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e4a4d0c8-4bb2-4d6f-bb3b-f0a7ebe2baf4 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 08:20:20.242813 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-6c55fd70-d1db-4014-a018-947e225d35b7 attached to node k8s-agentpool1-42137015-vmss000000. I0513 08:20:20.242848 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-6c55fd70-d1db-4014-a018-947e225d35b7 to node k8s-agentpool1-42137015-vmss000000 successfully I0513 08:20:20.242877 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=10.59742043 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-6c55fd70-d1db-4014-a018-947e225d35b7" node="k8s-agentpool1-42137015-vmss000000" result_code="succeeded" I0513 08:20:20.242889 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0513 08:20:20.242933 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000000, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 08:20:20.359379 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-f49521e7-371f-4fff-ae6c-e282f69f5889 lun 1 to node k8s-agentpool1-42137015-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-f49521e7-371f-4fff-ae6c-e282f69f5889:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f49521e7-371f-4fff-ae6c-e282f69f5889 false 1})] I0513 08:20:20.359437 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-f49521e7-371f-4fff-ae6c-e282f69f5889:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f49521e7-371f-4fff-ae6c-e282f69f5889 false 1})]) I0513 08:20:20.553090 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-f49521e7-371f-4fff-ae6c-e282f69f5889:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f49521e7-371f-4fff-ae6c-e282f69f5889 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 08:20:28.152097 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 08:20:28.152121 1 utils.go:78] GRPC request: 
{"node_id":"k8s-agentpool1-42137015-vmss000001","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-603f1bc4-bdda-4cf9-a7ed-8c403f1b36bb"} I0513 08:20:28.152317 1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-603f1bc4-bdda-4cf9-a7ed-8c403f1b36bb from node k8s-agentpool1-42137015-vmss000001 I0513 08:20:28.678473 1 utils.go:77] GRPC call: /csi.v1.Identity/GetPluginInfo I0513 08:20:28.678504 1 utils.go:78] GRPC request: {} I0513 08:20:28.678559 1 utils.go:84] GRPC response: {"name":"test.csi.azure.com","vendor_version":"v1.19.0-9480cc27b0ee3e0de9a15e6967f197e793523987"} ... skipping 98 lines ... I0513 08:20:52.470438 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-56626290-8cda-4038-adc8-2930bc6711bc from node k8s-agentpool1-42137015-vmss000002 successfully I0513 08:20:52.470481 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.682959318 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-56626290-8cda-4038-adc8-2930bc6711bc" node="k8s-agentpool1-42137015-vmss000002" result_code="succeeded" I0513 08:20:52.470499 1 utils.go:84] GRPC response: {} I0513 08:20:52.470578 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000002, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 08:20:52.577034 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-cc6f7400-ad49-48d8-9750-76d210f855c6 lun 0 to node k8s-agentpool1-42137015-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-cc6f7400-ad49-48d8-9750-76d210f855c6:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-cc6f7400-ad49-48d8-9750-76d210f855c6 false 0})] I0513 08:20:52.577092 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-cc6f7400-ad49-48d8-9750-76d210f855c6:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-cc6f7400-ad49-48d8-9750-76d210f855c6 false 0})]) I0513 08:20:52.809133 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-cc6f7400-ad49-48d8-9750-76d210f855c6:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-cc6f7400-ad49-48d8-9750-76d210f855c6 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 08:20:54.326540 1 controllerserver.go:860] create snapshot(snapshot-2481cd6d-9bc3-4c3d-b36a-03f6e8525c7a) under rg(kubetest-mfxpbga4) successfully I0513 08:20:54.364817 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=2.565244642 
request="azuredisk_csi_driver_controller_create_snapshot" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" source_resource_id="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-c912beda-4a12-445a-8289-efed36ad2787" snapshot_name="snapshot-2481cd6d-9bc3-4c3d-b36a-03f6e8525c7a" result_code="succeeded" I0513 08:20:54.364864 1 utils.go:84] GRPC response: {"snapshot":{"creation_time":{"nanos":17634500,"seconds":1652430052},"ready_to_use":true,"size_bytes":5368709120,"snapshot_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/snapshots/snapshot-2481cd6d-9bc3-4c3d-b36a-03f6e8525c7a","source_volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-c912beda-4a12-445a-8289-efed36ad2787"}} I0513 08:20:55.370757 1 azure_controller_vmss.go:210] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-603f1bc4-bdda-4cf9-a7ed-8c403f1b36bb:pvc-603f1bc4-bdda-4cf9-a7ed-8c403f1b36bb]) returned with <nil> I0513 08:20:55.370814 1 azure_controller_common.go:365] azureDisk - detach disk(pvc-603f1bc4-bdda-4cf9-a7ed-8c403f1b36bb, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-603f1bc4-bdda-4cf9-a7ed-8c403f1b36bb) succeeded I0513 08:20:55.370837 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-603f1bc4-bdda-4cf9-a7ed-8c403f1b36bb from node k8s-agentpool1-42137015-vmss000001 successfully ... skipping 15 lines ... 
I0513 08:20:56.387180 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-a699729d-bcf5-4770-89fa-830b3c743dfa lun 0 to node k8s-agentpool1-42137015-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-a699729d-bcf5-4770-89fa-830b3c743dfa:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a699729d-bcf5-4770-89fa-830b3c743dfa false 0})] I0513 08:20:56.387234 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-a699729d-bcf5-4770-89fa-830b3c743dfa:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a699729d-bcf5-4770-89fa-830b3c743dfa false 0})]) I0513 08:20:56.425032 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume I0513 08:20:56.425056 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":5368709120},"name":"pvc-2bce72bf-e50a-431b-b85c-e48301f8f7bf","parameters":{"csi.storage.k8s.io/pv/name":"pvc-2bce72bf-e50a-431b-b85c-e48301f8f7bf","csi.storage.k8s.io/pvc/name":"test.csi.azure.com6zp8x-restored","csi.storage.k8s.io/pvc/namespace":"multivolume-4786"},"volume_capabilities":[{"AccessType":{"Block":{}},"access_mode":{"mode":7}}],"volume_content_source":{"Type":{"Snapshot":{"snapshot_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/snapshots/snapshot-2481cd6d-9bc3-4c3d-b36a-03f6e8525c7a"}}}} I0513 08:20:56.425227 1 controllerserver.go:174] begin to create azure disk(pvc-2bce72bf-e50a-431b-b85c-e48301f8f7bf) account type(StandardSSD_LRS) rg(kubetest-mfxpbga4) location(westeurope) size(5) diskZone() maxShares(0) I0513 08:20:56.425251 1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-2bce72bf-e50a-431b-b85c-e48301f8f7bf StorageAccountType:StandardSSD_LRS Size:5 I0513 08:20:56.593175 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-a699729d-bcf5-4770-89fa-830b3c743dfa:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a699729d-bcf5-4770-89fa-830b3c743dfa false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 08:20:58.886414 1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-2bce72bf-e50a-431b-b85c-e48301f8f7bf StorageAccountType:StandardSSD_LRS Size:5 I0513 08:20:58.886471 1 controllerserver.go:258] create azure disk(pvc-2bce72bf-e50a-431b-b85c-e48301f8f7bf) account type(StandardSSD_LRS) rg(kubetest-mfxpbga4) location(westeurope) size(5) tags(map[kubernetes.io-created-for-pv-name:pvc-2bce72bf-e50a-431b-b85c-e48301f8f7bf kubernetes.io-created-for-pvc-name:test.csi.azure.com6zp8x-restored kubernetes.io-created-for-pvc-namespace:multivolume-4786]) successfully I0513 08:20:58.886521 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=2.461245613 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-mfxpbga4" 
subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-2bce72bf-e50a-431b-b85c-e48301f8f7bf" result_code="succeeded" I0513 08:20:58.886541 1 utils.go:84] GRPC response: {"volume":{"accessible_topology":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"capacity_bytes":5368709120,"content_source":{"Type":{"Snapshot":{"snapshot_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/snapshots/snapshot-2481cd6d-9bc3-4c3d-b36a-03f6e8525c7a"}}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-2bce72bf-e50a-431b-b85c-e48301f8f7bf","csi.storage.k8s.io/pvc/name":"test.csi.azure.com6zp8x-restored","csi.storage.k8s.io/pvc/namespace":"multivolume-4786","requestedsizegib":"5"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-2bce72bf-e50a-431b-b85c-e48301f8f7bf"}} I0513 08:20:59.057913 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 08:20:59.057941 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000001","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-c912beda-4a12-445a-8289-efed36ad2787"} ... skipping 260 lines ... I0513 08:21:46.478685 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-ec7d3c3e-7b75-4420-b264-9bdb57b59ba7 lun 10 to node k8s-agentpool1-42137015-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-048ed4a9-c6f9-490b-b919-45ce3581fc23:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-048ed4a9-c6f9-490b-b919-45ce3581fc23 false 3}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-063325ae-efda-450d-9f0e-d8edd9fabe2f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-063325ae-efda-450d-9f0e-d8edd9fabe2f false 12}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-0b2424ad-4bf8-4a02-872c-b1add67fbdd9:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0b2424ad-4bf8-4a02-872c-b1add67fbdd9 false 11}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-1442eeaa-e6c5-43df-91f9-992e2dcfdd43:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-1442eeaa-e6c5-43df-91f9-992e2dcfdd43 false 14}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-14e61adb-ed9c-40be-acdf-2ceb9fc1242b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-14e61adb-ed9c-40be-acdf-2ceb9fc1242b false 6}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-313c7f37-d88d-4f6d-81ce-3e00a6b334fc:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-313c7f37-d88d-4f6d-81ce-3e00a6b334fc false 4}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-4d2ec6ab-1614-4434-8641-06903fbf947f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4d2ec6ab-1614-4434-8641-06903fbf947f 
false 8}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-62e5415d-7fa6-4bf0-80c5-b59030d71be6:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-62e5415d-7fa6-4bf0-80c5-b59030d71be6 false 1}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-65bb2528-a13b-40cd-85d6-9b18fbfd4bbf:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-65bb2528-a13b-40cd-85d6-9b18fbfd4bbf false 9}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-7e4d74c1-1542-462a-a8d6-ac19c9bfaefd:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-7e4d74c1-1542-462a-a8d6-ac19c9bfaefd false 5}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-88599b36-b851-4df8-9db8-357306b762f3:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-88599b36-b851-4df8-9db8-357306b762f3 false 13}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-92f9b4cf-4eb4-46d8-b9a5-edbd90fbfa4d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-92f9b4cf-4eb4-46d8-b9a5-edbd90fbfa4d false 7}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-c4d21dbd-1bf5-44c1-9b3f-4b61ca5a8e8b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c4d21dbd-1bf5-44c1-9b3f-4b61ca5a8e8b false 0}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-ec7d3c3e-7b75-4420-b264-9bdb57b59ba7:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ec7d3c3e-7b75-4420-b264-9bdb57b59ba7 false 10}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-fec67df2-66a6-44ab-a866-d558dd14c813:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-fec67df2-66a6-44ab-a866-d558dd14c813 false 2})] I0513 08:21:46.478772 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-048ed4a9-c6f9-490b-b919-45ce3581fc23:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-048ed4a9-c6f9-490b-b919-45ce3581fc23 false 3}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-063325ae-efda-450d-9f0e-d8edd9fabe2f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-063325ae-efda-450d-9f0e-d8edd9fabe2f false 12}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-0b2424ad-4bf8-4a02-872c-b1add67fbdd9:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0b2424ad-4bf8-4a02-872c-b1add67fbdd9 false 11}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-1442eeaa-e6c5-43df-91f9-992e2dcfdd43:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-1442eeaa-e6c5-43df-91f9-992e2dcfdd43 false 14}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-14e61adb-ed9c-40be-acdf-2ceb9fc1242b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-14e61adb-ed9c-40be-acdf-2ceb9fc1242b false 6}) 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-313c7f37-d88d-4f6d-81ce-3e00a6b334fc:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-313c7f37-d88d-4f6d-81ce-3e00a6b334fc false 4}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-4d2ec6ab-1614-4434-8641-06903fbf947f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4d2ec6ab-1614-4434-8641-06903fbf947f false 8}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-62e5415d-7fa6-4bf0-80c5-b59030d71be6:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-62e5415d-7fa6-4bf0-80c5-b59030d71be6 false 1}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-65bb2528-a13b-40cd-85d6-9b18fbfd4bbf:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-65bb2528-a13b-40cd-85d6-9b18fbfd4bbf false 9}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-7e4d74c1-1542-462a-a8d6-ac19c9bfaefd:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-7e4d74c1-1542-462a-a8d6-ac19c9bfaefd false 5}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-88599b36-b851-4df8-9db8-357306b762f3:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-88599b36-b851-4df8-9db8-357306b762f3 false 13}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-92f9b4cf-4eb4-46d8-b9a5-edbd90fbfa4d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-92f9b4cf-4eb4-46d8-b9a5-edbd90fbfa4d false 7}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-c4d21dbd-1bf5-44c1-9b3f-4b61ca5a8e8b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c4d21dbd-1bf5-44c1-9b3f-4b61ca5a8e8b false 0}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-ec7d3c3e-7b75-4420-b264-9bdb57b59ba7:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ec7d3c3e-7b75-4420-b264-9bdb57b59ba7 false 10}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-fec67df2-66a6-44ab-a866-d558dd14c813:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-fec67df2-66a6-44ab-a866-d558dd14c813 false 2})]) I0513 08:21:46.642910 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 08:21:46.642936 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000000","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-c912beda-4a12-445a-8289-efed36ad2787","csi.storage.k8s.io/pvc/name":"test.csi.azure.com6zp8x","csi.storage.k8s.io/pvc/namespace":"multivolume-4786","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-c912beda-4a12-445a-8289-efed36ad2787"} I0513 08:21:46.678162 1 controllerserver.go:355] GetDiskLun returned: <nil>. 
Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-c912beda-4a12-445a-8289-efed36ad2787 to node k8s-agentpool1-42137015-vmss000000. I0513 08:21:46.678211 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-c912beda-4a12-445a-8289-efed36ad2787 to node k8s-agentpool1-42137015-vmss000000 I0513 08:21:47.001333 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-048ed4a9-c6f9-490b-b919-45ce3581fc23:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-048ed4a9-c6f9-490b-b919-45ce3581fc23 false 3}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-063325ae-efda-450d-9f0e-d8edd9fabe2f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-063325ae-efda-450d-9f0e-d8edd9fabe2f false 12}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-0b2424ad-4bf8-4a02-872c-b1add67fbdd9:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0b2424ad-4bf8-4a02-872c-b1add67fbdd9 false 11}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-1442eeaa-e6c5-43df-91f9-992e2dcfdd43:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-1442eeaa-e6c5-43df-91f9-992e2dcfdd43 false 14}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-14e61adb-ed9c-40be-acdf-2ceb9fc1242b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-14e61adb-ed9c-40be-acdf-2ceb9fc1242b false 6}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-313c7f37-d88d-4f6d-81ce-3e00a6b334fc:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-313c7f37-d88d-4f6d-81ce-3e00a6b334fc false 4}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-4d2ec6ab-1614-4434-8641-06903fbf947f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4d2ec6ab-1614-4434-8641-06903fbf947f false 8}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-62e5415d-7fa6-4bf0-80c5-b59030d71be6:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-62e5415d-7fa6-4bf0-80c5-b59030d71be6 false 1}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-65bb2528-a13b-40cd-85d6-9b18fbfd4bbf:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-65bb2528-a13b-40cd-85d6-9b18fbfd4bbf false 9}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-7e4d74c1-1542-462a-a8d6-ac19c9bfaefd:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-7e4d74c1-1542-462a-a8d6-ac19c9bfaefd false 5}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-88599b36-b851-4df8-9db8-357306b762f3:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-88599b36-b851-4df8-9db8-357306b762f3 false 13}) 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-92f9b4cf-4eb4-46d8-b9a5-edbd90fbfa4d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-92f9b4cf-4eb4-46d8-b9a5-edbd90fbfa4d false 7}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-c4d21dbd-1bf5-44c1-9b3f-4b61ca5a8e8b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c4d21dbd-1bf5-44c1-9b3f-4b61ca5a8e8b false 0}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-ec7d3c3e-7b75-4420-b264-9bdb57b59ba7:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ec7d3c3e-7b75-4420-b264-9bdb57b59ba7 false 10}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-fec67df2-66a6-44ab-a866-d558dd14c813:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-fec67df2-66a6-44ab-a866-d558dd14c813 false 2})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 08:21:47.413536 1 azure_controller_vmss.go:210] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-6c55fd70-d1db-4014-a018-947e225d35b7:pvc-6c55fd70-d1db-4014-a018-947e225d35b7 /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-f49521e7-371f-4fff-ae6c-e282f69f5889:pvc-f49521e7-371f-4fff-ae6c-e282f69f5889]) returned with <nil> I0513 08:21:47.413593 1 azure_controller_common.go:365] azureDisk - detach disk(pvc-f49521e7-371f-4fff-ae6c-e282f69f5889, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-f49521e7-371f-4fff-ae6c-e282f69f5889) succeeded I0513 08:21:47.413616 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-f49521e7-371f-4fff-ae6c-e282f69f5889 from node k8s-agentpool1-42137015-vmss000000 successfully I0513 08:21:47.413647 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=57.617231703 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-f49521e7-371f-4fff-ae6c-e282f69f5889" node="k8s-agentpool1-42137015-vmss000000" result_code="succeeded" I0513 08:21:47.413658 1 utils.go:84] GRPC response: {} I0513 08:21:47.413749 1 azure_controller_common.go:341] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-6c55fd70-d1db-4014-a018-947e225d35b7 from node k8s-agentpool1-42137015-vmss000000, diskMap: map[] I0513 08:21:47.413791 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000000, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 08:21:47.528467 1 azure_controller_common.go:365] azureDisk - detach disk(pvc-6c55fd70-d1db-4014-a018-947e225d35b7, 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-6c55fd70-d1db-4014-a018-947e225d35b7) succeeded I0513 08:21:47.528516 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-6c55fd70-d1db-4014-a018-947e225d35b7 from node k8s-agentpool1-42137015-vmss000000 successfully I0513 08:21:47.528554 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=51.22799058 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-6c55fd70-d1db-4014-a018-947e225d35b7" node="k8s-agentpool1-42137015-vmss000000" result_code="succeeded" I0513 08:21:47.528567 1 utils.go:84] GRPC response: {} I0513 08:21:47.528655 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-c912beda-4a12-445a-8289-efed36ad2787 lun 1 to node k8s-agentpool1-42137015-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-c912beda-4a12-445a-8289-efed36ad2787:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c912beda-4a12-445a-8289-efed36ad2787 false 1})] I0513 08:21:47.528695 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-c912beda-4a12-445a-8289-efed36ad2787:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c912beda-4a12-445a-8289-efed36ad2787 false 1})]) I0513 08:21:47.723616 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-c912beda-4a12-445a-8289-efed36ad2787:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c912beda-4a12-445a-8289-efed36ad2787 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 08:21:48.740773 1 azure_controller_vmss.go:210] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-c034309c-f1c2-4edb-ba79-0e5faf360268:pvc-c034309c-f1c2-4edb-ba79-0e5faf360268 /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-e4a4d0c8-4bb2-4d6f-bb3b-f0a7ebe2baf4:pvc-e4a4d0c8-4bb2-4d6f-bb3b-f0a7ebe2baf4]) returned with <nil> I0513 08:21:48.740826 1 azure_controller_common.go:365] azureDisk - detach disk(pvc-e4a4d0c8-4bb2-4d6f-bb3b-f0a7ebe2baf4, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e4a4d0c8-4bb2-4d6f-bb3b-f0a7ebe2baf4) succeeded I0513 08:21:48.740874 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e4a4d0c8-4bb2-4d6f-bb3b-f0a7ebe2baf4 from node 
k8s-agentpool1-42137015-vmss000002 successfully I0513 08:21:48.740906 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=31.411988418 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e4a4d0c8-4bb2-4d6f-bb3b-f0a7ebe2baf4" node="k8s-agentpool1-42137015-vmss000002" result_code="succeeded" I0513 08:21:48.740923 1 utils.go:84] GRPC response: {} I0513 08:21:48.740971 1 azure_controller_common.go:341] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-c034309c-f1c2-4edb-ba79-0e5faf360268 from node k8s-agentpool1-42137015-vmss000002, diskMap: map[] ... skipping 9 lines ... I0513 08:22:07.158361 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-ec7d3c3e-7b75-4420-b264-9bdb57b59ba7 attached to node k8s-agentpool1-42137015-vmss000001. I0513 08:22:07.158398 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-ec7d3c3e-7b75-4420-b264-9bdb57b59ba7 to node k8s-agentpool1-42137015-vmss000001 successfully I0513 08:22:07.158438 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=47.76358482 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-ec7d3c3e-7b75-4420-b264-9bdb57b59ba7" node="k8s-agentpool1-42137015-vmss000001" result_code="succeeded" I0513 08:22:07.158452 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"10"}} I0513 08:22:07.158487 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-88599b36-b851-4df8-9db8-357306b762f3 lun 13 to node k8s-agentpool1-42137015-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-f49521e7-371f-4fff-ae6c-e282f69f5889:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f49521e7-371f-4fff-ae6c-e282f69f5889 false 15})] I0513 08:22:07.158545 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-f49521e7-371f-4fff-ae6c-e282f69f5889:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f49521e7-371f-4fff-ae6c-e282f69f5889 false 15})]) I0513 08:22:07.448940 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-f49521e7-371f-4fff-ae6c-e282f69f5889:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f49521e7-371f-4fff-ae6c-e282f69f5889 false 15})], %!s(*retry.Error=<nil>)) returned with 
%!v(MISSING) I0513 08:22:07.871155 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-c912beda-4a12-445a-8289-efed36ad2787 attached to node k8s-agentpool1-42137015-vmss000000. I0513 08:22:07.871195 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-c912beda-4a12-445a-8289-efed36ad2787 to node k8s-agentpool1-42137015-vmss000000 successfully I0513 08:22:07.871225 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=21.193054296 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-c912beda-4a12-445a-8289-efed36ad2787" node="k8s-agentpool1-42137015-vmss000000" result_code="succeeded" I0513 08:22:07.871241 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} I0513 08:22:14.402724 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 08:22:14.402749 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-db18adc0-ff17-40b6-9dc7-3e4789d12e13"} ... skipping 26 lines ... I0513 08:22:23.043214 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 08:22:23.043244 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000000","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-2bce72bf-e50a-431b-b85c-e48301f8f7bf","csi.storage.k8s.io/pvc/name":"test.csi.azure.com6zp8x-restored","csi.storage.k8s.io/pvc/namespace":"multivolume-4786","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-2bce72bf-e50a-431b-b85c-e48301f8f7bf"} I0513 08:22:23.084898 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-2bce72bf-e50a-431b-b85c-e48301f8f7bf to node k8s-agentpool1-42137015-vmss000000. 
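The ControllerPublishVolume entries above all follow one pattern: GetDiskLun returns <nil> (no existing LUN for that disk on the node), the driver picks a free LUN (the luns 1, 2, 10, 13, ... seen above), issues a VMSS update, and finally answers the gRPC call with the LUN in publish_context. The Go program below is a minimal, self-contained sketch of that pattern only; attachedLuns, nextFreeLun and publishVolume are invented names for illustration, not the driver's real helpers, and the map stands in for the actual VMSS model update.

package main

import (
	"errors"
	"fmt"
)

// attachedLuns maps diskURI -> LUN on one node (illustrative state only).
var attachedLuns = map[string]int{}

// nextFreeLun hands out the lowest unused LUN, the way the log shows luns being assigned in order.
func nextFreeLun(maxDataDisks int) (int, error) {
	used := map[int]bool{}
	for _, l := range attachedLuns {
		used[l] = true
	}
	for l := 0; l < maxDataDisks; l++ {
		if !used[l] {
			return l, nil
		}
	}
	return -1, errors.New("no free LUN: maximum number of data disks reached")
}

// publishVolume is a toy ControllerPublishVolume: no LUN was found, so pick one and "attach".
func publishVolume(diskURI string) (map[string]string, error) {
	lun, err := nextFreeLun(16) // 16 is the per-VM-size limit reported later in this log
	if err != nil {
		return nil, err
	}
	attachedLuns[diskURI] = lun // stands in for the real VMSS update call
	return map[string]string{"LUN": fmt.Sprint(lun)}, nil
}

func main() {
	pc, err := publishVolume("/subscriptions/.../disks/pvc-example") // hypothetical disk URI
	if err != nil {
		panic(err)
	}
	// Mirrors the GRPC response lines such as {"publish_context":{"LUN":"1"}}.
	fmt.Println("publish_context:", pc)
}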
I0513 08:22:23.084964 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-2bce72bf-e50a-431b-b85c-e48301f8f7bf to node k8s-agentpool1-42137015-vmss000000 I0513 08:22:23.084996 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-2bce72bf-e50a-431b-b85c-e48301f8f7bf lun 2 to node k8s-agentpool1-42137015-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-2bce72bf-e50a-431b-b85c-e48301f8f7bf:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2bce72bf-e50a-431b-b85c-e48301f8f7bf false 2})] I0513 08:22:23.085050 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-2bce72bf-e50a-431b-b85c-e48301f8f7bf:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2bce72bf-e50a-431b-b85c-e48301f8f7bf false 2})]) I0513 08:22:23.325409 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-2bce72bf-e50a-431b-b85c-e48301f8f7bf:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2bce72bf-e50a-431b-b85c-e48301f8f7bf false 2})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 08:22:25.322933 1 azure_managedDiskController.go:303] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e4a4d0c8-4bb2-4d6f-bb3b-f0a7ebe2baf4 I0513 08:22:25.322979 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e4a4d0c8-4bb2-4d6f-bb3b-f0a7ebe2baf4) returned with <nil> I0513 08:22:25.323019 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=5.286380335 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e4a4d0c8-4bb2-4d6f-bb3b-f0a7ebe2baf4" result_code="succeeded" I0513 08:22:25.323039 1 utils.go:84] GRPC response: {} I0513 08:22:28.903873 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 08:22:28.903901 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-c034309c-f1c2-4edb-ba79-0e5faf360268"} ... skipping 138 lines ... 
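Each operation above finishes with an "Observed Request Latency" record carrying latency_seconds, the request name, and a result_code. The snippet below only reproduces the shape of that record by timing a callback; observeLatency is a made-up helper, and the real driver also feeds these values into its metrics pipeline, which this sketch does not attempt.

package main

import (
	"fmt"
	"time"
)

// observeLatency times an operation and emits elapsed seconds plus labels,
// echoing the "Observed Request Latency" lines in the log above.
func observeLatency(request string, op func() error) {
	start := time.Now()
	err := op()
	result := "succeeded"
	if err != nil {
		result = "failed"
	}
	fmt.Printf("\"Observed Request Latency\" latency_seconds=%g request=%q result_code=%q\n",
		time.Since(start).Seconds(), request, result)
}

func main() {
	observeLatency("azuredisk_csi_driver_controller_delete_volume", func() error {
		time.Sleep(50 * time.Millisecond) // stand-in for the real ARM call
		return nil
	})
}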
I0513 08:22:39.245053 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-cc6f7400-ad49-48d8-9750-76d210f855c6 from node k8s-agentpool1-42137015-vmss000002 successfully I0513 08:22:39.245079 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=20.431103747 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-cc6f7400-ad49-48d8-9750-76d210f855c6" node="k8s-agentpool1-42137015-vmss000002" result_code="succeeded" I0513 08:22:39.245092 1 utils.go:84] GRPC response: {} I0513 08:22:39.245169 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000002, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 08:22:39.332340 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-3bda934a-3eb9-48ce-9164-9167700d1f0f lun 0 to node k8s-agentpool1-42137015-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-3bda934a-3eb9-48ce-9164-9167700d1f0f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-3bda934a-3eb9-48ce-9164-9167700d1f0f false 0})] I0513 08:22:39.332401 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-3bda934a-3eb9-48ce-9164-9167700d1f0f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-3bda934a-3eb9-48ce-9164-9167700d1f0f false 0})]) I0513 08:22:39.501174 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-3bda934a-3eb9-48ce-9164-9167700d1f0f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-3bda934a-3eb9-48ce-9164-9167700d1f0f false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 08:22:40.772634 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume I0513 08:22:40.772659 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":5368709120},"name":"pvc-b9710dfd-b210-44ed-a366-90b5727c5867","parameters":{"csi.storage.k8s.io/pv/name":"pvc-b9710dfd-b210-44ed-a366-90b5727c5867","csi.storage.k8s.io/pvc/name":"test.csi.azure.comdsnlj","csi.storage.k8s.io/pvc/namespace":"provisioning-5900"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]} I0513 08:22:40.772831 1 controllerserver.go:174] begin to create azure disk(pvc-b9710dfd-b210-44ed-a366-90b5727c5867) account type(StandardSSD_LRS) rg(kubetest-mfxpbga4) location(westeurope) size(5) diskZone() maxShares(0) I0513 08:22:40.772873 1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-b9710dfd-b210-44ed-a366-90b5727c5867 StorageAccountType:StandardSSD_LRS Size:5 
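The CreateVolume request above asks for required_bytes 5368709120 and the driver logs size(5), i.e. the byte count expressed in whole GiB. A small sketch of that conversion, assuming a round-up-to-GiB rule (roundUpGiB is an illustrative name, not necessarily the driver's exact helper):

package main

import "fmt"

const giB = 1 << 30 // 1 GiB in bytes

// roundUpGiB converts a CSI capacity_range.required_bytes value into whole GiB,
// rounding up, which is how a 5368709120-byte request becomes the size(5) seen above.
func roundUpGiB(requiredBytes int64) int64 {
	return (requiredBytes + giB - 1) / giB
}

func main() {
	fmt.Println(roundUpGiB(5368709120)) // 5
	fmt.Println(roundUpGiB(5368709121)) // 6: any partial GiB rounds up
}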
I0513 08:22:41.456351 1 controllerserver.go:904] delete snapshot(snapshot-ab986354-0962-4646-a61f-b66e57446c04) under rg(kubetest-mfxpbga4) successfully I0513 08:22:41.456396 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=5.248949175 request="azuredisk_csi_driver_controller_delete_snapshot" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" snapshot_id="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/snapshots/snapshot-ab986354-0962-4646-a61f-b66e57446c04" result_code="succeeded" ... skipping 73 lines ... I0513 08:23:09.922425 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000002","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-3bda934a-3eb9-48ce-9164-9167700d1f0f","csi.storage.k8s.io/pvc/name":"pvc-2l4kf","csi.storage.k8s.io/pvc/namespace":"provisioning-2030","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-3bda934a-3eb9-48ce-9164-9167700d1f0f"} I0513 08:23:09.945984 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-3bda934a-3eb9-48ce-9164-9167700d1f0f to node k8s-agentpool1-42137015-vmss000002. I0513 08:23:09.946031 1 azure_controller_common.go:453] azureDisk - find disk: lun 0 name pvc-3bda934a-3eb9-48ce-9164-9167700d1f0f uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-3bda934a-3eb9-48ce-9164-9167700d1f0f I0513 08:23:09.946041 1 controllerserver.go:375] Attach operation is successful. volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-3bda934a-3eb9-48ce-9164-9167700d1f0f is already attached to node k8s-agentpool1-42137015-vmss000002 at lun 0. 
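Further down, the attach path short-circuits: "azureDisk - find disk: lun 0 ..." shows the disk already present in the VM's data-disk list, so the call returns "Attach operation is successful. volume ... is already attached ... at lun 0" and the publish completes in tens of microseconds instead of the tens of seconds a real VMSS update takes. A toy version of that lookup, with dataDisk and findDiskLun as invented stand-ins for the real types:

package main

import (
	"fmt"
	"strings"
)

// dataDisk is a cut-down stand-in for a VMSS VM data-disk entry.
type dataDisk struct {
	URI string
	Lun int
}

// findDiskLun mirrors the "azureDisk - find disk: lun 0 name ..." step: before
// attaching, look through the VM's existing data disks for the same URI.
func findDiskLun(disks []dataDisk, diskURI string) (int, bool) {
	for _, d := range disks {
		if strings.EqualFold(d.URI, diskURI) {
			return d.Lun, true
		}
	}
	return -1, false
}

func main() {
	vmDisks := []dataDisk{{URI: "/subscriptions/.../disks/pvc-3bda934a", Lun: 0}} // hypothetical URI
	if lun, ok := findDiskLun(vmDisks, "/subscriptions/.../disks/pvc-3bda934a"); ok {
		fmt.Printf("already attached at lun %d, skip the VMSS update\n", lun)
	}
}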
I0513 08:23:09.946079 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=8.0101e-05 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-3bda934a-3eb9-48ce-9164-9167700d1f0f" node="k8s-agentpool1-42137015-vmss000002" result_code="succeeded" I0513 08:23:09.946095 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0513 08:23:10.102195 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-964ea3c9-6ae9-49f2-b4c4-ff4d841ef55a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-964ea3c9-6ae9-49f2-b4c4-ff4d841ef55a false 2}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-b9710dfd-b210-44ed-a366-90b5727c5867:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-b9710dfd-b210-44ed-a366-90b5727c5867 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 08:23:12.235477 1 azure_controller_vmss.go:210] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-a699729d-bcf5-4770-89fa-830b3c743dfa:pvc-a699729d-bcf5-4770-89fa-830b3c743dfa]) returned with <nil> I0513 08:23:12.235534 1 azure_controller_common.go:365] azureDisk - detach disk(pvc-a699729d-bcf5-4770-89fa-830b3c743dfa, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-a699729d-bcf5-4770-89fa-830b3c743dfa) succeeded I0513 08:23:12.235568 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-a699729d-bcf5-4770-89fa-830b3c743dfa from node k8s-agentpool1-42137015-vmss000000 successfully I0513 08:23:12.235608 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=30.384495355 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-a699729d-bcf5-4770-89fa-830b3c743dfa" node="k8s-agentpool1-42137015-vmss000000" result_code="succeeded" I0513 08:23:12.235635 1 utils.go:84] GRPC response: {} I0513 08:23:12.235705 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000000, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 08:23:12.245173 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 08:23:12.245196 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000000","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-a699729d-bcf5-4770-89fa-830b3c743dfa"} I0513 08:23:12.245296 1 controllerserver.go:444] Trying to detach volume 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-a699729d-bcf5-4770-89fa-830b3c743dfa from node k8s-agentpool1-42137015-vmss000000 I0513 08:23:12.320986 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-38d28a4e-3986-4b80-89ef-d1c4af3e10fc lun 0 to node k8s-agentpool1-42137015-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-38d28a4e-3986-4b80-89ef-d1c4af3e10fc:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-38d28a4e-3986-4b80-89ef-d1c4af3e10fc false 0})] I0513 08:23:12.321049 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-38d28a4e-3986-4b80-89ef-d1c4af3e10fc:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-38d28a4e-3986-4b80-89ef-d1c4af3e10fc false 0})]) I0513 08:23:12.514260 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-38d28a4e-3986-4b80-89ef-d1c4af3e10fc:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-38d28a4e-3986-4b80-89ef-d1c4af3e10fc false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 08:23:13.631044 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 08:23:13.631075 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000001","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-20052deb-f67f-4045-8945-1162fedb5032","csi.storage.k8s.io/pvc/name":"volume-limits-bvgpd-my-volume","csi.storage.k8s.io/pvc/namespace":"volumelimits-736","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032"} I0513 08:23:13.661466 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032 to node k8s-agentpool1-42137015-vmss000001. 
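Several entries read "Couldn't find VMSS VM with nodeName ..., refreshing the cache(...)": the driver keeps a cached view of the scale set and refreshes it when a lookup misses. The sketch below is only a minimal refresh-on-miss cache under that assumption; vmCache and its fields are invented names, and the real cache is more elaborate (TTL handling, locking, per-scale-set entries), none of which is modeled here.

package main

import "fmt"

// vmCache is a toy node-name -> instance-ID cache with refresh-on-miss.
type vmCache struct {
	entries map[string]string
	refresh func() map[string]string // stands in for a full VMSS list call
}

func (c *vmCache) get(nodeName string) (string, bool) {
	if id, ok := c.entries[nodeName]; ok {
		return id, true
	}
	fmt.Printf("Couldn't find VMSS VM with nodeName %s, refreshing the cache\n", nodeName)
	c.entries = c.refresh()
	id, ok := c.entries[nodeName]
	return id, ok
}

func main() {
	c := &vmCache{
		entries: map[string]string{},
		refresh: func() map[string]string {
			return map[string]string{"k8s-agentpool1-42137015-vmss000001": "1"}
		},
	}
	id, _ := c.get("k8s-agentpool1-42137015-vmss000001")
	fmt.Println("instance id:", id)
}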
I0513 08:23:13.661532 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032 to node k8s-agentpool1-42137015-vmss000001 I0513 08:23:13.661555 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032 lun 16 to node k8s-agentpool1-42137015-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-20052deb-f67f-4045-8945-1162fedb5032 false 16})] I0513 08:23:13.661593 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-20052deb-f67f-4045-8945-1162fedb5032 false 16})]) I0513 08:23:13.746769 1 azure_armclient.go:153] Send.sendRequest original response: { "error": { "code": "OperationNotAllowed", "message": "The maximum number of data disks allowed to be attached to a VM of this size is 16.", "target": "dataDisks" } } I0513 08:23:13.746796 1 azure_armclient.go:158] Send.sendRequest: response body does not contain ResourceGroupNotFound error code. Skip retrying regional host I0513 08:23:13.746831 1 azure_armclient.go:320] Received error in sendAsync.send: resourceID: https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-42137015-vmss/virtualMachines/1?api-version=2020-12-01, error: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 409, RawError: { "error": { "code": "OperationNotAllowed", "message": "The maximum number of data disks allowed to be attached to a VM of this size is 16.", "target": "dataDisks" } } I0513 08:23:13.746841 1 azure_armclient.go:614] Received error in put.send: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-42137015-vmss/virtualMachines/1, error: %!s(<nil>) E0513 08:23:13.747000 1 azure_controller_vmss.go:112] azureDisk - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-20052deb-f67f-4045-8945-1162fedb5032 false 16})]) on rg(kubetest-mfxpbga4) vm(k8s-agentpool1-42137015-vmss000001) failed, err: &{false 409 0001-01-01 00:00:00 +0000 UTC { "error": { "code": "OperationNotAllowed", "message": "The maximum number of data disks allowed to be attached to a VM of this size is 16.", "target": "dataDisks" } }} I0513 08:23:13.747038 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-20052deb-f67f-4045-8945-1162fedb5032 
false 16})], &{%!s(bool=false) %!s(int=409) 0001-01-01 00:00:00 +0000 UTC { "error": { "code": "OperationNotAllowed", "message": "The maximum number of data disks allowed to be attached to a VM of this size is 16.", "target": "dataDisks" } }}) returned with %!v(MISSING) E0513 08:23:13.747074 1 controllerserver.go:402] Attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032 to instance k8s-agentpool1-42137015-vmss000001 failed with Retriable: false, RetryAfter: 0s, HTTPStatusCode: 409, RawError: { "error": { "code": "OperationNotAllowed", "message": "The maximum number of data disks allowed to be attached to a VM of this size is 16.", "target": "dataDisks" } } I0513 08:23:13.747147 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=0.085662441 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032" node="k8s-agentpool1-42137015-vmss000001" result_code="failed" E0513 08:23:13.747170 1 utils.go:82] GRPC error: Attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032 to instance k8s-agentpool1-42137015-vmss000001 failed with Retriable: false, RetryAfter: 0s, HTTPStatusCode: 409, RawError: { "error": { "code": "OperationNotAllowed", "message": "The maximum number of data disks allowed to be attached to a VM of this size is 16.", "target": "dataDisks" } } I0513 08:23:13.753156 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 08:23:13.753178 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000001","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-20052deb-f67f-4045-8945-1162fedb5032","csi.storage.k8s.io/pvc/name":"volume-limits-bvgpd-my-volume","csi.storage.k8s.io/pvc/namespace":"volumelimits-736","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032"} I0513 08:23:13.801694 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032 to node k8s-agentpool1-42137015-vmss000001. 
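The attach attempts against vmss000001 fail with HTTP 409 and an ARM body of the form {"error":{"code":"OperationNotAllowed","message":"The maximum number of data disks allowed to be attached to a VM of this size is 16.","target":"dataDisks"}}, and the client marks the result Retriable: false. Below is a small sketch of decoding that body and classifying it under a simplified retry rule; armError and retriable are illustrative only, not the cloud provider's actual types or retry policy.

package main

import (
	"encoding/json"
	"fmt"
)

// armError matches the body shape Azure returns in the 409 responses above.
type armError struct {
	Error struct {
		Code    string `json:"code"`
		Message string `json:"message"`
		Target  string `json:"target"`
	} `json:"error"`
}

// retriable is a rough classification: a 409 OperationNotAllowed about the
// data-disk limit will not resolve by repeating the same request against the
// same VM, which is consistent with the "Retriable: false" in the log.
func retriable(status int, e armError) bool {
	if status == 409 && e.Error.Code == "OperationNotAllowed" {
		return false
	}
	return status >= 500 // toy rule: only retry server-side failures
}

func main() {
	body := []byte(`{"error":{"code":"OperationNotAllowed","message":"The maximum number of data disks allowed to be attached to a VM of this size is 16.","target":"dataDisks"}}`)
	var e armError
	if err := json.Unmarshal(body, &e); err != nil {
		panic(err)
	}
	fmt.Printf("code=%s target=%s retriable=%v\n", e.Error.Code, e.Error.Target, retriable(409, e))
}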
I0513 08:23:13.801758 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000001, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 08:23:13.862652 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032 to node k8s-agentpool1-42137015-vmss000001 I0513 08:23:13.862712 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032 lun 16 to node k8s-agentpool1-42137015-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-20052deb-f67f-4045-8945-1162fedb5032 false 16})] I0513 08:23:13.862761 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-20052deb-f67f-4045-8945-1162fedb5032 false 16})]) I0513 08:23:13.951743 1 azure_armclient.go:153] Send.sendRequest original response: { "error": { "code": "OperationNotAllowed", "message": "The maximum number of data disks allowed to be attached to a VM of this size is 16.", "target": "dataDisks" } } I0513 08:23:13.951767 1 azure_armclient.go:158] Send.sendRequest: response body does not contain ResourceGroupNotFound error code. 
Skip retrying regional host I0513 08:23:13.951785 1 azure_armclient.go:320] Received error in sendAsync.send: resourceID: https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-42137015-vmss/virtualMachines/1?api-version=2020-12-01, error: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 409, RawError: { "error": { "code": "OperationNotAllowed", "message": "The maximum number of data disks allowed to be attached to a VM of this size is 16.", "target": "dataDisks" } } I0513 08:23:13.951794 1 azure_armclient.go:614] Received error in put.send: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-42137015-vmss/virtualMachines/1, error: %!s(<nil>) E0513 08:23:13.951842 1 azure_controller_vmss.go:112] azureDisk - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-20052deb-f67f-4045-8945-1162fedb5032 false 16})]) on rg(kubetest-mfxpbga4) vm(k8s-agentpool1-42137015-vmss000001) failed, err: &{false 409 0001-01-01 00:00:00 +0000 UTC { "error": { "code": "OperationNotAllowed", "message": "The maximum number of data disks allowed to be attached to a VM of this size is 16.", "target": "dataDisks" } }} I0513 08:23:13.951868 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-20052deb-f67f-4045-8945-1162fedb5032 false 16})], &{%!s(bool=false) %!s(int=409) 0001-01-01 00:00:00 +0000 UTC { "error": { "code": "OperationNotAllowed", "message": "The maximum number of data disks allowed to be attached to a VM of this size is 16.", "target": "dataDisks" } }}) returned with %!v(MISSING) E0513 08:23:13.951898 1 controllerserver.go:402] Attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032 to instance k8s-agentpool1-42137015-vmss000001 failed with Retriable: false, RetryAfter: 0s, HTTPStatusCode: 409, RawError: { "error": { "code": "OperationNotAllowed", "message": "The maximum number of data disks allowed to be attached to a VM of this size is 16.", "target": "dataDisks" } } I0513 08:23:13.951929 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=0.150233325 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032" node="k8s-agentpool1-42137015-vmss000001" result_code="failed" E0513 08:23:13.951944 1 utils.go:82] GRPC error: Attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032 to instance k8s-agentpool1-42137015-vmss000001 failed with Retriable: false, RetryAfter: 0s, HTTPStatusCode: 409, 
RawError: { "error": { "code": "OperationNotAllowed", "message": "The maximum number of data disks allowed to be attached to a VM of this size is 16.", "target": "dataDisks" } } I0513 08:23:14.753556 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 08:23:14.753587 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000001","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-20052deb-f67f-4045-8945-1162fedb5032","csi.storage.k8s.io/pvc/name":"volume-limits-bvgpd-my-volume","csi.storage.k8s.io/pvc/namespace":"volumelimits-736","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032"} I0513 08:23:14.777836 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032 to node k8s-agentpool1-42137015-vmss000001. I0513 08:23:14.777898 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000001, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 08:23:14.837923 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032 to node k8s-agentpool1-42137015-vmss000001 I0513 08:23:14.837974 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032 lun 16 to node k8s-agentpool1-42137015-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-20052deb-f67f-4045-8945-1162fedb5032 false 16})] I0513 08:23:14.838022 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-20052deb-f67f-4045-8945-1162fedb5032 false 16})]) I0513 08:23:14.922933 1 azure_armclient.go:153] Send.sendRequest original response: { "error": { "code": "OperationNotAllowed", "message": "The maximum number of data disks allowed to be attached to a VM of this size is 16.", "target": "dataDisks" } } I0513 08:23:14.922968 1 azure_armclient.go:158] Send.sendRequest: response body does not contain ResourceGroupNotFound error code. 
Skip retrying regional host I0513 08:23:14.922990 1 azure_armclient.go:320] Received error in sendAsync.send: resourceID: https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-42137015-vmss/virtualMachines/1?api-version=2020-12-01, error: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 409, RawError: { "error": { "code": "OperationNotAllowed", "message": "The maximum number of data disks allowed to be attached to a VM of this size is 16.", "target": "dataDisks" } } I0513 08:23:14.923025 1 azure_armclient.go:614] Received error in put.send: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-42137015-vmss/virtualMachines/1, error: %!s(<nil>) E0513 08:23:14.923090 1 azure_controller_vmss.go:112] azureDisk - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-20052deb-f67f-4045-8945-1162fedb5032 false 16})]) on rg(kubetest-mfxpbga4) vm(k8s-agentpool1-42137015-vmss000001) failed, err: &{false 409 0001-01-01 00:00:00 +0000 UTC { "error": { "code": "OperationNotAllowed", "message": "The maximum number of data disks allowed to be attached to a VM of this size is 16.", "target": "dataDisks" } }} I0513 08:23:14.923137 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-20052deb-f67f-4045-8945-1162fedb5032 false 16})], &{%!s(bool=false) %!s(int=409) 0001-01-01 00:00:00 +0000 UTC { "error": { "code": "OperationNotAllowed", "message": "The maximum number of data disks allowed to be attached to a VM of this size is 16.", "target": "dataDisks" } }}) returned with %!v(MISSING) E0513 08:23:14.923182 1 controllerserver.go:402] Attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032 to instance k8s-agentpool1-42137015-vmss000001 failed with Retriable: false, RetryAfter: 0s, HTTPStatusCode: 409, RawError: { "error": { "code": "OperationNotAllowed", "message": "The maximum number of data disks allowed to be attached to a VM of this size is 16.", "target": "dataDisks" } } I0513 08:23:14.923219 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=0.145377589 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032" node="k8s-agentpool1-42137015-vmss000001" result_code="failed" E0513 08:23:14.923232 1 utils.go:82] GRPC error: Attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032 to instance k8s-agentpool1-42137015-vmss000001 failed with Retriable: false, RetryAfter: 0s, HTTPStatusCode: 409, 
RawError: { "error": { "code": "OperationNotAllowed", "message": "The maximum number of data disks allowed to be attached to a VM of this size is 16.", "target": "dataDisks" } } I0513 08:23:18.933550 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 08:23:18.933579 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000001","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-20052deb-f67f-4045-8945-1162fedb5032","csi.storage.k8s.io/pvc/name":"volume-limits-bvgpd-my-volume","csi.storage.k8s.io/pvc/namespace":"volumelimits-736","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032"} I0513 08:23:19.023706 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032 to node k8s-agentpool1-42137015-vmss000001. I0513 08:23:19.023763 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000001, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 08:23:19.116015 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032 to node k8s-agentpool1-42137015-vmss000001 I0513 08:23:19.116082 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032 lun 16 to node k8s-agentpool1-42137015-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-20052deb-f67f-4045-8945-1162fedb5032 false 16})] I0513 08:23:19.116133 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-20052deb-f67f-4045-8945-1162fedb5032 false 16})]) I0513 08:23:19.212892 1 azure_armclient.go:153] Send.sendRequest original response: { "error": { "code": "OperationNotAllowed", "message": "The maximum number of data disks allowed to be attached to a VM of this size is 16.", "target": "dataDisks" } } I0513 08:23:19.212926 1 azure_armclient.go:158] Send.sendRequest: response body does not contain ResourceGroupNotFound error code. 
Skip retrying regional host I0513 08:23:19.212949 1 azure_armclient.go:320] Received error in sendAsync.send: resourceID: https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-42137015-vmss/virtualMachines/1?api-version=2020-12-01, error: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 409, RawError: { "error": { "code": "OperationNotAllowed", "message": "The maximum number of data disks allowed to be attached to a VM of this size is 16.", "target": "dataDisks" } } I0513 08:23:19.212957 1 azure_armclient.go:614] Received error in put.send: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-42137015-vmss/virtualMachines/1, error: %!s(<nil>) E0513 08:23:19.213008 1 azure_controller_vmss.go:112] azureDisk - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-20052deb-f67f-4045-8945-1162fedb5032 false 16})]) on rg(kubetest-mfxpbga4) vm(k8s-agentpool1-42137015-vmss000001) failed, err: &{false 409 0001-01-01 00:00:00 +0000 UTC { "error": { "code": "OperationNotAllowed", "message": "The maximum number of data disks allowed to be attached to a VM of this size is 16.", "target": "dataDisks" } }} I0513 08:23:19.213046 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-20052deb-f67f-4045-8945-1162fedb5032 false 16})], &{%!s(bool=false) %!s(int=409) 0001-01-01 00:00:00 +0000 UTC { "error": { "code": "OperationNotAllowed", "message": "The maximum number of data disks allowed to be attached to a VM of this size is 16.", "target": "dataDisks" } }}) returned with %!v(MISSING) E0513 08:23:19.213082 1 controllerserver.go:402] Attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032 to instance k8s-agentpool1-42137015-vmss000001 failed with Retriable: false, RetryAfter: 0s, HTTPStatusCode: 409, RawError: { "error": { "code": "OperationNotAllowed", "message": "The maximum number of data disks allowed to be attached to a VM of this size is 16.", "target": "dataDisks" } } I0513 08:23:19.213125 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=0.189400418 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032" node="k8s-agentpool1-42137015-vmss000001" result_code="failed" E0513 08:23:19.213138 1 utils.go:82] GRPC error: Attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032 to instance k8s-agentpool1-42137015-vmss000001 failed with Retriable: false, RetryAfter: 0s, HTTPStatusCode: 409, 
RawError: { "error": { "code": "OperationNotAllowed", "message": "The maximum number of data disks allowed to be attached to a VM of this size is 16.", "target": "dataDisks" } } I0513 08:23:21.796473 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume ... skipping 48 lines ... I0513 08:23:45.408351 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000002","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-b9710dfd-b210-44ed-a366-90b5727c5867","csi.storage.k8s.io/pvc/name":"test.csi.azure.comdsnlj","csi.storage.k8s.io/pvc/namespace":"provisioning-5900","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-b9710dfd-b210-44ed-a366-90b5727c5867"} I0513 08:23:45.460928 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-b9710dfd-b210-44ed-a366-90b5727c5867 to node k8s-agentpool1-42137015-vmss000002. I0513 08:23:45.460983 1 azure_controller_common.go:453] azureDisk - find disk: lun 1 name pvc-b9710dfd-b210-44ed-a366-90b5727c5867 uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-b9710dfd-b210-44ed-a366-90b5727c5867 I0513 08:23:45.461001 1 controllerserver.go:375] Attach operation is successful. volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-b9710dfd-b210-44ed-a366-90b5727c5867 is already attached to node k8s-agentpool1-42137015-vmss000002 at lun 1. 
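The same ControllerPublishVolume for pvc-20052deb is re-issued at roughly 08:23:13.7, 08:23:13.9, 08:23:14.9 and 08:23:19.2, i.e. with growing gaps; that retry cadence comes from the caller (presumably the CSI external-attacher sidecar) rather than from the driver, and a later attempt around 08:23:47 goes through at lun 15 once another disk has been detached from the node. A generic exponential-backoff sketch under that assumption follows; retryWithBackoff is a made-up helper, and the sidecar's real backoff parameters differ.

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff retries op with exponentially growing delays, the pattern
// suggested by the repeated publish attempts in the log above.
func retryWithBackoff(attempts int, initial time.Duration, op func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v; retrying in %s\n", i+1, err, delay)
		time.Sleep(delay)
		delay *= 2
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(4, 200*time.Millisecond, func() error {
		calls++
		if calls < 4 {
			return errors.New("409 OperationNotAllowed: data disk limit reached")
		}
		return nil // a LUN freed up, so the attach finally succeeds
	})
	fmt.Println("final result:", err)
}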
I0513 08:23:45.461043 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=9.5801e-05 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-b9710dfd-b210-44ed-a366-90b5727c5867" node="k8s-agentpool1-42137015-vmss000002" result_code="succeeded" I0513 08:23:45.461066 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} I0513 08:23:45.662863 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-21a6052e-a4ed-4f79-99db-0ba1e47a98c9:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-21a6052e-a4ed-4f79-99db-0ba1e47a98c9 false 3})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 08:23:47.342357 1 azure_controller_vmss.go:210] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-f49521e7-371f-4fff-ae6c-e282f69f5889:pvc-f49521e7-371f-4fff-ae6c-e282f69f5889]) returned with <nil> I0513 08:23:47.342415 1 azure_controller_common.go:365] azureDisk - detach disk(pvc-f49521e7-371f-4fff-ae6c-e282f69f5889, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-f49521e7-371f-4fff-ae6c-e282f69f5889) succeeded I0513 08:23:47.342441 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-f49521e7-371f-4fff-ae6c-e282f69f5889 from node k8s-agentpool1-42137015-vmss000001 successfully I0513 08:23:47.342472 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=25.545795801 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-f49521e7-371f-4fff-ae6c-e282f69f5889" node="k8s-agentpool1-42137015-vmss000001" result_code="succeeded" I0513 08:23:47.342493 1 utils.go:84] GRPC response: {} I0513 08:23:47.342575 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000001, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 08:23:47.405797 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032 lun 15 to node k8s-agentpool1-42137015-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-20052deb-f67f-4045-8945-1162fedb5032 false 15})] I0513 08:23:47.405876 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk 
list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-20052deb-f67f-4045-8945-1162fedb5032 false 15})]) I0513 08:23:47.670461 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-20052deb-f67f-4045-8945-1162fedb5032:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-20052deb-f67f-4045-8945-1162fedb5032 false 15})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 08:23:47.980050 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-38d28a4e-3986-4b80-89ef-d1c4af3e10fc attached to node k8s-agentpool1-42137015-vmss000000. I0513 08:23:47.980089 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-38d28a4e-3986-4b80-89ef-d1c4af3e10fc to node k8s-agentpool1-42137015-vmss000000 successfully I0513 08:23:47.980119 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=58.53639482 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-38d28a4e-3986-4b80-89ef-d1c4af3e10fc" node="k8s-agentpool1-42137015-vmss000000" result_code="succeeded" I0513 08:23:47.980132 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0513 08:23:47.980206 1 azure_controller_common.go:341] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-2bce72bf-e50a-431b-b85c-e48301f8f7bf from node k8s-agentpool1-42137015-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-2bce72bf-e50a-431b-b85c-e48301f8f7bf:pvc-2bce72bf-e50a-431b-b85c-e48301f8f7bf /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-a699729d-bcf5-4770-89fa-830b3c743dfa:pvc-a699729d-bcf5-4770-89fa-830b3c743dfa /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-c912beda-4a12-445a-8289-efed36ad2787:pvc-c912beda-4a12-445a-8289-efed36ad2787] I0513 08:23:47.980287 1 azure_controller_vmss.go:162] azureDisk - detach disk: name pvc-c912beda-4a12-445a-8289-efed36ad2787 uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-c912beda-4a12-445a-8289-efed36ad2787 ... skipping 78 lines ... 
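The detach path just above works on a diskMap: the pending removals for vmss000000 (pvc-2bce72bf, pvc-a699729d and pvc-c912beda) are collected and taken out of the VM's data-disk list in a single VMSS update, the same way one earlier update detached pvc-c034309c and pvc-e4a4d0c8 together. A toy version of that batching; dataDisk and detachDisks are invented names, and the lowercased map keys mirror the lowercased resource IDs in the log.

package main

import (
	"fmt"
	"strings"
)

// dataDisk is a cut-down stand-in for a VMSS VM data-disk entry.
type dataDisk struct {
	URI string
	Lun int
}

// detachDisks drops every disk whose URI appears in diskMap from the VM's
// data-disk list in one pass, so a single VM update can detach several PVC disks.
func detachDisks(disks []dataDisk, diskMap map[string]string) []dataDisk {
	kept := disks[:0]
	for _, d := range disks {
		if _, remove := diskMap[strings.ToLower(d.URI)]; !remove {
			kept = append(kept, d)
		}
	}
	return kept
}

func main() {
	disks := []dataDisk{ // hypothetical URIs
		{URI: "/subscriptions/.../disks/pvc-a", Lun: 0},
		{URI: "/subscriptions/.../disks/pvc-b", Lun: 1},
		{URI: "/subscriptions/.../disks/pvc-c", Lun: 2},
	}
	diskMap := map[string]string{
		"/subscriptions/.../disks/pvc-a": "pvc-a",
		"/subscriptions/.../disks/pvc-c": "pvc-c",
	}
	remaining := detachDisks(disks, diskMap)
	fmt.Println(remaining) // only pvc-b is left; one VM update covered both detaches
}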
I0513 08:24:22.341156 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-f49521e7-371f-4fff-ae6c-e282f69f5889) returned with <nil> I0513 08:24:22.341185 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=5.234628491 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-f49521e7-371f-4fff-ae6c-e282f69f5889" result_code="succeeded" I0513 08:24:22.341201 1 utils.go:84] GRPC response: {} I0513 08:24:25.236701 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 08:24:25.236725 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-b9710dfd-b210-44ed-a366-90b5727c5867"} I0513 08:24:25.236823 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-b9710dfd-b210-44ed-a366-90b5727c5867) I0513 08:24:25.236864 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-b9710dfd-b210-44ed-a366-90b5727c5867) returned with failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-b9710dfd-b210-44ed-a366-90b5727c5867) since it's in attaching or detaching state I0513 08:24:25.236923 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=6.3101e-05 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-b9710dfd-b210-44ed-a366-90b5727c5867" result_code="failed" E0513 08:24:25.236939 1 utils.go:82] GRPC error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-b9710dfd-b210-44ed-a366-90b5727c5867) since it's in attaching or detaching state I0513 08:24:26.633434 1 azure_controller_vmss.go:210] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-b9710dfd-b210-44ed-a366-90b5727c5867:pvc-b9710dfd-b210-44ed-a366-90b5727c5867]) returned with <nil> I0513 08:24:26.633488 1 azure_controller_common.go:365] azureDisk - detach disk(pvc-b9710dfd-b210-44ed-a366-90b5727c5867, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-b9710dfd-b210-44ed-a366-90b5727c5867) succeeded I0513 08:24:26.633500 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-b9710dfd-b210-44ed-a366-90b5727c5867 from node k8s-agentpool1-42137015-vmss000002 successfully I0513 08:24:26.633529 1 azure_metrics.go:112] "Observed Request Latency" 
latency_seconds=5.34344012 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-b9710dfd-b210-44ed-a366-90b5727c5867" node="k8s-agentpool1-42137015-vmss000002" result_code="succeeded" I0513 08:24:26.633544 1 utils.go:84] GRPC response: {} I0513 08:24:27.335992 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume ... skipping 20 lines ... I0513 08:24:36.874435 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000002, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 08:24:36.912824 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-0fdfa68b-98c0-425a-b97d-a44d9efd3ea8 to node k8s-agentpool1-42137015-vmss000002. I0513 08:24:36.989872 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-7c554b0a-e7b3-4dc1-8d6b-332985951eab to node k8s-agentpool1-42137015-vmss000002 I0513 08:24:36.989927 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-7c554b0a-e7b3-4dc1-8d6b-332985951eab lun 0 to node k8s-agentpool1-42137015-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-7c554b0a-e7b3-4dc1-8d6b-332985951eab:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-7c554b0a-e7b3-4dc1-8d6b-332985951eab false 0})] I0513 08:24:36.989969 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-7c554b0a-e7b3-4dc1-8d6b-332985951eab:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-7c554b0a-e7b3-4dc1-8d6b-332985951eab false 0})]) I0513 08:24:36.990003 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-0fdfa68b-98c0-425a-b97d-a44d9efd3ea8 to node k8s-agentpool1-42137015-vmss000002 I0513 08:24:37.221051 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-7c554b0a-e7b3-4dc1-8d6b-332985951eab:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-7c554b0a-e7b3-4dc1-8d6b-332985951eab false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 08:24:41.682148 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 08:24:41.682187 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000002","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-21a6052e-a4ed-4f79-99db-0ba1e47a98c9"} I0513 08:24:41.682334 1 controllerserver.go:444] Trying to detach volume 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-21a6052e-a4ed-4f79-99db-0ba1e47a98c9 from node k8s-agentpool1-42137015-vmss000002 I0513 08:24:41.682366 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000002, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 08:24:47.393682 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-7c554b0a-e7b3-4dc1-8d6b-332985951eab attached to node k8s-agentpool1-42137015-vmss000002. I0513 08:24:47.393723 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-7c554b0a-e7b3-4dc1-8d6b-332985951eab to node k8s-agentpool1-42137015-vmss000002 successfully I0513 08:24:47.393761 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=10.519349099 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-7c554b0a-e7b3-4dc1-8d6b-332985951eab" node="k8s-agentpool1-42137015-vmss000002" result_code="succeeded" I0513 08:24:47.393777 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0513 08:24:47.393806 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-0fdfa68b-98c0-425a-b97d-a44d9efd3ea8 lun 1 to node k8s-agentpool1-42137015-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-0fdfa68b-98c0-425a-b97d-a44d9efd3ea8:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0fdfa68b-98c0-425a-b97d-a44d9efd3ea8 false 1})] I0513 08:24:47.393861 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-0fdfa68b-98c0-425a-b97d-a44d9efd3ea8:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0fdfa68b-98c0-425a-b97d-a44d9efd3ea8 false 1})]) I0513 08:24:47.722691 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-0fdfa68b-98c0-425a-b97d-a44d9efd3ea8:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0fdfa68b-98c0-425a-b97d-a44d9efd3ea8 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 08:24:51.980152 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 08:24:51.980184 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000002","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-964ea3c9-6ae9-49f2-b4c4-ff4d841ef55a"} I0513 08:24:51.980300 1 controllerserver.go:444] Trying to detach volume 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-964ea3c9-6ae9-49f2-b4c4-ff4d841ef55a from node k8s-agentpool1-42137015-vmss000002 I0513 08:24:51.980326 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000002, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 08:24:57.238708 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 08:24:57.238733 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-b9710dfd-b210-44ed-a366-90b5727c5867"} ... skipping 96 lines ... I0513 08:25:18.688733 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-5b7b818f-2e95-49e9-ada8-096597bd0955 lun 2 to node k8s-agentpool1-42137015-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-5b7b818f-2e95-49e9-ada8-096597bd0955:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5b7b818f-2e95-49e9-ada8-096597bd0955 false 2}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-9530f0d3-7cae-4ade-8a56-0d2d1b317806:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-9530f0d3-7cae-4ade-8a56-0d2d1b317806 false 4}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-d5e857f7-d7c5-48a4-b7d8-83c03adbdf88:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-d5e857f7-d7c5-48a4-b7d8-83c03adbdf88 false 3})] I0513 08:25:18.688801 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-5b7b818f-2e95-49e9-ada8-096597bd0955:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5b7b818f-2e95-49e9-ada8-096597bd0955 false 2}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-9530f0d3-7cae-4ade-8a56-0d2d1b317806:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-9530f0d3-7cae-4ade-8a56-0d2d1b317806 false 4}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-d5e857f7-d7c5-48a4-b7d8-83c03adbdf88:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-d5e857f7-d7c5-48a4-b7d8-83c03adbdf88 false 3})]) I0513 08:25:18.697789 1 azure_managedDiskController.go:303] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-964ea3c9-6ae9-49f2-b4c4-ff4d841ef55a I0513 08:25:18.697809 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-964ea3c9-6ae9-49f2-b4c4-ff4d841ef55a) returned with <nil> I0513 08:25:18.697837 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=5.295781601 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" 
volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-964ea3c9-6ae9-49f2-b4c4-ff4d841ef55a" result_code="succeeded" I0513 08:25:18.697853 1 utils.go:84] GRPC response: {} I0513 08:25:18.923288 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-5b7b818f-2e95-49e9-ada8-096597bd0955:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5b7b818f-2e95-49e9-ada8-096597bd0955 false 2}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-9530f0d3-7cae-4ade-8a56-0d2d1b317806:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-9530f0d3-7cae-4ade-8a56-0d2d1b317806 false 4}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-d5e857f7-d7c5-48a4-b7d8-83c03adbdf88:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-d5e857f7-d7c5-48a4-b7d8-83c03adbdf88 false 3})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 08:25:23.509883 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 08:25:23.509909 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-3bda934a-3eb9-48ce-9164-9167700d1f0f"} I0513 08:25:23.509990 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-3bda934a-3eb9-48ce-9164-9167700d1f0f) I0513 08:25:23.832488 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteSnapshot I0513 08:25:23.832514 1 utils.go:78] GRPC request: {"snapshot_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/snapshots/snapshot-0bf29bcd-4b47-491a-bfda-7530bf244dee"} I0513 08:25:23.832627 1 controllerserver.go:899] begin to delete snapshot(snapshot-0bf29bcd-4b47-491a-bfda-7530bf244dee) under rg(kubetest-mfxpbga4) ... skipping 109 lines ... 
I0513 08:26:15.981293 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=20.406895049 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1442eeaa-e6c5-43df-91f9-992e2dcfdd43" node="k8s-agentpool1-42137015-vmss000001" result_code="succeeded" I0513 08:26:15.981305 1 utils.go:84] GRPC response: {} I0513 08:26:15.981382 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000001, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 08:26:15.991585 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 08:26:15.991605 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000001","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1442eeaa-e6c5-43df-91f9-992e2dcfdd43"} I0513 08:26:15.991700 1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1442eeaa-e6c5-43df-91f9-992e2dcfdd43 from node k8s-agentpool1-42137015-vmss000001 I0513 08:26:16.008490 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-5b7b818f-2e95-49e9-ada8-096597bd0955:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5b7b818f-2e95-49e9-ada8-096597bd0955 false 0}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-c9bfe07c-8910-42d1-9803-658bc8132f32:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c9bfe07c-8910-42d1-9803-658bc8132f32 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 08:26:16.054903 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-13d7c803-31d6-4727-94c9-45173afa8050 lun 14 to node k8s-agentpool1-42137015-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-13d7c803-31d6-4727-94c9-45173afa8050:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-13d7c803-31d6-4727-94c9-45173afa8050 false 14})] I0513 08:26:16.054978 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-13d7c803-31d6-4727-94c9-45173afa8050:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-13d7c803-31d6-4727-94c9-45173afa8050 false 14})]) I0513 08:26:16.306618 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-13d7c803-31d6-4727-94c9-45173afa8050:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-13d7c803-31d6-4727-94c9-45173afa8050 false 14})], %!s(*retry.Error=<nil>)) returned with 
%!v(MISSING) I0513 08:26:22.565481 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 08:26:22.565505 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1442eeaa-e6c5-43df-91f9-992e2dcfdd43"} I0513 08:26:22.565593 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1442eeaa-e6c5-43df-91f9-992e2dcfdd43) I0513 08:26:27.815034 1 azure_managedDiskController.go:303] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1442eeaa-e6c5-43df-91f9-992e2dcfdd43 I0513 08:26:27.815062 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1442eeaa-e6c5-43df-91f9-992e2dcfdd43) returned with <nil> I0513 08:26:27.815092 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=5.249484754 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1442eeaa-e6c5-43df-91f9-992e2dcfdd43" result_code="succeeded" ... skipping 73 lines ... I0513 08:27:04.478348 1 azure_controller_common.go:365] azureDisk - detach disk(pvc-9530f0d3-7cae-4ade-8a56-0d2d1b317806, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-9530f0d3-7cae-4ade-8a56-0d2d1b317806) succeeded I0513 08:27:04.478374 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-9530f0d3-7cae-4ade-8a56-0d2d1b317806 from node k8s-agentpool1-42137015-vmss000002 successfully I0513 08:27:04.478411 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=71.028641312 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-9530f0d3-7cae-4ade-8a56-0d2d1b317806" node="k8s-agentpool1-42137015-vmss000002" result_code="succeeded" I0513 08:27:04.478427 1 utils.go:84] GRPC response: {} I0513 08:27:04.478442 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e463b1b7-68e8-4a70-b84d-94ee48ea2f9f lun 0 to node k8s-agentpool1-42137015-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-113808bd-c4b2-4eb2-8e56-9f8366f5ae0d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-113808bd-c4b2-4eb2-8e56-9f8366f5ae0d false 1}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-e463b1b7-68e8-4a70-b84d-94ee48ea2f9f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e463b1b7-68e8-4a70-b84d-94ee48ea2f9f false 0})] I0513 08:27:04.478489 1 
azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-113808bd-c4b2-4eb2-8e56-9f8366f5ae0d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-113808bd-c4b2-4eb2-8e56-9f8366f5ae0d false 1}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-e463b1b7-68e8-4a70-b84d-94ee48ea2f9f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e463b1b7-68e8-4a70-b84d-94ee48ea2f9f false 0})]) I0513 08:27:04.706962 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-113808bd-c4b2-4eb2-8e56-9f8366f5ae0d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-113808bd-c4b2-4eb2-8e56-9f8366f5ae0d false 1}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-e463b1b7-68e8-4a70-b84d-94ee48ea2f9f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e463b1b7-68e8-4a70-b84d-94ee48ea2f9f false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 08:27:07.010894 1 azure_controller_vmss.go:210] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-1442eeaa-e6c5-43df-91f9-992e2dcfdd43:pvc-1442eeaa-e6c5-43df-91f9-992e2dcfdd43]) returned with <nil> I0513 08:27:07.010946 1 azure_controller_common.go:365] azureDisk - detach disk(pvc-1442eeaa-e6c5-43df-91f9-992e2dcfdd43, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1442eeaa-e6c5-43df-91f9-992e2dcfdd43) succeeded I0513 08:27:07.010960 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1442eeaa-e6c5-43df-91f9-992e2dcfdd43 from node k8s-agentpool1-42137015-vmss000001 successfully I0513 08:27:07.010987 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=51.019269543 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1442eeaa-e6c5-43df-91f9-992e2dcfdd43" node="k8s-agentpool1-42137015-vmss000001" result_code="succeeded" I0513 08:27:07.011000 1 utils.go:84] GRPC response: {} I0513 08:27:08.349422 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume ... skipping 3 lines ... 
I0513 08:27:08.405908 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-9530f0d3-7cae-4ade-8a56-0d2d1b317806 lun 2 to node k8s-agentpool1-42137015-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-9530f0d3-7cae-4ade-8a56-0d2d1b317806:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-9530f0d3-7cae-4ade-8a56-0d2d1b317806 false 2})] I0513 08:27:08.405936 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-9530f0d3-7cae-4ade-8a56-0d2d1b317806:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-9530f0d3-7cae-4ade-8a56-0d2d1b317806 false 2})]) I0513 08:27:08.487230 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume I0513 08:27:08.487261 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":5368709120},"name":"pvc-6539cb58-3d58-4b27-bce6-99670d6dfbd0","parameters":{"csi.storage.k8s.io/pv/name":"pvc-6539cb58-3d58-4b27-bce6-99670d6dfbd0","csi.storage.k8s.io/pvc/name":"volume-limits-exceeded-fddg6-my-volume","csi.storage.k8s.io/pvc/namespace":"volumelimits-736"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]} I0513 08:27:08.487412 1 controllerserver.go:174] begin to create azure disk(pvc-6539cb58-3d58-4b27-bce6-99670d6dfbd0) account type(StandardSSD_LRS) rg(kubetest-mfxpbga4) location(westeurope) size(5) diskZone() maxShares(0) I0513 08:27:08.487433 1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-6539cb58-3d58-4b27-bce6-99670d6dfbd0 StorageAccountType:StandardSSD_LRS Size:5 I0513 08:27:08.618266 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-9530f0d3-7cae-4ade-8a56-0d2d1b317806:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-9530f0d3-7cae-4ade-8a56-0d2d1b317806 false 2})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 08:27:08.648625 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 08:27:08.648648 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000000","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-d5e857f7-d7c5-48a4-b7d8-83c03adbdf88","csi.storage.k8s.io/pvc/name":"test.csi.azure.comb9vg4","csi.storage.k8s.io/pvc/namespace":"multivolume-2678","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-d5e857f7-d7c5-48a4-b7d8-83c03adbdf88"} I0513 08:27:08.689358 1 controllerserver.go:355] GetDiskLun returned: <nil>. 
Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-d5e857f7-d7c5-48a4-b7d8-83c03adbdf88 to node k8s-agentpool1-42137015-vmss000000. I0513 08:27:08.689406 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000000, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 08:27:08.790784 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-d5e857f7-d7c5-48a4-b7d8-83c03adbdf88 to node k8s-agentpool1-42137015-vmss000000 I0513 08:27:10.943659 1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-6539cb58-3d58-4b27-bce6-99670d6dfbd0 StorageAccountType:StandardSSD_LRS Size:5 ... skipping 13 lines ... I0513 08:27:18.734146 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-9530f0d3-7cae-4ade-8a56-0d2d1b317806 attached to node k8s-agentpool1-42137015-vmss000000. I0513 08:27:18.734181 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-9530f0d3-7cae-4ade-8a56-0d2d1b317806 to node k8s-agentpool1-42137015-vmss000000 successfully I0513 08:27:18.734231 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=10.32838558 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-9530f0d3-7cae-4ade-8a56-0d2d1b317806" node="k8s-agentpool1-42137015-vmss000000" result_code="succeeded" I0513 08:27:18.734246 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"2"}} I0513 08:27:18.734282 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-d5e857f7-d7c5-48a4-b7d8-83c03adbdf88 lun 3 to node k8s-agentpool1-42137015-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-d5e857f7-d7c5-48a4-b7d8-83c03adbdf88:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-d5e857f7-d7c5-48a4-b7d8-83c03adbdf88 false 3})] I0513 08:27:18.734332 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-d5e857f7-d7c5-48a4-b7d8-83c03adbdf88:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-d5e857f7-d7c5-48a4-b7d8-83c03adbdf88 false 3})]) I0513 08:27:18.926123 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-d5e857f7-d7c5-48a4-b7d8-83c03adbdf88:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-d5e857f7-d7c5-48a4-b7d8-83c03adbdf88 false 3})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 08:27:19.941486 1 
controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e463b1b7-68e8-4a70-b84d-94ee48ea2f9f attached to node k8s-agentpool1-42137015-vmss000002. I0513 08:27:19.941522 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e463b1b7-68e8-4a70-b84d-94ee48ea2f9f to node k8s-agentpool1-42137015-vmss000002 successfully I0513 08:27:19.941560 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=43.395143865 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e463b1b7-68e8-4a70-b84d-94ee48ea2f9f" node="k8s-agentpool1-42137015-vmss000002" result_code="succeeded" I0513 08:27:19.941579 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0513 08:27:19.941624 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-113808bd-c4b2-4eb2-8e56-9f8366f5ae0d lun 1 to node k8s-agentpool1-42137015-vmss000002, diskMap: map[] I0513 08:27:19.941657 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-113808bd-c4b2-4eb2-8e56-9f8366f5ae0d attached to node k8s-agentpool1-42137015-vmss000002. ... skipping 5836 lines ... 
I0513 09:00:33.258646 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-2069b069-318e-4401-a24b-090e0cf714a2 from node k8s-agentpool1-42137015-vmss000000 successfully I0513 09:00:33.258687 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.402795642 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-2069b069-318e-4401-a24b-090e0cf714a2" node="k8s-agentpool1-42137015-vmss000000" result_code="succeeded" I0513 09:00:33.258709 1 utils.go:84] GRPC response: {} I0513 09:00:33.293076 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000000, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 09:00:33.428794 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-af615fef-2dc3-45fc-b54e-763ced270b9d lun 1 to node k8s-agentpool1-42137015-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-af615fef-2dc3-45fc-b54e-763ced270b9d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-af615fef-2dc3-45fc-b54e-763ced270b9d false 1})] I0513 09:00:33.428873 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-af615fef-2dc3-45fc-b54e-763ced270b9d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-af615fef-2dc3-45fc-b54e-763ced270b9d false 1})]) I0513 09:00:33.661909 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-af615fef-2dc3-45fc-b54e-763ced270b9d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-af615fef-2dc3-45fc-b54e-763ced270b9d false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 09:00:33.741469 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-3208e85c-9a1f-4212-bb44-157c8370daa6 attached to node k8s-agentpool1-42137015-vmss000001. 
I0513 09:00:33.741523 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-3208e85c-9a1f-4212-bb44-157c8370daa6 to node k8s-agentpool1-42137015-vmss000001 successfully I0513 09:00:33.741576 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.635470356 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-3208e85c-9a1f-4212-bb44-157c8370daa6" node="k8s-agentpool1-42137015-vmss000001" result_code="succeeded" I0513 09:00:33.741594 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0513 09:00:33.749022 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 09:00:33.749046 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000001","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-3208e85c-9a1f-4212-bb44-157c8370daa6","csi.storage.k8s.io/pvc/name":"test.csi.azure.combcth8","csi.storage.k8s.io/pvc/namespace":"multivolume-7207","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-3208e85c-9a1f-4212-bb44-157c8370daa6"} ... skipping 17 lines ... I0513 09:00:38.301346 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 09:00:38.301378 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000001","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-6024b07a-6bfe-4a09-a8e8-4ee64c1b3d4a","csi.storage.k8s.io/pvc/name":"test.csi.azure.compqpcp","csi.storage.k8s.io/pvc/namespace":"multivolume-7207","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-6024b07a-6bfe-4a09-a8e8-4ee64c1b3d4a"} I0513 09:00:38.352018 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-6024b07a-6bfe-4a09-a8e8-4ee64c1b3d4a to node k8s-agentpool1-42137015-vmss000001. 
I0513 09:00:38.352082 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-6024b07a-6bfe-4a09-a8e8-4ee64c1b3d4a to node k8s-agentpool1-42137015-vmss000001 I0513 09:00:38.352108 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-6024b07a-6bfe-4a09-a8e8-4ee64c1b3d4a lun 1 to node k8s-agentpool1-42137015-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-6024b07a-6bfe-4a09-a8e8-4ee64c1b3d4a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-6024b07a-6bfe-4a09-a8e8-4ee64c1b3d4a false 1})] I0513 09:00:38.352143 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-6024b07a-6bfe-4a09-a8e8-4ee64c1b3d4a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-6024b07a-6bfe-4a09-a8e8-4ee64c1b3d4a false 1})]) I0513 09:00:38.634958 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-6024b07a-6bfe-4a09-a8e8-4ee64c1b3d4a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-6024b07a-6bfe-4a09-a8e8-4ee64c1b3d4a false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 09:00:39.103174 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 09:00:39.103203 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-48cfa03d-c820-46cf-bd70-a2dae6d8cc38"} I0513 09:00:39.103280 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-48cfa03d-c820-46cf-bd70-a2dae6d8cc38) I0513 09:00:42.200566 1 azure_armclient.go:135] response is empty I0513 09:00:42.200613 1 azure_armclient.go:697] Received error in deleteAsync.send: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-c69036cb-66be-43e9-9809-4c43049c75cb, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context deadline exceeded I0513 09:00:42.200628 1 azure_armclient.go:649] Received error in delete.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-c69036cb-66be-43e9-9809-4c43049c75cb, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context deadline exceeded I0513 09:00:42.200672 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-c69036cb-66be-43e9-9809-4c43049c75cb) returned with Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context deadline exceeded I0513 09:00:42.200714 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.000634417 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-mfxpbga4" 
subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-c69036cb-66be-43e9-9809-4c43049c75cb" result_code="failed" E0513 09:00:42.200731 1 utils.go:82] GRPC error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context deadline exceeded I0513 09:00:43.941478 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 09:00:43.941501 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-2069b069-318e-4401-a24b-090e0cf714a2"} I0513 09:00:43.941579 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-2069b069-318e-4401-a24b-090e0cf714a2) I0513 09:00:45.163975 1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-fc63cf4e-2d4d-4157-90f2-07fca152df50 StorageAccountType:StandardSSD_LRS Size:5 I0513 09:00:45.164021 1 controllerserver.go:258] create azure disk(pvc-fc63cf4e-2d4d-4157-90f2-07fca152df50) account type(StandardSSD_LRS) rg(kubetest-mfxpbga4) location(westeurope) size(5) tags(map[kubernetes.io-created-for-pv-name:pvc-fc63cf4e-2d4d-4157-90f2-07fca152df50 kubernetes.io-created-for-pvc-name:test.csi.azure.comclzlk kubernetes.io-created-for-pvc-namespace:provisioning-2491]) successfully I0513 09:00:45.164070 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=12.049016231 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-fc63cf4e-2d4d-4157-90f2-07fca152df50" result_code="succeeded" ... skipping 39 lines ... 
I0513 09:00:58.178618 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-6024b07a-6bfe-4a09-a8e8-4ee64c1b3d4a from node k8s-agentpool1-42137015-vmss000002 successfully I0513 09:00:58.178656 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=20.831135301 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-6024b07a-6bfe-4a09-a8e8-4ee64c1b3d4a" node="k8s-agentpool1-42137015-vmss000002" result_code="succeeded" I0513 09:00:58.178672 1 utils.go:84] GRPC response: {} I0513 09:00:58.178760 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000002, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 09:00:58.284157 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-fc63cf4e-2d4d-4157-90f2-07fca152df50 lun 0 to node k8s-agentpool1-42137015-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-fc63cf4e-2d4d-4157-90f2-07fca152df50:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-fc63cf4e-2d4d-4157-90f2-07fca152df50 false 0})] I0513 09:00:58.284217 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-fc63cf4e-2d4d-4157-90f2-07fca152df50:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-fc63cf4e-2d4d-4157-90f2-07fca152df50 false 0})]) I0513 09:00:58.550824 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-fc63cf4e-2d4d-4157-90f2-07fca152df50:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-fc63cf4e-2d4d-4157-90f2-07fca152df50 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 09:00:58.755193 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 09:00:58.755229 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000000","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-af615fef-2dc3-45fc-b54e-763ced270b9d"} I0513 09:00:58.755358 1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-af615fef-2dc3-45fc-b54e-763ced270b9d from node k8s-agentpool1-42137015-vmss000000 I0513 09:00:58.755407 1 azure_controller_common.go:341] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-af615fef-2dc3-45fc-b54e-763ced270b9d from node k8s-agentpool1-42137015-vmss000000, diskMap: 
map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-af615fef-2dc3-45fc-b54e-763ced270b9d:pvc-af615fef-2dc3-45fc-b54e-763ced270b9d] I0513 09:00:58.755440 1 azure_controller_vmss.go:162] azureDisk - detach disk: name pvc-af615fef-2dc3-45fc-b54e-763ced270b9d uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-af615fef-2dc3-45fc-b54e-763ced270b9d I0513 09:00:58.755447 1 azure_controller_vmss.go:197] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-af615fef-2dc3-45fc-b54e-763ced270b9d:pvc-af615fef-2dc3-45fc-b54e-763ced270b9d]) I0513 09:00:58.941828 1 azure_armclient.go:153] Send.sendRequest original response: {"error":{"code":"InternalServerError","message":"Encountered internal server error. Diagnostic information: timestamp '20220513T090054Z', subscription id '0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e', tracking id '0abb65d5-8e3e-49a6-9cac-aa68fa8c4120', request correlation id '0abb65d5-8e3e-49a6-9cac-aa68fa8c4120'."}} I0513 09:00:58.941870 1 azure_armclient.go:158] Send.sendRequest: response body does not contain ResourceGroupNotFound error code. Skip retrying regional host I0513 09:00:58.941891 1 azure_armclient.go:697] Received error in deleteAsync.send: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-2069b069-318e-4401-a24b-090e0cf714a2, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: 500, RawError: context deadline exceeded I0513 09:00:58.941908 1 azure_armclient.go:649] Received error in delete.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-2069b069-318e-4401-a24b-090e0cf714a2, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: 500, RawError: context deadline exceeded I0513 09:00:58.941952 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-2069b069-318e-4401-a24b-090e0cf714a2) returned with Retriable: true, RetryAfter: 0s, HTTPStatusCode: 500, RawError: context deadline exceeded I0513 09:00:58.941991 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.000386597 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-2069b069-318e-4401-a24b-090e0cf714a2" result_code="failed" E0513 09:00:58.942010 1 utils.go:82] GRPC error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: 500, RawError: context deadline exceeded I0513 09:01:06.641550 1 azure_armclient.go:153] Send.sendRequest original response: {"error":{"code":"InternalServerError","message":"Encountered internal server error. 
Diagnostic information: timestamp '20220513T090106Z', subscription id '0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e', tracking id 'ec450cdb-d8c5-4a58-bef6-7cf80317648f', request correlation id 'ec450cdb-d8c5-4a58-bef6-7cf80317648f'."}} I0513 09:01:06.641605 1 azure_armclient.go:158] Send.sendRequest: response body does not contain ResourceGroupNotFound error code. Skip retrying regional host I0513 09:01:06.641643 1 azure_armclient.go:320] Received error in sendAsync.send: resourceID: https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67?api-version=2021-04-01, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: 500, RawError: context deadline exceeded I0513 09:01:06.641656 1 azure_armclient.go:511] Received error in put.send: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: 500, RawError: context deadline exceeded I0513 09:01:06.641670 1 azure_diskclient.go:201] Received error in disk.put.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: 500, RawError: context deadline exceeded I0513 09:01:06.641746 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.000066898 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="" result_code="failed" E0513 09:01:06.641774 1 utils.go:82] GRPC error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: 500, RawError: context deadline exceeded I0513 09:01:07.643397 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume I0513 09:01:07.643428 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":5368709120},"name":"pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67","parameters":{"csi.storage.k8s.io/pv/name":"pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67","csi.storage.k8s.io/pvc/name":"test.csi.azure.commzgt4","csi.storage.k8s.io/pvc/namespace":"snapshotting-7330"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]} I0513 09:01:07.643563 1 controllerserver.go:174] begin to create azure disk(pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67) account type(StandardSSD_LRS) rg(kubetest-mfxpbga4) location(westeurope) size(5) diskZone() maxShares(0) I0513 09:01:07.643584 1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67 StorageAccountType:StandardSSD_LRS Size:5 I0513 09:01:08.931484 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 09:01:08.931509 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000000","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-646a32b4-01c4-42ca-a1ca-5cbd9d87c89d"} ... skipping 31 lines ... 
I0513 09:01:14.224297 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000000, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 09:01:14.288044 1 azure_controller_vmss.go:162] azureDisk - detach disk: name pvc-646a32b4-01c4-42ca-a1ca-5cbd9d87c89d uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-646a32b4-01c4-42ca-a1ca-5cbd9d87c89d I0513 09:01:14.288070 1 azure_controller_vmss.go:197] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-646a32b4-01c4-42ca-a1ca-5cbd9d87c89d:pvc-646a32b4-01c4-42ca-a1ca-5cbd9d87c89d]) I0513 09:01:15.004038 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 09:01:15.004070 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-af615fef-2dc3-45fc-b54e-763ced270b9d"} I0513 09:01:15.004165 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-af615fef-2dc3-45fc-b54e-763ced270b9d) I0513 09:01:22.643537 1 azure_armclient.go:153] Send.sendRequest original response: {"error":{"code":"InternalServerError","message":"Encountered internal server error. Diagnostic information: timestamp '20220513T090117Z', subscription id '0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e', tracking id 'fb898d03-269d-40a6-a8ea-18bf998ea45c', request correlation id 'fb898d03-269d-40a6-a8ea-18bf998ea45c'."}} I0513 09:01:22.643573 1 azure_armclient.go:158] Send.sendRequest: response body does not contain ResourceGroupNotFound error code. 
Skip retrying regional host I0513 09:01:22.643607 1 azure_armclient.go:320] Received error in sendAsync.send: resourceID: https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67?api-version=2021-04-01, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: 500, RawError: context canceled I0513 09:01:22.643621 1 azure_armclient.go:511] Received error in put.send: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: 500, RawError: context canceled I0513 09:01:22.643633 1 azure_diskclient.go:201] Received error in disk.put.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: 500, RawError: context canceled I0513 09:01:22.643699 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.000094971 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="" result_code="failed" E0513 09:01:22.643720 1 utils.go:82] GRPC error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: 500, RawError: context canceled I0513 09:01:24.619602 1 azure_diskclient.go:138] Received error in disk.get.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-c69036cb-66be-43e9-9809-4c43049c75cb, error: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 404, RawError: { "error": { "code": "NotFound", "message": "Disk pvc-c69036cb-66be-43e9-9809-4c43049c75cb is not found." } } I0513 09:01:24.619694 1 azure_managedDiskController.go:285] azureDisk - disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-c69036cb-66be-43e9-9809-4c43049c75cb) is already deleted I0513 09:01:24.619715 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-c69036cb-66be-43e9-9809-4c43049c75cb) returned with <nil> ... skipping 30 lines ... 
I0513 09:01:29.629538 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000000","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-646a32b4-01c4-42ca-a1ca-5cbd9d87c89d"} I0513 09:01:29.629624 1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-646a32b4-01c4-42ca-a1ca-5cbd9d87c89d from node k8s-agentpool1-42137015-vmss000000 I0513 09:01:29.629647 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000000, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 09:01:29.742076 1 azure_controller_common.go:341] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-646a32b4-01c4-42ca-a1ca-5cbd9d87c89d from node k8s-agentpool1-42137015-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-646a32b4-01c4-42ca-a1ca-5cbd9d87c89d:pvc-646a32b4-01c4-42ca-a1ca-5cbd9d87c89d] E0513 09:01:29.742121 1 azure_controller_vmss.go:171] detach azure disk on node(k8s-agentpool1-42137015-vmss000000): disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-646a32b4-01c4-42ca-a1ca-5cbd9d87c89d:pvc-646a32b4-01c4-42ca-a1ca-5cbd9d87c89d]) not found I0513 09:01:29.742130 1 azure_controller_vmss.go:197] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-646a32b4-01c4-42ca-a1ca-5cbd9d87c89d:pvc-646a32b4-01c4-42ca-a1ca-5cbd9d87c89d]) I0513 09:01:30.004258 1 azure_armclient.go:289] Received error in WaitForCompletionRef: 'Future#WaitForCompletion: context has been cancelled: StatusCode=200 -- Original Error: context deadline exceeded' I0513 09:01:30.004313 1 azure_armclient.go:658] Received error in delete.wait: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-af615fef-2dc3-45fc-b54e-763ced270b9d, error: %!s(<nil>) I0513 09:01:30.004373 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-af615fef-2dc3-45fc-b54e-763ced270b9d) returned with Retriable: true, RetryAfter: 0s, HTTPStatusCode: 0, RawError: Future#WaitForCompletion: context has been cancelled: StatusCode=200 -- Original Error: context deadline exceeded I0513 09:01:30.004414 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.000223515 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-af615fef-2dc3-45fc-b54e-763ced270b9d" result_code="failed" E0513 09:01:30.004440 1 utils.go:82] GRPC error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: 0, RawError: Future#WaitForCompletion: context has been cancelled: StatusCode=200 -- Original Error: context deadline exceeded I0513 
09:01:30.943054 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 09:01:30.943078 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-2069b069-318e-4401-a24b-090e0cf714a2"} I0513 09:01:30.943170 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-2069b069-318e-4401-a24b-090e0cf714a2) I0513 09:01:31.115020 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 09:01:31.115045 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-646a32b4-01c4-42ca-a1ca-5cbd9d87c89d"} I0513 09:01:31.115135 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-646a32b4-01c4-42ca-a1ca-5cbd9d87c89d) I0513 09:01:31.115170 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-646a32b4-01c4-42ca-a1ca-5cbd9d87c89d) returned with failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-646a32b4-01c4-42ca-a1ca-5cbd9d87c89d) since it's in attaching or detaching state I0513 09:01:31.115224 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=6.11e-05 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-646a32b4-01c4-42ca-a1ca-5cbd9d87c89d" result_code="failed" E0513 09:01:31.115238 1 utils.go:82] GRPC error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-646a32b4-01c4-42ca-a1ca-5cbd9d87c89d) since it's in attaching or detaching state I0513 09:01:31.688732 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume I0513 09:01:31.688763 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":5368709120},"name":"pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77","parameters":{"csi.storage.k8s.io/pv/name":"pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77","csi.storage.k8s.io/pvc/name":"test.csi.azure.comznxr7","csi.storage.k8s.io/pvc/namespace":"provisioning-1870"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]} I0513 09:01:31.688928 1 controllerserver.go:174] begin to create azure disk(pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77) account type(StandardSSD_LRS) rg(kubetest-mfxpbga4) location(westeurope) size(5) diskZone() maxShares(0) I0513 09:01:31.688946 1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77 StorageAccountType:StandardSSD_LRS Size:5 I0513 09:01:38.226446 1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77 StorageAccountType:StandardSSD_LRS Size:5 I0513 09:01:38.226498 1 
controllerserver.go:258] create azure disk(pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77) account type(StandardSSD_LRS) rg(kubetest-mfxpbga4) location(westeurope) size(5) tags(map[kubernetes.io-created-for-pv-name:pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77 kubernetes.io-created-for-pvc-name:test.csi.azure.comznxr7 kubernetes.io-created-for-pvc-namespace:provisioning-1870]) successfully I0513 09:01:38.226537 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=6.537572345 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77" result_code="succeeded" I0513 09:01:38.226561 1 utils.go:84] GRPC response: {"volume":{"accessible_topology":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"capacity_bytes":5368709120,"content_source":{"Type":null},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77","csi.storage.k8s.io/pvc/name":"test.csi.azure.comznxr7","csi.storage.k8s.io/pvc/namespace":"provisioning-1870","requestedsizegib":"5"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77"}} I0513 09:01:39.645462 1 azure_armclient.go:135] response is empty I0513 09:01:39.645502 1 azure_armclient.go:320] Received error in sendAsync.send: resourceID: https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67?api-version=2021-04-01, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context canceled I0513 09:01:39.645514 1 azure_armclient.go:511] Received error in put.send: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context canceled I0513 09:01:39.645524 1 azure_diskclient.go:201] Received error in disk.put.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context canceled I0513 09:01:39.645583 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.000396207 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="" result_code="failed" E0513 09:01:39.645606 1 utils.go:82] GRPC error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context canceled I0513 09:01:40.599724 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 09:01:40.599766 1 utils.go:78] GRPC request: 
{"node_id":"k8s-agentpool1-42137015-vmss000001","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77","csi.storage.k8s.io/pvc/name":"test.csi.azure.comznxr7","csi.storage.k8s.io/pvc/namespace":"provisioning-1870","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77"} I0513 09:01:40.625258 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77 to node k8s-agentpool1-42137015-vmss000001. I0513 09:01:40.625304 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77 to node k8s-agentpool1-42137015-vmss000001 I0513 09:01:43.646248 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume I0513 09:01:43.646277 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":5368709120},"name":"pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67","parameters":{"csi.storage.k8s.io/pv/name":"pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67","csi.storage.k8s.io/pvc/name":"test.csi.azure.commzgt4","csi.storage.k8s.io/pvc/namespace":"snapshotting-7330"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]} I0513 09:01:43.646412 1 controllerserver.go:174] begin to create azure disk(pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67) account type(StandardSSD_LRS) rg(kubetest-mfxpbga4) location(westeurope) size(5) diskZone() maxShares(0) I0513 09:01:43.646436 1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67 StorageAccountType:StandardSSD_LRS Size:5 I0513 09:01:45.943189 1 azure_armclient.go:153] Send.sendRequest original response: {"error":{"code":"InternalServerError","message":"Encountered internal server error. Diagnostic information: timestamp '20220513T090141Z', subscription id '0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e', tracking id '708f8a2c-5ba8-418a-a6b1-9c08b0afaf2e', request correlation id '708f8a2c-5ba8-418a-a6b1-9c08b0afaf2e'."}} I0513 09:01:45.943264 1 azure_armclient.go:158] Send.sendRequest: response body does not contain ResourceGroupNotFound error code. 
Skip retrying regional host I0513 09:01:45.943293 1 azure_armclient.go:697] Received error in deleteAsync.send: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-2069b069-318e-4401-a24b-090e0cf714a2, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: 500, RawError: context deadline exceeded I0513 09:01:45.943308 1 azure_armclient.go:649] Received error in delete.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-2069b069-318e-4401-a24b-090e0cf714a2, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: 500, RawError: context deadline exceeded I0513 09:01:45.943355 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-2069b069-318e-4401-a24b-090e0cf714a2) returned with Retriable: true, RetryAfter: 0s, HTTPStatusCode: 500, RawError: context deadline exceeded I0513 09:01:45.943396 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.000201362 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-2069b069-318e-4401-a24b-090e0cf714a2" result_code="failed" E0513 09:01:45.943424 1 utils.go:82] GRPC error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: 500, RawError: context deadline exceeded I0513 09:01:46.006270 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 09:01:46.006300 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-af615fef-2dc3-45fc-b54e-763ced270b9d"} I0513 09:01:46.006398 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-af615fef-2dc3-45fc-b54e-763ced270b9d) I0513 09:01:46.033472 1 azure_diskclient.go:138] Received error in disk.get.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-af615fef-2dc3-45fc-b54e-763ced270b9d, error: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 404, RawError: {"error":{"code":"ResourceNotFound","message":"The Resource 'Microsoft.Compute/disks/pvc-af615fef-2dc3-45fc-b54e-763ced270b9d' under resource group 'kubetest-mfxpbga4' was not found. 
For more details please go to https://aka.ms/ARMResourceNotFoundFix"}} I0513 09:01:46.033536 1 azure_managedDiskController.go:285] azureDisk - disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-af615fef-2dc3-45fc-b54e-763ced270b9d) is already deleted I0513 09:01:46.033555 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-af615fef-2dc3-45fc-b54e-763ced270b9d) returned with <nil> I0513 09:01:46.033606 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=0.027184319 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-af615fef-2dc3-45fc-b54e-763ced270b9d" result_code="succeeded" I0513 09:01:46.033630 1 utils.go:84] GRPC response: {} I0513 09:01:58.646434 1 azure_armclient.go:135] response is empty I0513 09:01:58.646551 1 azure_armclient.go:320] Received error in sendAsync.send: resourceID: https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67?api-version=2021-04-01, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context deadline exceeded I0513 09:01:58.646570 1 azure_armclient.go:511] Received error in put.send: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context deadline exceeded I0513 09:01:58.646581 1 azure_diskclient.go:201] Received error in disk.put.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context deadline exceeded I0513 09:01:58.646641 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.000184923 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="" result_code="failed" E0513 09:01:58.646671 1 utils.go:82] GRPC error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context deadline exceeded I0513 09:01:59.907219 1 azure_controller_vmss.go:210] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-6024b07a-6bfe-4a09-a8e8-4ee64c1b3d4a:pvc-6024b07a-6bfe-4a09-a8e8-4ee64c1b3d4a]) returned with <nil> I0513 09:01:59.907279 1 azure_controller_common.go:365] azureDisk - detach disk(pvc-6024b07a-6bfe-4a09-a8e8-4ee64c1b3d4a, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-6024b07a-6bfe-4a09-a8e8-4ee64c1b3d4a) succeeded I0513 09:01:59.907302 1 controllerserver.go:453] detach volume 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-6024b07a-6bfe-4a09-a8e8-4ee64c1b3d4a from node k8s-agentpool1-42137015-vmss000001 successfully I0513 09:01:59.907330 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=45.999801758 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-6024b07a-6bfe-4a09-a8e8-4ee64c1b3d4a" node="k8s-agentpool1-42137015-vmss000001" result_code="succeeded" I0513 09:01:59.907342 1 utils.go:84] GRPC response: {} I0513 09:01:59.907409 1 azure_controller_common.go:341] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-3208e85c-9a1f-4212-bb44-157c8370daa6 from node k8s-agentpool1-42137015-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-3208e85c-9a1f-4212-bb44-157c8370daa6:pvc-3208e85c-9a1f-4212-bb44-157c8370daa6] ... skipping 25 lines ... I0513 09:02:05.644091 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-3208e85c-9a1f-4212-bb44-157c8370daa6 from node k8s-agentpool1-42137015-vmss000001 successfully I0513 09:02:05.644121 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=36.206331204 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-3208e85c-9a1f-4212-bb44-157c8370daa6" node="k8s-agentpool1-42137015-vmss000001" result_code="succeeded" I0513 09:02:05.644141 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000001, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 09:02:05.644142 1 utils.go:84] GRPC response: {} I0513 09:02:05.788462 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77 lun 0 to node k8s-agentpool1-42137015-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77 false 0})] I0513 09:02:05.788517 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77 false 0})]) I0513 09:02:06.054889 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk 
list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 09:02:06.648263 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume I0513 09:02:06.648299 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":5368709120},"name":"pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67","parameters":{"csi.storage.k8s.io/pv/name":"pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67","csi.storage.k8s.io/pvc/name":"test.csi.azure.commzgt4","csi.storage.k8s.io/pvc/namespace":"snapshotting-7330"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]} I0513 09:02:06.648475 1 controllerserver.go:174] begin to create azure disk(pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67) account type(StandardSSD_LRS) rg(kubetest-mfxpbga4) location(westeurope) size(5) diskZone() maxShares(0) I0513 09:02:06.648498 1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67 StorageAccountType:StandardSSD_LRS Size:5 I0513 09:02:08.561914 1 azure_controller_vmss.go:210] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-fc63cf4e-2d4d-4157-90f2-07fca152df50:pvc-fc63cf4e-2d4d-4157-90f2-07fca152df50]) returned with <nil> I0513 09:02:08.561972 1 azure_controller_common.go:365] azureDisk - detach disk(pvc-fc63cf4e-2d4d-4157-90f2-07fca152df50, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-fc63cf4e-2d4d-4157-90f2-07fca152df50) succeeded I0513 09:02:08.561991 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-fc63cf4e-2d4d-4157-90f2-07fca152df50 from node k8s-agentpool1-42137015-vmss000002 successfully I0513 09:02:08.562029 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=5.404260221 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-fc63cf4e-2d4d-4157-90f2-07fca152df50" node="k8s-agentpool1-42137015-vmss000002" result_code="succeeded" I0513 09:02:08.562047 1 utils.go:84] GRPC response: {} I0513 09:02:17.153038 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 09:02:17.153068 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-6024b07a-6bfe-4a09-a8e8-4ee64c1b3d4a"} I0513 09:02:17.153164 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-6024b07a-6bfe-4a09-a8e8-4ee64c1b3d4a) I0513 09:02:18.117220 1 azure_armclient.go:153] Send.sendRequest original response: 
{"error":{"code":"InternalServerError","message":"Encountered internal server error. Diagnostic information: timestamp '20220513T090213Z', subscription id '0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e', tracking id '48131c0f-971d-4694-ad37-936b6e8ba678', request correlation id '48131c0f-971d-4694-ad37-936b6e8ba678'."}} I0513 09:02:18.117257 1 azure_armclient.go:158] Send.sendRequest: response body does not contain ResourceGroupNotFound error code. Skip retrying regional host I0513 09:02:18.117286 1 azure_armclient.go:697] Received error in deleteAsync.send: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-646a32b4-01c4-42ca-a1ca-5cbd9d87c89d, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: 500, RawError: context canceled I0513 09:02:18.117300 1 azure_armclient.go:649] Received error in delete.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-646a32b4-01c4-42ca-a1ca-5cbd9d87c89d, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: 500, RawError: context canceled I0513 09:02:18.117339 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-646a32b4-01c4-42ca-a1ca-5cbd9d87c89d) returned with Retriable: true, RetryAfter: 0s, HTTPStatusCode: 500, RawError: context canceled I0513 09:02:18.117367 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.000471696 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-646a32b4-01c4-42ca-a1ca-5cbd9d87c89d" result_code="failed" E0513 09:02:18.117392 1 utils.go:82] GRPC error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: 500, RawError: context canceled I0513 09:02:21.348657 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77 attached to node k8s-agentpool1-42137015-vmss000001. I0513 09:02:21.348697 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77 to node k8s-agentpool1-42137015-vmss000001 successfully I0513 09:02:21.348729 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=40.72345764 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77" node="k8s-agentpool1-42137015-vmss000001" result_code="succeeded" I0513 09:02:21.348743 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0513 09:02:21.648796 1 azure_armclient.go:153] Send.sendRequest original response: {"error":{"code":"InternalServerError","message":"Encountered internal server error. 
Diagnostic information: timestamp '20220513T090221Z', subscription id '0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e', tracking id 'de03b3c3-3135-472e-bcff-5448fa0b471a', request correlation id 'de03b3c3-3135-472e-bcff-5448fa0b471a'."}} I0513 09:02:21.648829 1 azure_armclient.go:158] Send.sendRequest: response body does not contain ResourceGroupNotFound error code. Skip retrying regional host I0513 09:02:21.648885 1 azure_armclient.go:320] Received error in sendAsync.send: resourceID: https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67?api-version=2021-04-01, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: 500, RawError: context canceled I0513 09:02:21.648899 1 azure_armclient.go:511] Received error in put.send: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: 500, RawError: context canceled I0513 09:02:21.648909 1 azure_diskclient.go:201] Received error in disk.put.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: 500, RawError: context canceled I0513 09:02:21.648959 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.000444002 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="" result_code="failed" E0513 09:02:21.648984 1 utils.go:82] GRPC error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: 500, RawError: context canceled I0513 09:02:27.571223 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 09:02:27.571252 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-fc63cf4e-2d4d-4157-90f2-07fca152df50"} I0513 09:02:27.571338 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-fc63cf4e-2d4d-4157-90f2-07fca152df50) I0513 09:02:31.638096 1 azure_managedDiskController.go:303] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-6024b07a-6bfe-4a09-a8e8-4ee64c1b3d4a I0513 09:02:31.638128 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-6024b07a-6bfe-4a09-a8e8-4ee64c1b3d4a) returned with <nil> I0513 09:02:31.638157 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=14.484976879 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-6024b07a-6bfe-4a09-a8e8-4ee64c1b3d4a" result_code="succeeded" ... skipping 10 lines ... 
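Editor's note: a few entries above, DeleteVolume for pvc-af615fef-... succeeds even though the ARM GET returns 404: the driver logs "is already deleted" and returns <nil>, so the sidecar stops retrying. That is the idempotent-delete behavior CSI expects. The sketch below illustrates the pattern only; getDisk and deleteDisk are hypothetical stubs, not the driver's real Azure disk client helpers.

// Illustrative sketch of the idempotent DeleteVolume pattern seen above:
// a NotFound on the lookup is treated as success. getDisk/deleteDisk are
// hypothetical stubs standing in for the Azure disk client calls.
package main

import (
	"context"
	"errors"
	"fmt"
)

var errNotFound = errors.New("NotFound")

func getDisk(ctx context.Context, diskURI string) error {
	return errNotFound // pretend ARM answered 404 for this disk
}

func deleteDisk(ctx context.Context, diskURI string) error {
	return nil
}

func deleteVolume(ctx context.Context, diskURI string) error {
	if err := getDisk(ctx, diskURI); errors.Is(err, errNotFound) {
		// Matches the "disk(...) is already deleted" log line: report success
		// so the external-provisioner does not keep retrying the RPC.
		return nil
	}
	return deleteDisk(ctx, diskURI)
}

func main() {
	err := deleteVolume(context.Background(), "/subscriptions/.../disks/pvc-example")
	fmt.Println("DeleteVolume returned:", err) // <nil>, i.e. success
}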
I0513 09:02:36.167614 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-3208e85c-9a1f-4212-bb44-157c8370daa6) I0513 09:02:37.649998 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume I0513 09:02:37.650025 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":5368709120},"name":"pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67","parameters":{"csi.storage.k8s.io/pv/name":"pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67","csi.storage.k8s.io/pvc/name":"test.csi.azure.commzgt4","csi.storage.k8s.io/pvc/namespace":"snapshotting-7330"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]} I0513 09:02:37.650153 1 controllerserver.go:174] begin to create azure disk(pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67) account type(StandardSSD_LRS) rg(kubetest-mfxpbga4) location(westeurope) size(5) diskZone() maxShares(0) I0513 09:02:37.650177 1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67 StorageAccountType:StandardSSD_LRS Size:5 I0513 09:02:42.571981 1 azure_armclient.go:135] response is empty I0513 09:02:42.572022 1 azure_armclient.go:697] Received error in deleteAsync.send: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-fc63cf4e-2d4d-4157-90f2-07fca152df50, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context deadline exceeded I0513 09:02:42.572037 1 azure_armclient.go:649] Received error in delete.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-fc63cf4e-2d4d-4157-90f2-07fca152df50, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context deadline exceeded I0513 09:02:42.572078 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-fc63cf4e-2d4d-4157-90f2-07fca152df50) returned with Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context deadline exceeded I0513 09:02:42.572140 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.00077656 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-fc63cf4e-2d4d-4157-90f2-07fca152df50" result_code="failed" E0513 09:02:42.572157 1 utils.go:82] GRPC error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context deadline exceeded I0513 09:02:49.944337 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 09:02:49.944364 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-2069b069-318e-4401-a24b-090e0cf714a2"} I0513 09:02:49.944459 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-2069b069-318e-4401-a24b-090e0cf714a2) I0513 
09:02:50.733835 1 azure_controller_vmss.go:210] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77:pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77]) returned with <nil> I0513 09:02:50.733892 1 azure_controller_common.go:365] azureDisk - detach disk(pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77) succeeded I0513 09:02:50.733904 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77 from node k8s-agentpool1-42137015-vmss000001 successfully ... skipping 3 lines ... I0513 09:02:50.762467 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000001","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77"} I0513 09:02:50.762566 1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77 from node k8s-agentpool1-42137015-vmss000001 I0513 09:02:50.762588 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000001, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 09:02:50.863523 1 azure_controller_common.go:341] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77 from node k8s-agentpool1-42137015-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77:pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77] E0513 09:02:50.863573 1 azure_controller_vmss.go:171] detach azure disk on node(k8s-agentpool1-42137015-vmss000001): disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77:pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77]) not found I0513 09:02:50.863581 1 azure_controller_vmss.go:197] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77:pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77]) I0513 09:02:51.167545 1 azure_armclient.go:153] Send.sendRequest original response: {"error":{"code":"InternalServerError","message":"Encountered internal server error. Diagnostic information: timestamp '20220513T090245Z', subscription id '0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e', tracking id '7a1fd2f6-1727-4d20-a95d-5d37d555201f', request correlation id '7a1fd2f6-1727-4d20-a95d-5d37d555201f'."}} I0513 09:02:51.167598 1 azure_armclient.go:158] Send.sendRequest: response body does not contain ResourceGroupNotFound error code. 
Skip retrying regional host I0513 09:02:51.167630 1 azure_armclient.go:697] Received error in deleteAsync.send: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-3208e85c-9a1f-4212-bb44-157c8370daa6, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: 500, RawError: context deadline exceeded I0513 09:02:51.167642 1 azure_armclient.go:649] Received error in delete.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-3208e85c-9a1f-4212-bb44-157c8370daa6, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: 500, RawError: context deadline exceeded I0513 09:02:51.167675 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-3208e85c-9a1f-4212-bb44-157c8370daa6) returned with Retriable: true, RetryAfter: 0s, HTTPStatusCode: 500, RawError: context deadline exceeded I0513 09:02:51.167704 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.000074366 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-3208e85c-9a1f-4212-bb44-157c8370daa6" result_code="failed" E0513 09:02:51.167723 1 utils.go:82] GRPC error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: 500, RawError: context deadline exceeded I0513 09:02:52.168302 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 09:02:52.168327 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-3208e85c-9a1f-4212-bb44-157c8370daa6"} I0513 09:02:52.168405 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-3208e85c-9a1f-4212-bb44-157c8370daa6) I0513 09:02:52.582889 1 azure_managedDiskController.go:303] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-3208e85c-9a1f-4212-bb44-157c8370daa6 I0513 09:02:52.582920 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-3208e85c-9a1f-4212-bb44-157c8370daa6) returned with <nil> I0513 09:02:52.582944 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=0.414525105 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-3208e85c-9a1f-4212-bb44-157c8370daa6" result_code="succeeded" I0513 09:02:52.582958 1 utils.go:84] GRPC response: {} I0513 09:02:52.650211 1 azure_armclient.go:289] Received error in WaitForCompletionRef: 'context canceled' I0513 09:02:52.650240 1 azure_armclient.go:310] Received error in WaitForAsyncOperationCompletion: 'context canceled' I0513 09:02:52.650250 1 azure_armclient.go:520] Received error in 
WaitForAsyncOperationResult: 'context canceled', no response I0513 09:02:52.650265 1 azure_diskclient.go:201] Received error in disk.put.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context canceled I0513 09:02:52.650337 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.000143867 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="" result_code="failed" E0513 09:02:52.650357 1 utils.go:82] GRPC error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context canceled I0513 09:02:55.303441 1 azure_managedDiskController.go:303] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-2069b069-318e-4401-a24b-090e0cf714a2 I0513 09:02:55.303473 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-2069b069-318e-4401-a24b-090e0cf714a2) returned with <nil> I0513 09:02:55.303499 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=5.359027358 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-2069b069-318e-4401-a24b-090e0cf714a2" result_code="succeeded" I0513 09:02:55.303514 1 utils.go:84] GRPC response: {} I0513 09:02:56.257364 1 azure_controller_vmss.go:210] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77:pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77]) returned with <nil> I0513 09:02:56.257421 1 azure_controller_common.go:365] azureDisk - detach disk(pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77) succeeded ... skipping 5 lines ... 
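Editor's note: most of the failed RPCs above share one shape: the Create/DeleteVolume call runs into a roughly 15-second deadline (latency_seconds=15.000...), the ARM request's context is canceled, and the armclient surfaces "Retriable: true ... RawError: context canceled" or "context deadline exceeded"; the external sidecars then retry, which is why the same PVC names keep reappearing until a later attempt succeeds. A minimal Go sketch of that timeout behavior is below; slowARMCall is a hypothetical stand-in for the real SDK request, not driver code.

// Illustrative only: shows why an RPC bounded by a ~15s context reports
// "context deadline exceeded" when the backing ARM call is slower than the
// deadline. slowARMCall is a hypothetical stand-in for the SDK request.
package main

import (
	"context"
	"fmt"
	"time"
)

func slowARMCall(ctx context.Context) error {
	select {
	case <-time.After(30 * time.Second): // pretend ARM takes 30s
		return nil
	case <-ctx.Done():
		return ctx.Err() // context.DeadlineExceeded or context.Canceled
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()

	start := time.Now()
	err := slowARMCall(ctx)
	// Prints after ~15s: "latency=15.0s err=context deadline exceeded",
	// matching the latency_seconds=15.000... metric lines in the log.
	fmt.Printf("latency=%.1fs err=%v\n", time.Since(start).Seconds(), err)
}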
I0513 09:02:58.297575 1 controllerserver.go:174] begin to create azure disk(pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868) account type(StandardSSD_LRS) rg(kubetest-mfxpbga4) location(westeurope) size(5) diskZone() maxShares(0) I0513 09:02:58.297605 1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868 StorageAccountType:StandardSSD_LRS Size:5 I0513 09:03:02.999496 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 09:03:02.999527 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77"} I0513 09:03:02.999617 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77) I0513 09:03:13.297876 1 azure_armclient.go:135] response is empty I0513 09:03:13.297944 1 azure_armclient.go:320] Received error in sendAsync.send: resourceID: https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868?api-version=2021-04-01, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context canceled I0513 09:03:13.297965 1 azure_armclient.go:511] Received error in put.send: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context canceled I0513 09:03:13.297982 1 azure_diskclient.go:201] Received error in disk.put.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context canceled I0513 09:03:13.298031 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.000412642 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="" result_code="failed" E0513 09:03:13.298055 1 utils.go:82] GRPC error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context canceled I0513 09:03:14.300527 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume I0513 09:03:14.300552 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":5368709120},"name":"pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868","parameters":{"csi.storage.k8s.io/pv/name":"pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868","csi.storage.k8s.io/pvc/name":"test.csi.azure.com2xhvq","csi.storage.k8s.io/pvc/namespace":"multivolume-2508"},"volume_capabilities":[{"AccessType":{"Block":{}},"access_mode":{"mode":7}}]} I0513 09:03:14.300697 1 controllerserver.go:174] begin to create azure disk(pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868) account type(StandardSSD_LRS) rg(kubetest-mfxpbga4) location(westeurope) size(5) diskZone() maxShares(0) I0513 09:03:14.300715 1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868 
StorageAccountType:StandardSSD_LRS Size:5 I0513 09:03:16.669945 1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868 StorageAccountType:StandardSSD_LRS Size:5 I0513 09:03:16.669992 1 controllerserver.go:258] create azure disk(pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868) account type(StandardSSD_LRS) rg(kubetest-mfxpbga4) location(westeurope) size(5) tags(map[kubernetes.io-created-for-pv-name:pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868 kubernetes.io-created-for-pvc-name:test.csi.azure.com2xhvq kubernetes.io-created-for-pvc-namespace:multivolume-2508]) successfully I0513 09:03:16.670035 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=2.369297742 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868" result_code="succeeded" I0513 09:03:16.670049 1 utils.go:84] GRPC response: {"volume":{"accessible_topology":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"capacity_bytes":5368709120,"content_source":{"Type":null},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868","csi.storage.k8s.io/pvc/name":"test.csi.azure.com2xhvq","csi.storage.k8s.io/pvc/namespace":"multivolume-2508","requestedsizegib":"5"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868"}} I0513 09:03:17.883730 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 09:03:17.883755 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000001","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868","csi.storage.k8s.io/pvc/name":"test.csi.azure.com2xhvq","csi.storage.k8s.io/pvc/namespace":"multivolume-2508","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868"} I0513 09:03:17.910394 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868 to node k8s-agentpool1-42137015-vmss000001. 
I0513 09:03:17.910435 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000001, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 09:03:17.999514 1 azure_armclient.go:135] response is empty I0513 09:03:17.999606 1 azure_armclient.go:697] Received error in deleteAsync.send: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context deadline exceeded I0513 09:03:17.999674 1 azure_armclient.go:649] Received error in delete.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context deadline exceeded I0513 09:03:17.999763 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77) returned with Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context deadline exceeded I0513 09:03:17.999853 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.000173775 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-4327775d-ec0a-4f1e-ae95-aff4cde48b77" result_code="failed" E0513 09:03:17.999891 1 utils.go:82] GRPC error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context deadline exceeded I0513 09:03:18.024367 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868 to node k8s-agentpool1-42137015-vmss000001 I0513 09:03:18.024414 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868 lun 0 to node k8s-agentpool1-42137015-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868 false 0})] I0513 09:03:18.024435 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868 false 0})]) I0513 09:03:18.428278 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868 false 
0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 09:03:22.118887 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 09:03:22.118936 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-646a32b4-01c4-42ca-a1ca-5cbd9d87c89d"} I0513 09:03:22.119089 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-646a32b4-01c4-42ca-a1ca-5cbd9d87c89d) I0513 09:03:24.651685 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume I0513 09:03:24.651713 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":5368709120},"name":"pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67","parameters":{"csi.storage.k8s.io/pv/name":"pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67","csi.storage.k8s.io/pvc/name":"test.csi.azure.commzgt4","csi.storage.k8s.io/pvc/namespace":"snapshotting-7330"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]} I0513 09:03:24.651852 1 controllerserver.go:174] begin to create azure disk(pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67) account type(StandardSSD_LRS) rg(kubetest-mfxpbga4) location(westeurope) size(5) diskZone() maxShares(0) ... skipping 13 lines ... I0513 09:03:32.369191 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 09:03:32.369226 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000002","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67","csi.storage.k8s.io/pvc/name":"test.csi.azure.commzgt4","csi.storage.k8s.io/pvc/namespace":"snapshotting-7330","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67"} I0513 09:03:32.419870 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67 to node k8s-agentpool1-42137015-vmss000002. 
I0513 09:03:32.419923 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67 to node k8s-agentpool1-42137015-vmss000002 I0513 09:03:32.419947 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67 lun 0 to node k8s-agentpool1-42137015-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67 false 0})] I0513 09:03:32.419971 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67 false 0})]) I0513 09:03:32.715027 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 09:03:33.596370 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868 attached to node k8s-agentpool1-42137015-vmss000001. 
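ControllerPublishVolume is what drives the "Trying to attach volume … lun N" sequence above: it attaches the managed disk to the named VMSS instance and reports the assigned LUN back in publish_context. A hedged continuation of the previous sketch, reusing its client and ctx (IDs copied from the log, everything else illustrative):

	// Sketch only: attach (publish) an existing disk to a node.
	pubResp, err := client.ControllerPublishVolume(ctx, &csi.ControllerPublishVolumeRequest{
		VolumeId: "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67",
		NodeId:   "k8s-agentpool1-42137015-vmss000002",
		VolumeCapability: &csi.VolumeCapability{
			AccessType: &csi.VolumeCapability_Mount{Mount: &csi.VolumeCapability_MountVolume{FsType: "ext4"}},
			AccessMode: &csi.VolumeCapability_AccessMode{Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_WRITER},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	// The LUN the disk landed on comes back in publish_context, e.g. {"LUN":"0"} as above.
	log.Printf("attached at LUN %s", pubResp.GetPublishContext()["LUN"])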
I0513 09:03:33.596410 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868 to node k8s-agentpool1-42137015-vmss000001 successfully I0513 09:03:33.596444 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.686036592 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868" node="k8s-agentpool1-42137015-vmss000001" result_code="succeeded" I0513 09:03:33.596458 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0513 09:03:33.604353 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 09:03:33.604376 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000001","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868","csi.storage.k8s.io/pvc/name":"test.csi.azure.com2xhvq","csi.storage.k8s.io/pvc/namespace":"multivolume-2508","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868"} ... skipping 10 lines ... I0513 09:03:42.155139 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 09:03:42.155163 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000000","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-5c42830b-ab97-4be2-8a67-67db52cd0f9e","csi.storage.k8s.io/pvc/name":"test.csi.azure.com5z5sv","csi.storage.k8s.io/pvc/namespace":"multivolume-6963","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-5c42830b-ab97-4be2-8a67-67db52cd0f9e"} I0513 09:03:42.218945 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-5c42830b-ab97-4be2-8a67-67db52cd0f9e to node k8s-agentpool1-42137015-vmss000000. 
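Each controller operation above finishes with an "Observed Request Latency" record carrying the request name, resource group, volume or node, and a result_code. The standalone sketch below shows the general pattern of timing an operation into a labelled Prometheus histogram; the metric name and wiring here are illustrative, not the driver's actual metrics code.

package main

import (
	"fmt"
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// opLatency is shaped like the "Observed Request Latency" records above:
// one observation per controller operation, labelled with request and outcome.
// The metric name is made up for this sketch.
var opLatency = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{Name: "example_csi_operation_duration_seconds", Buckets: prometheus.DefBuckets},
	[]string{"request", "result_code"},
)

func main() {
	prometheus.MustRegister(opLatency)

	start := time.Now()
	err := attachDisk() // stand-in for the real ControllerPublishVolume work
	result := "succeeded"
	if err != nil {
		result = "failed"
	}
	opLatency.WithLabelValues("azuredisk_csi_driver_controller_publish_volume", result).
		Observe(time.Since(start).Seconds())
	fmt.Println("recorded latency with result:", result)
}

func attachDisk() error { time.Sleep(10 * time.Millisecond); return nil }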
I0513 09:03:42.219058 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-5c42830b-ab97-4be2-8a67-67db52cd0f9e to node k8s-agentpool1-42137015-vmss000000 I0513 09:03:42.219162 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-5c42830b-ab97-4be2-8a67-67db52cd0f9e lun 0 to node k8s-agentpool1-42137015-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-5c42830b-ab97-4be2-8a67-67db52cd0f9e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5c42830b-ab97-4be2-8a67-67db52cd0f9e false 0})] I0513 09:03:42.219429 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-5c42830b-ab97-4be2-8a67-67db52cd0f9e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5c42830b-ab97-4be2-8a67-67db52cd0f9e false 0})]) I0513 09:03:42.453220 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-5c42830b-ab97-4be2-8a67-67db52cd0f9e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5c42830b-ab97-4be2-8a67-67db52cd0f9e false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 09:03:42.851814 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67 attached to node k8s-agentpool1-42137015-vmss000002. 
I0513 09:03:42.851857 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67 to node k8s-agentpool1-42137015-vmss000002 successfully I0513 09:03:42.851898 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=10.432012038 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67" node="k8s-agentpool1-42137015-vmss000002" result_code="succeeded" I0513 09:03:42.851914 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0513 09:03:42.859329 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 09:03:42.859352 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000002","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67","csi.storage.k8s.io/pvc/name":"test.csi.azure.commzgt4","csi.storage.k8s.io/pvc/namespace":"snapshotting-7330","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67"} ... skipping 2 lines ... I0513 09:03:42.883865 1 controllerserver.go:375] Attach operation is successful. volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67 is already attached to node k8s-agentpool1-42137015-vmss000002 at lun 0. 
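Once ControllerPublishVolume returns {"publish_context":{"LUN":"0"}}, the node plugin still has to find the block device behind that LUN before staging it. On Azure Linux nodes the udev symlink /dev/disk/azure/scsi1/lun<N> is commonly used for this lookup; the standalone sketch below assumes that convention, which this log does not itself confirm.

package main

import (
	"fmt"
	"log"
	"path/filepath"
)

// devicePathForLUN resolves the /dev/sdX device behind an Azure data-disk LUN.
func devicePathForLUN(lun int) (string, error) {
	link := fmt.Sprintf("/dev/disk/azure/scsi1/lun%d", lun) // udev symlink on Azure VMs (assumed)
	return filepath.EvalSymlinks(link)                      // follow it to the real block device
}

func main() {
	dev, err := devicePathForLUN(0) // LUN taken from publish_context in the log above
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("data disk device:", dev)
}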
I0513 09:03:42.883904 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=8.79e-05 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67" node="k8s-agentpool1-42137015-vmss000002" result_code="succeeded" I0513 09:03:42.883933 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0513 09:03:46.573331 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 09:03:46.573354 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-fc63cf4e-2d4d-4157-90f2-07fca152df50"} I0513 09:03:46.573443 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-fc63cf4e-2d4d-4157-90f2-07fca152df50) I0513 09:03:46.609770 1 azure_diskclient.go:138] Received error in disk.get.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-fc63cf4e-2d4d-4157-90f2-07fca152df50, error: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 404, RawError: {"error":{"code":"ResourceNotFound","message":"The Resource 'Microsoft.Compute/disks/pvc-fc63cf4e-2d4d-4157-90f2-07fca152df50' under resource group 'kubetest-mfxpbga4' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"}} I0513 09:03:46.609848 1 azure_managedDiskController.go:285] azureDisk - disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-fc63cf4e-2d4d-4157-90f2-07fca152df50) is already deleted I0513 09:03:46.609858 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-fc63cf4e-2d4d-4157-90f2-07fca152df50) returned with <nil> I0513 09:03:46.609893 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=0.036432174 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-fc63cf4e-2d4d-4157-90f2-07fca152df50" result_code="succeeded" I0513 09:03:46.609912 1 utils.go:84] GRPC response: {} I0513 09:03:46.701278 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 09:03:46.701299 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000001","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868"} ... skipping 66 lines ... 
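The DeleteVolume at 09:03:46 shows the idempotency path: the disk GET returns 404, the controller logs "is already deleted", and the RPC still succeeds. A minimal standalone sketch of that decision follows; the helper names are hypothetical, not the driver's real functions.

package main

import (
	"errors"
	"fmt"
	"net/http"
)

// deleteDiskIdempotent stands in for the controller's DeleteVolume path:
// a 404 on the disk GET means the disk is already gone, so report success
// instead of surfacing an error to the caller.
func deleteDiskIdempotent(getStatus int, doDelete func() error) error {
	if getStatus == http.StatusNotFound {
		return nil // "disk is already deleted" in the log above
	}
	return doDelete()
}

func main() {
	err := deleteDiskIdempotent(http.StatusNotFound, func() error {
		return errors.New("ARM delete should not run for a missing disk")
	})
	fmt.Println("delete result:", err) // <nil>, mirroring "returned with <nil>"
}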
I0513 09:04:07.267399 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868 from node k8s-agentpool1-42137015-vmss000001 successfully I0513 09:04:07.267441 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=20.565996899 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868" node="k8s-agentpool1-42137015-vmss000001" result_code="succeeded" I0513 09:04:07.267463 1 utils.go:84] GRPC response: {} I0513 09:04:07.267532 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000001, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 09:04:07.389171 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-36d655b2-986e-4c4b-a08f-dddbcb395652 lun 1 to node k8s-agentpool1-42137015-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-36d655b2-986e-4c4b-a08f-dddbcb395652:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-36d655b2-986e-4c4b-a08f-dddbcb395652 false 1}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-a39a1dd9-1a3e-44f8-8199-613bea8d14be:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a39a1dd9-1a3e-44f8-8199-613bea8d14be false 0})] I0513 09:04:07.389228 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-36d655b2-986e-4c4b-a08f-dddbcb395652:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-36d655b2-986e-4c4b-a08f-dddbcb395652 false 1}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-a39a1dd9-1a3e-44f8-8199-613bea8d14be:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a39a1dd9-1a3e-44f8-8199-613bea8d14be false 0})]) I0513 09:04:07.621025 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-36d655b2-986e-4c4b-a08f-dddbcb395652:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-36d655b2-986e-4c4b-a08f-dddbcb395652 false 1}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-a39a1dd9-1a3e-44f8-8199-613bea8d14be:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a39a1dd9-1a3e-44f8-8199-613bea8d14be false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 09:04:07.731071 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-5c42830b-ab97-4be2-8a67-67db52cd0f9e attached to node k8s-agentpool1-42137015-vmss000000. 
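The diskMap entries above show attachments being batched into a single VMSS VM update, keyed by the lower-cased disk URI with per-disk options (caching mode, disk name, write-accelerator flag, LUN), which is what the &{ReadOnly pvc-… false 1} fragments print. A standalone sketch of that shape; the struct and field names are illustrative, not the provider package's real types.

package main

import "fmt"

// AttachDiskOptions mirrors the per-disk options printed in the log's diskMap entries.
type AttachDiskOptions struct {
	CachingMode      string // e.g. "ReadOnly" in the log output
	DiskName         string
	WriteAccelerator bool
	Lun              int32
}

func main() {
	// Key: lower-cased ARM disk URI (shortened here); value: how to attach it. Two disks, one VM update.
	diskMap := map[string]*AttachDiskOptions{
		".../disks/pvc-36d655b2-986e-4c4b-a08f-dddbcb395652": {CachingMode: "ReadOnly", DiskName: "pvc-36d655b2-986e-4c4b-a08f-dddbcb395652", Lun: 1},
		".../disks/pvc-a39a1dd9-1a3e-44f8-8199-613bea8d14be": {CachingMode: "ReadOnly", DiskName: "pvc-a39a1dd9-1a3e-44f8-8199-613bea8d14be", Lun: 0},
	}
	for uri, opt := range diskMap {
		fmt.Printf("attach %s at LUN %d (cache=%s)\n", uri, opt.Lun, opt.CachingMode)
	}
}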
I0513 09:04:07.731122 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-5c42830b-ab97-4be2-8a67-67db52cd0f9e to node k8s-agentpool1-42137015-vmss000000 successfully I0513 09:04:07.731170 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=25.51220391 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-5c42830b-ab97-4be2-8a67-67db52cd0f9e" node="k8s-agentpool1-42137015-vmss000000" result_code="succeeded" I0513 09:04:07.731194 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0513 09:04:08.560155 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 09:04:08.560180 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000000","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868","csi.storage.k8s.io/pvc/name":"test.csi.azure.com2xhvq","csi.storage.k8s.io/pvc/namespace":"multivolume-2508","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868"} I0513 09:04:08.595347 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868 to node k8s-agentpool1-42137015-vmss000000. 
I0513 09:04:08.595406 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868 to node k8s-agentpool1-42137015-vmss000000 I0513 09:04:08.595440 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868 lun 1 to node k8s-agentpool1-42137015-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868 false 1})] I0513 09:04:08.595479 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868 false 1})]) I0513 09:04:08.809443 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 09:04:12.953531 1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-ce00543b-25d4-4365-a335-ef8705ff5458 StorageAccountType:StandardSSD_LRS Size:5 I0513 09:04:12.953584 1 controllerserver.go:258] create azure disk(pvc-ce00543b-25d4-4365-a335-ef8705ff5458) account type(StandardSSD_LRS) rg(kubetest-mfxpbga4) location(westeurope) size(5) tags(map[kubernetes.io-created-for-pv-name:pvc-ce00543b-25d4-4365-a335-ef8705ff5458 kubernetes.io-created-for-pvc-name:test.csi.azure.comt999n kubernetes.io-created-for-pvc-namespace:provisioning-8960]) successfully I0513 09:04:12.953623 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=6.294885451 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-ce00543b-25d4-4365-a335-ef8705ff5458" result_code="succeeded" I0513 09:04:12.953636 1 utils.go:84] GRPC response: {"volume":{"accessible_topology":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"capacity_bytes":5368709120,"content_source":{"Type":null},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-ce00543b-25d4-4365-a335-ef8705ff5458","csi.storage.k8s.io/pvc/name":"test.csi.azure.comt999n","csi.storage.k8s.io/pvc/namespace":"provisioning-8960","requestedsizegib":"5"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-ce00543b-25d4-4365-a335-ef8705ff5458"}} I0513 09:04:13.514050 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 09:04:13.514076 1 utils.go:78] GRPC request: 
{"node_id":"k8s-agentpool1-42137015-vmss000002","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-ce00543b-25d4-4365-a335-ef8705ff5458","csi.storage.k8s.io/pvc/name":"test.csi.azure.comt999n","csi.storage.k8s.io/pvc/namespace":"provisioning-8960","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-ce00543b-25d4-4365-a335-ef8705ff5458"} I0513 09:04:13.539529 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-ce00543b-25d4-4365-a335-ef8705ff5458 to node k8s-agentpool1-42137015-vmss000002. I0513 09:04:13.539589 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-ce00543b-25d4-4365-a335-ef8705ff5458 to node k8s-agentpool1-42137015-vmss000002 I0513 09:04:13.539625 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-ce00543b-25d4-4365-a335-ef8705ff5458 lun 0 to node k8s-agentpool1-42137015-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-ce00543b-25d4-4365-a335-ef8705ff5458:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ce00543b-25d4-4365-a335-ef8705ff5458 false 0})] I0513 09:04:13.539657 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-ce00543b-25d4-4365-a335-ef8705ff5458:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ce00543b-25d4-4365-a335-ef8705ff5458 false 0})]) I0513 09:04:13.741255 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-ce00543b-25d4-4365-a335-ef8705ff5458:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ce00543b-25d4-4365-a335-ef8705ff5458 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 09:04:14.899371 1 controllerserver.go:860] create snapshot(snapshot-a2700a94-7297-4c1a-a2ed-dc379da64682) under rg(kubetest-mfxpbga4) successfully I0513 09:04:14.938730 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=22.047725453 request="azuredisk_csi_driver_controller_create_snapshot" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" source_resource_id="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67" snapshot_name="snapshot-a2700a94-7297-4c1a-a2ed-dc379da64682" result_code="succeeded" I0513 09:04:14.938791 1 utils.go:84] GRPC response: 
{"snapshot":{"creation_time":{"nanos":58219100,"seconds":1652432643},"ready_to_use":true,"size_bytes":5368709120,"snapshot_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/snapshots/snapshot-a2700a94-7297-4c1a-a2ed-dc379da64682","source_volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67"}} I0513 09:04:17.026202 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerGetCapabilities I0513 09:04:17.026230 1 utils.go:78] GRPC request: {} I0513 09:04:17.026283 1 utils.go:84] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":9}}},{"Type":{"Rpc":{"type":13}}}]} ... skipping 20 lines ... I0513 09:04:27.791480 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-36d655b2-986e-4c4b-a08f-dddbcb395652 attached to node k8s-agentpool1-42137015-vmss000001. I0513 09:04:27.791523 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-36d655b2-986e-4c4b-a08f-dddbcb395652 to node k8s-agentpool1-42137015-vmss000001 successfully I0513 09:04:27.791567 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=22.716499646 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-36d655b2-986e-4c4b-a08f-dddbcb395652" node="k8s-agentpool1-42137015-vmss000001" result_code="succeeded" I0513 09:04:27.791604 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-a39a1dd9-1a3e-44f8-8199-613bea8d14be lun 0 to node k8s-agentpool1-42137015-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67 false 2})] I0513 09:04:27.791585 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} I0513 09:04:27.791643 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67 false 2})]) I0513 09:04:28.030007 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67 false 2})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 09:04:29.858461 1 utils.go:77] GRPC call: 
/csi.v1.Controller/ControllerPublishVolume I0513 09:04:29.858490 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000000","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-68703498-d254-4f6c-97c6-521511d90b86","csi.storage.k8s.io/pvc/name":"test.csi.azure.com2xhvq-cloned","csi.storage.k8s.io/pvc/namespace":"multivolume-2508","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-68703498-d254-4f6c-97c6-521511d90b86"} I0513 09:04:29.895768 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-68703498-d254-4f6c-97c6-521511d90b86 to node k8s-agentpool1-42137015-vmss000000. I0513 09:04:29.895829 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-68703498-d254-4f6c-97c6-521511d90b86 to node k8s-agentpool1-42137015-vmss000000 I0513 09:04:29.895854 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-68703498-d254-4f6c-97c6-521511d90b86 lun 2 to node k8s-agentpool1-42137015-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-68703498-d254-4f6c-97c6-521511d90b86:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-68703498-d254-4f6c-97c6-521511d90b86 false 2})] I0513 09:04:29.895886 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-68703498-d254-4f6c-97c6-521511d90b86:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-68703498-d254-4f6c-97c6-521511d90b86 false 2})]) I0513 09:04:30.213062 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-68703498-d254-4f6c-97c6-521511d90b86:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-68703498-d254-4f6c-97c6-521511d90b86 false 2})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 09:04:31.261852 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 09:04:31.261878 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000002","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-ce00543b-25d4-4365-a335-ef8705ff5458"} I0513 09:04:31.261976 1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-ce00543b-25d4-4365-a335-ef8705ff5458 from node k8s-agentpool1-42137015-vmss000002 I0513 09:04:31.262036 1 azure_controller_common.go:341] Trying to detach volume 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-ce00543b-25d4-4365-a335-ef8705ff5458 from node k8s-agentpool1-42137015-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-ce00543b-25d4-4365-a335-ef8705ff5458:pvc-ce00543b-25d4-4365-a335-ef8705ff5458] I0513 09:04:31.262063 1 azure_controller_vmss.go:162] azureDisk - detach disk: name pvc-ce00543b-25d4-4365-a335-ef8705ff5458 uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-ce00543b-25d4-4365-a335-ef8705ff5458 I0513 09:04:31.262069 1 azure_controller_vmss.go:197] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-ce00543b-25d4-4365-a335-ef8705ff5458:pvc-ce00543b-25d4-4365-a335-ef8705ff5458]) ... skipping 65 lines ... I0513 09:05:03.157863 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000001","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-8d096273-3081-4d73-a5b8-a73e1c49e6cb","csi.storage.k8s.io/pvc/name":"test.csi.azure.comm5fmn","csi.storage.k8s.io/pvc/namespace":"fsgroupchangepolicy-6326","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-8d096273-3081-4d73-a5b8-a73e1c49e6cb"} I0513 09:05:03.217686 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-8d096273-3081-4d73-a5b8-a73e1c49e6cb to node k8s-agentpool1-42137015-vmss000001. 
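The detach sequence above is the mirror image of publish: ControllerUnpublishVolume removes the disk from the VMSS instance's data-disk list, and only after that can the PV be deleted. Continuing the earlier client sketch (IDs copied from the log, everything else illustrative):

	// Sketch only: detach (unpublish) the disk from the node once workloads are done with it.
	_, err = client.ControllerUnpublishVolume(ctx, &csi.ControllerUnpublishVolumeRequest{
		VolumeId: "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-ce00543b-25d4-4365-a335-ef8705ff5458",
		NodeId:   "k8s-agentpool1-42137015-vmss000002",
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Print("disk detached; the volume can now be deleted")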
I0513 09:05:03.217749 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-8d096273-3081-4d73-a5b8-a73e1c49e6cb to node k8s-agentpool1-42137015-vmss000001 I0513 09:05:04.764928 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 09:05:04.764961 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000000","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-68703498-d254-4f6c-97c6-521511d90b86"} I0513 09:05:04.765104 1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-68703498-d254-4f6c-97c6-521511d90b86 from node k8s-agentpool1-42137015-vmss000000 I0513 09:05:11.649804 1 azure_armclient.go:289] Received error in WaitForCompletionRef: 'context deadline exceeded' I0513 09:05:11.649864 1 azure_armclient.go:310] Received error in WaitForAsyncOperationCompletion: 'context deadline exceeded' I0513 09:05:11.649875 1 azure_armclient.go:520] Received error in WaitForAsyncOperationResult: 'context deadline exceeded', no response I0513 09:05:11.649891 1 azure_diskclient.go:201] Received error in disk.put.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1358d8de-c2f6-4a8f-8926-42df774718a2, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context deadline exceeded I0513 09:05:11.649956 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=14.999770421000001 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="" result_code="failed" E0513 09:05:11.649981 1 utils.go:82] GRPC error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context deadline exceeded I0513 09:05:12.663416 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume I0513 09:05:12.663445 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":5368709120},"name":"pvc-1358d8de-c2f6-4a8f-8926-42df774718a2","parameters":{"csi.storage.k8s.io/pv/name":"pvc-1358d8de-c2f6-4a8f-8926-42df774718a2","csi.storage.k8s.io/pvc/name":"pvc-ng95w","csi.storage.k8s.io/pvc/namespace":"snapshotting-7330"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}],"volume_content_source":{"Type":{"Snapshot":{"snapshot_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/snapshots/snapshot-a2700a94-7297-4c1a-a2ed-dc379da64682"}}}} I0513 09:05:12.663636 1 controllerserver.go:174] begin to create azure disk(pvc-1358d8de-c2f6-4a8f-8926-42df774718a2) account type(StandardSSD_LRS) rg(kubetest-mfxpbga4) location(westeurope) size(5) diskZone() maxShares(0) I0513 09:05:12.663663 1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-1358d8de-c2f6-4a8f-8926-42df774718a2 StorageAccountType:StandardSSD_LRS Size:5 I0513 09:05:13.589941 1 azure_controller_vmss.go:210] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - detach 
disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67:pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67]) returned with <nil> I0513 09:05:13.590007 1 azure_controller_common.go:365] azureDisk - detach disk(pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67) succeeded ... skipping 3 lines ... I0513 09:05:13.590173 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000001, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 09:05:13.602659 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 09:05:13.602686 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000001","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67"} I0513 09:05:13.602792 1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67 from node k8s-agentpool1-42137015-vmss000001 I0513 09:05:13.713093 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-8d096273-3081-4d73-a5b8-a73e1c49e6cb lun 2 to node k8s-agentpool1-42137015-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-8d096273-3081-4d73-a5b8-a73e1c49e6cb:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8d096273-3081-4d73-a5b8-a73e1c49e6cb false 2})] I0513 09:05:13.713145 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-8d096273-3081-4d73-a5b8-a73e1c49e6cb:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8d096273-3081-4d73-a5b8-a73e1c49e6cb false 2})]) I0513 09:05:13.952560 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-8d096273-3081-4d73-a5b8-a73e1c49e6cb:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8d096273-3081-4d73-a5b8-a73e1c49e6cb false 2})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 09:05:14.948436 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 09:05:14.948466 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000000","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868"} I0513 09:05:14.948591 1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e2f22403-c5a1-47b6-80d0-da7fc3492868 from node k8s-agentpool1-42137015-vmss000000 I0513 09:05:15.053852 1 azure_managedDiskController.go:266] azureDisk - created new MD 
Name:pvc-1358d8de-c2f6-4a8f-8926-42df774718a2 StorageAccountType:StandardSSD_LRS Size:5 I0513 09:05:15.053913 1 controllerserver.go:258] create azure disk(pvc-1358d8de-c2f6-4a8f-8926-42df774718a2) account type(StandardSSD_LRS) rg(kubetest-mfxpbga4) location(westeurope) size(5) tags(map[kubernetes.io-created-for-pv-name:pvc-1358d8de-c2f6-4a8f-8926-42df774718a2 kubernetes.io-created-for-pvc-name:pvc-ng95w kubernetes.io-created-for-pvc-namespace:snapshotting-7330]) successfully I0513 09:05:15.053967 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=2.390277799 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1358d8de-c2f6-4a8f-8926-42df774718a2" result_code="succeeded" ... skipping 48 lines ... I0513 09:05:52.187354 1 azure_controller_common.go:365] azureDisk - detach disk(pvc-5c42830b-ab97-4be2-8a67-67db52cd0f9e, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-5c42830b-ab97-4be2-8a67-67db52cd0f9e) succeeded I0513 09:05:52.187390 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-5c42830b-ab97-4be2-8a67-67db52cd0f9e from node k8s-agentpool1-42137015-vmss000000 successfully I0513 09:05:52.187418 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=36.250981968 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-5c42830b-ab97-4be2-8a67-67db52cd0f9e" node="k8s-agentpool1-42137015-vmss000000" result_code="succeeded" I0513 09:05:52.187429 1 utils.go:84] GRPC response: {} I0513 09:05:52.187500 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1358d8de-c2f6-4a8f-8926-42df774718a2 lun 0 to node k8s-agentpool1-42137015-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-1358d8de-c2f6-4a8f-8926-42df774718a2:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-1358d8de-c2f6-4a8f-8926-42df774718a2 false 0})] I0513 09:05:52.187538 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-1358d8de-c2f6-4a8f-8926-42df774718a2:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-1358d8de-c2f6-4a8f-8926-42df774718a2 false 0})]) I0513 09:05:52.420738 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-1358d8de-c2f6-4a8f-8926-42df774718a2:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-1358d8de-c2f6-4a8f-8926-42df774718a2 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) 
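pvc-1358d8de-… above is provisioned from the snapshot taken earlier in the run: its CreateVolume request carries a volume_content_source pointing at the snapshot ID, and the controller creates the new managed disk from that snapshot. Continuing the client sketch (IDs copied from the log, everything else illustrative):

	// Sketch only: restore a volume from an existing snapshot via CreateVolume.
	restored, err := client.CreateVolume(ctx, &csi.CreateVolumeRequest{
		Name:          "pvc-1358d8de-c2f6-4a8f-8926-42df774718a2",
		CapacityRange: &csi.CapacityRange{RequiredBytes: 5 << 30}, // 5 GiB, as in the log
		VolumeCapabilities: []*csi.VolumeCapability{{
			AccessType: &csi.VolumeCapability_Mount{Mount: &csi.VolumeCapability_MountVolume{}},
			AccessMode: &csi.VolumeCapability_AccessMode{Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_WRITER},
		}},
		VolumeContentSource: &csi.VolumeContentSource{
			Type: &csi.VolumeContentSource_Snapshot{
				Snapshot: &csi.VolumeContentSource_SnapshotSource{
					SnapshotId: "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/snapshots/snapshot-a2700a94-7297-4c1a-a2ed-dc379da64682",
				},
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("restored volume: %s", restored.GetVolume().GetVolumeId())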
I0513 09:06:07.615473 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1358d8de-c2f6-4a8f-8926-42df774718a2 attached to node k8s-agentpool1-42137015-vmss000000. I0513 09:06:07.615515 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1358d8de-c2f6-4a8f-8926-42df774718a2 to node k8s-agentpool1-42137015-vmss000000 successfully I0513 09:06:07.615547 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=42.650193078 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1358d8de-c2f6-4a8f-8926-42df774718a2" node="k8s-agentpool1-42137015-vmss000000" result_code="succeeded" I0513 09:06:07.615560 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0513 09:06:09.722683 1 azure_controller_vmss.go:210] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-36d655b2-986e-4c4b-a08f-dddbcb395652:pvc-36d655b2-986e-4c4b-a08f-dddbcb395652 /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67:pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67 /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-a39a1dd9-1a3e-44f8-8199-613bea8d14be:pvc-a39a1dd9-1a3e-44f8-8199-613bea8d14be]) returned with <nil> I0513 09:06:09.722743 1 azure_controller_common.go:365] azureDisk - detach disk(pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-445666f7-25b4-4957-a493-9fd7cf0ddd67) succeeded ... skipping 103 lines ... I0513 09:07:06.895170 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000001","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-28f1b526-3724-4796-a501-fbb23982653d","csi.storage.k8s.io/pvc/name":"test.csi.azure.comzrkfb","csi.storage.k8s.io/pvc/namespace":"provisioning-6286","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-28f1b526-3724-4796-a501-fbb23982653d"} I0513 09:07:06.929547 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-28f1b526-3724-4796-a501-fbb23982653d to node k8s-agentpool1-42137015-vmss000001. 
I0513 09:07:06.929595 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000001, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 09:07:07.030077 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-28f1b526-3724-4796-a501-fbb23982653d to node k8s-agentpool1-42137015-vmss000001 I0513 09:07:07.030133 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-28f1b526-3724-4796-a501-fbb23982653d lun 0 to node k8s-agentpool1-42137015-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-28f1b526-3724-4796-a501-fbb23982653d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-28f1b526-3724-4796-a501-fbb23982653d false 0})] I0513 09:07:07.030157 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-28f1b526-3724-4796-a501-fbb23982653d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-28f1b526-3724-4796-a501-fbb23982653d false 0})]) I0513 09:07:07.199899 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-28f1b526-3724-4796-a501-fbb23982653d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-28f1b526-3724-4796-a501-fbb23982653d false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 09:07:08.686141 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 09:07:08.686167 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000000","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-1f5df22c-e620-48f8-be16-885720f30b0f","csi.storage.k8s.io/pvc/name":"test.csi.azure.com2n55g","csi.storage.k8s.io/pvc/namespace":"volumeio-9364","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1f5df22c-e620-48f8-be16-885720f30b0f"} I0513 09:07:08.756421 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1f5df22c-e620-48f8-be16-885720f30b0f to node k8s-agentpool1-42137015-vmss000000. I0513 09:07:08.756479 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1f5df22c-e620-48f8-be16-885720f30b0f to node k8s-agentpool1-42137015-vmss000000 I0513 09:07:17.351724 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-28f1b526-3724-4796-a501-fbb23982653d attached to node k8s-agentpool1-42137015-vmss000001. 
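"GetDiskLun returned: <nil>" above means the disk is not yet attached, so the controller has to pick a free LUN on the target VM before issuing the update; that is why the attaches in this log land on LUN 0, 1, or 2 depending on what the node already holds. A standalone sketch of the lowest-free-LUN idea (illustrative only, not the driver's implementation):

package main

import "fmt"

// nextFreeLUN returns the lowest LUN in [0, maxLUNs) that is not already used.
func nextFreeLUN(used map[int32]bool, maxLUNs int32) (int32, bool) {
	for lun := int32(0); lun < maxLUNs; lun++ {
		if !used[lun] {
			return lun, true
		}
	}
	return 0, false // VM is out of data-disk slots
}

func main() {
	used := map[int32]bool{0: true, 1: true} // LUNs already occupied on the node
	if lun, ok := nextFreeLUN(used, 32); ok {
		fmt.Println("attach new disk at LUN", lun) // prints 2, matching the log
	}
}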
I0513 09:07:17.351759 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-28f1b526-3724-4796-a501-fbb23982653d to node k8s-agentpool1-42137015-vmss000001 successfully ... skipping 7 lines ... I0513 09:07:17.981883 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000000, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 09:07:17.995841 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 09:07:17.995865 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000000","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1358d8de-c2f6-4a8f-8926-42df774718a2"} I0513 09:07:17.995982 1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1358d8de-c2f6-4a8f-8926-42df774718a2 from node k8s-agentpool1-42137015-vmss000000 I0513 09:07:18.093625 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1f5df22c-e620-48f8-be16-885720f30b0f lun 0 to node k8s-agentpool1-42137015-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-1f5df22c-e620-48f8-be16-885720f30b0f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-1f5df22c-e620-48f8-be16-885720f30b0f false 0})] I0513 09:07:18.093687 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-1f5df22c-e620-48f8-be16-885720f30b0f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-1f5df22c-e620-48f8-be16-885720f30b0f false 0})]) I0513 09:07:18.392426 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-1f5df22c-e620-48f8-be16-885720f30b0f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-1f5df22c-e620-48f8-be16-885720f30b0f false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 09:07:18.924499 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 09:07:18.924532 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1358d8de-c2f6-4a8f-8926-42df774718a2"} I0513 09:07:18.924623 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1358d8de-c2f6-4a8f-8926-42df774718a2) I0513 09:07:24.161430 1 azure_managedDiskController.go:303] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1358d8de-c2f6-4a8f-8926-42df774718a2 I0513 09:07:24.161491 1 controllerserver.go:301] delete azure 
disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1358d8de-c2f6-4a8f-8926-42df774718a2) returned with <nil> I0513 09:07:24.161548 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=5.23688793 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1358d8de-c2f6-4a8f-8926-42df774718a2" result_code="succeeded" ... skipping 37 lines ... I0513 09:08:04.618111 1 azure_controller_common.go:341] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1f5df22c-e620-48f8-be16-885720f30b0f from node k8s-agentpool1-42137015-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-1f5df22c-e620-48f8-be16-885720f30b0f:pvc-1f5df22c-e620-48f8-be16-885720f30b0f] E0513 09:08:04.618160 1 azure_controller_vmss.go:171] detach azure disk on node(k8s-agentpool1-42137015-vmss000000): disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-1f5df22c-e620-48f8-be16-885720f30b0f:pvc-1f5df22c-e620-48f8-be16-885720f30b0f]) not found I0513 09:08:04.618169 1 azure_controller_vmss.go:197] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-1f5df22c-e620-48f8-be16-885720f30b0f:pvc-1f5df22c-e620-48f8-be16-885720f30b0f]) I0513 09:08:05.087840 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 09:08:05.087871 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1f5df22c-e620-48f8-be16-885720f30b0f"} I0513 09:08:05.087986 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1f5df22c-e620-48f8-be16-885720f30b0f) I0513 09:08:05.088005 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1f5df22c-e620-48f8-be16-885720f30b0f) returned with failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1f5df22c-e620-48f8-be16-885720f30b0f) since it's in attaching or detaching state I0513 09:08:05.088048 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=3.16e-05 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1f5df22c-e620-48f8-be16-885720f30b0f" result_code="failed" E0513 09:08:05.088064 1 utils.go:82] GRPC error: failed to delete 
disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1f5df22c-e620-48f8-be16-885720f30b0f) since it's in attaching or detaching state I0513 09:08:06.089481 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 09:08:06.089515 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1f5df22c-e620-48f8-be16-885720f30b0f"} I0513 09:08:06.089609 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1f5df22c-e620-48f8-be16-885720f30b0f) I0513 09:08:06.089622 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1f5df22c-e620-48f8-be16-885720f30b0f) returned with failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1f5df22c-e620-48f8-be16-885720f30b0f) since it's in attaching or detaching state I0513 09:08:06.089680 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=3.9201e-05 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1f5df22c-e620-48f8-be16-885720f30b0f" result_code="failed" E0513 09:08:06.089702 1 utils.go:82] GRPC error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1f5df22c-e620-48f8-be16-885720f30b0f) since it's in attaching or detaching state I0513 09:08:08.091835 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 09:08:08.091892 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1f5df22c-e620-48f8-be16-885720f30b0f"} I0513 09:08:08.092035 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1f5df22c-e620-48f8-be16-885720f30b0f) I0513 09:08:08.092070 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1f5df22c-e620-48f8-be16-885720f30b0f) returned with failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1f5df22c-e620-48f8-be16-885720f30b0f) since it's in attaching or detaching state I0513 09:08:08.092156 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=6.58e-05 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1f5df22c-e620-48f8-be16-885720f30b0f" result_code="failed" E0513 09:08:08.092189 1 utils.go:82] GRPC error: failed to delete 
disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1f5df22c-e620-48f8-be16-885720f30b0f) since it's in attaching or detaching state I0513 09:08:09.869488 1 azure_controller_vmss.go:210] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-1f5df22c-e620-48f8-be16-885720f30b0f:pvc-1f5df22c-e620-48f8-be16-885720f30b0f]) returned with <nil> I0513 09:08:09.869547 1 azure_controller_common.go:365] azureDisk - detach disk(pvc-1f5df22c-e620-48f8-be16-885720f30b0f, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1f5df22c-e620-48f8-be16-885720f30b0f) succeeded I0513 09:08:09.869561 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1f5df22c-e620-48f8-be16-885720f30b0f from node k8s-agentpool1-42137015-vmss000000 successfully I0513 09:08:09.869595 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=5.367294018 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-1f5df22c-e620-48f8-be16-885720f30b0f" node="k8s-agentpool1-42137015-vmss000000" result_code="succeeded" I0513 09:08:09.869612 1 utils.go:84] GRPC response: {} I0513 09:08:12.093594 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume ... skipping 15 lines ... I0513 09:08:26.849244 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000000","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-26006b16-3fac-4fb7-a32f-3f1911ee1046","csi.storage.k8s.io/pvc/name":"test.csi.azure.comv9s46","csi.storage.k8s.io/pvc/namespace":"multivolume-3162","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-26006b16-3fac-4fb7-a32f-3f1911ee1046"} I0513 09:08:26.881922 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-26006b16-3fac-4fb7-a32f-3f1911ee1046 to node k8s-agentpool1-42137015-vmss000000. 
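The sequence above shows DeleteVolume being rejected repeatedly with "since it's in attaching or detaching state" at 09:08:05, 09:08:06 and 09:08:08, and only succeeding once the detach completes at 09:08:09. Below is a minimal caller-side sketch of that retry-with-backoff pattern, assuming a hypothetical deleteDisk helper and errDiskBusy error; it is an illustration of the spacing visible in the log, not the driver's or the CSI sidecars' actual retry code.

    package main

    import (
    	"context"
    	"errors"
    	"fmt"
    	"strings"
    	"time"
    )

    // errDiskBusy is a hypothetical stand-in for the driver's
    // "disk is in attaching or detaching state" failure.
    var errDiskBusy = errors.New("disk is in attaching or detaching state")

    // deleteDisk is a placeholder for the ARM call that deletes a managed disk.
    // It always succeeds here so the sketch stays runnable.
    func deleteDisk(ctx context.Context, diskURI string) error {
    	_ = diskURI
    	return nil
    }

    // deleteWithBackoff retries the delete while the disk is still transitioning,
    // roughly mirroring the 1s/2s/4s spacing visible in the log above.
    func deleteWithBackoff(ctx context.Context, diskURI string) error {
    	delay := time.Second
    	for attempt := 0; attempt < 5; attempt++ {
    		err := deleteDisk(ctx, diskURI)
    		if err == nil {
    			return nil
    		}
    		if !errors.Is(err, errDiskBusy) && !strings.Contains(err.Error(), "attaching or detaching") {
    			return err // not a retryable state, give up immediately
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-time.After(delay):
    			delay *= 2
    		}
    	}
    	return fmt.Errorf("disk %s still busy after retries", diskURI)
    }

    func main() {
    	if err := deleteWithBackoff(context.Background(), "/subscriptions/.../disks/pvc-example"); err != nil {
    		fmt.Println("delete failed:", err)
    		return
    	}
    	fmt.Println("delete succeeded")
    }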
I0513 09:08:26.881968 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000000, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 09:08:27.003278 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-26006b16-3fac-4fb7-a32f-3f1911ee1046 to node k8s-agentpool1-42137015-vmss000000 I0513 09:08:27.003339 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-26006b16-3fac-4fb7-a32f-3f1911ee1046 lun 0 to node k8s-agentpool1-42137015-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-26006b16-3fac-4fb7-a32f-3f1911ee1046:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-26006b16-3fac-4fb7-a32f-3f1911ee1046 false 0})] I0513 09:08:27.003369 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-26006b16-3fac-4fb7-a32f-3f1911ee1046:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-26006b16-3fac-4fb7-a32f-3f1911ee1046 false 0})]) I0513 09:08:27.375900 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-26006b16-3fac-4fb7-a32f-3f1911ee1046:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-26006b16-3fac-4fb7-a32f-3f1911ee1046 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 09:08:42.546579 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-26006b16-3fac-4fb7-a32f-3f1911ee1046 attached to node k8s-agentpool1-42137015-vmss000000. I0513 09:08:42.546632 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-26006b16-3fac-4fb7-a32f-3f1911ee1046 to node k8s-agentpool1-42137015-vmss000000 successfully I0513 09:08:42.546678 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.664732393 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-26006b16-3fac-4fb7-a32f-3f1911ee1046" node="k8s-agentpool1-42137015-vmss000000" result_code="succeeded" I0513 09:08:42.546698 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0513 09:09:02.472221 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 09:09:02.472247 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000001","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-28f1b526-3724-4796-a501-fbb23982653d"} ... skipping 48 lines ... 
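The ControllerPublishVolume entries above follow a check-then-attach pattern: GetDiskLun is consulted first, an attach is initiated only when no LUN is found, and the resulting LUN is returned to the node plugin in publish_context. A minimal sketch of that idempotency check follows; lookupLUN and attachDisk are hypothetical helpers standing in for the Azure calls, not the driver's real functions.

    package main

    import (
    	"context"
    	"fmt"
    )

    // lun models the slot a managed disk occupies on a VM.
    type lun int32

    // lookupLUN is a hypothetical stand-in for GetDiskLun: it reports the LUN if
    // the disk is already attached to the node, or found=false otherwise.
    func lookupLUN(ctx context.Context, diskURI, node string) (l lun, found bool, err error) {
    	return 0, false, nil
    }

    // attachDisk is a hypothetical stand-in for the VM update that attaches the
    // disk and reports which LUN it landed on.
    func attachDisk(ctx context.Context, diskURI, node string) (lun, error) {
    	return 0, nil
    }

    // publishVolume mirrors the idempotent flow in the log: reuse an existing
    // attachment when possible, attach otherwise, and hand the LUN back via the
    // publish context.
    func publishVolume(ctx context.Context, diskURI, node string) (map[string]string, error) {
    	if l, found, err := lookupLUN(ctx, diskURI, node); err != nil {
    		return nil, err
    	} else if found {
    		return map[string]string{"LUN": fmt.Sprintf("%d", l)}, nil
    	}
    	l, err := attachDisk(ctx, diskURI, node)
    	if err != nil {
    		return nil, err
    	}
    	return map[string]string{"LUN": fmt.Sprintf("%d", l)}, nil
    }

    func main() {
    	pc, _ := publishVolume(context.Background(), "/subscriptions/.../disks/pvc-example", "k8s-agentpool1-42137015-vmss000000")
    	fmt.Println(pc) // map[LUN:0], matching the {"publish_context":{"LUN":"0"}} responses above
    }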
I0513 09:09:48.099522 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 09:09:48.099549 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000001","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-b9ef4710-586d-4375-93ba-6ef6d10d1089","csi.storage.k8s.io/pvc/name":"inline-volume-tester-xkkn8-my-volume-0","csi.storage.k8s.io/pvc/namespace":"ephemeral-2588","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-b9ef4710-586d-4375-93ba-6ef6d10d1089"} I0513 09:09:48.125364 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-b9ef4710-586d-4375-93ba-6ef6d10d1089 to node k8s-agentpool1-42137015-vmss000001. I0513 09:09:48.125420 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-b9ef4710-586d-4375-93ba-6ef6d10d1089 to node k8s-agentpool1-42137015-vmss000001 I0513 09:09:48.125446 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-b9ef4710-586d-4375-93ba-6ef6d10d1089 lun 0 to node k8s-agentpool1-42137015-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-b9ef4710-586d-4375-93ba-6ef6d10d1089:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-b9ef4710-586d-4375-93ba-6ef6d10d1089 false 0})] I0513 09:09:48.125477 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-b9ef4710-586d-4375-93ba-6ef6d10d1089:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-b9ef4710-586d-4375-93ba-6ef6d10d1089 false 0})]) I0513 09:09:48.403516 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-b9ef4710-586d-4375-93ba-6ef6d10d1089:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-b9ef4710-586d-4375-93ba-6ef6d10d1089 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 09:09:50.485835 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 09:09:50.485864 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-26006b16-3fac-4fb7-a32f-3f1911ee1046"} I0513 09:09:50.485947 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-26006b16-3fac-4fb7-a32f-3f1911ee1046) I0513 09:09:55.923112 1 azure_managedDiskController.go:303] azureDisk - deleted a managed disk: 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-26006b16-3fac-4fb7-a32f-3f1911ee1046 I0513 09:09:55.923151 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-26006b16-3fac-4fb7-a32f-3f1911ee1046) returned with <nil> I0513 09:09:55.923192 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=5.43722084 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-26006b16-3fac-4fb7-a32f-3f1911ee1046" result_code="succeeded" ... skipping 18 lines ... I0513 09:10:06.718373 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000000","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-e467c742-e267-43dc-b55d-6227bb601224","csi.storage.k8s.io/pvc/name":"test.csi.azure.coml4wq9","csi.storage.k8s.io/pvc/namespace":"volumemode-6563","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e467c742-e267-43dc-b55d-6227bb601224"} I0513 09:10:06.754572 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e467c742-e267-43dc-b55d-6227bb601224 to node k8s-agentpool1-42137015-vmss000000. 
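Each "Observed Request Latency" line records the wall-clock duration of one controller RPC together with labels such as request type, resource group, subscription, source, volume id and result code. The sketch below shows how such a latency could be observed with a Prometheus histogram; the metric name, label set and timed helper are illustrative assumptions, since the real driver wires its metrics through its own (cloud-provider-azure based) metrics package.

    package main

    import (
    	"fmt"
    	"time"

    	"github.com/prometheus/client_golang/prometheus"
    )

    // operationLatency is an illustrative histogram for controller operation durations.
    var operationLatency = prometheus.NewHistogramVec(
    	prometheus.HistogramOpts{
    		Name: "azuredisk_csi_driver_operation_duration_seconds",
    		Help: "Latency of controller operations in seconds.",
    	},
    	[]string{"request", "resource_group", "result_code"},
    )

    func init() {
    	prometheus.MustRegister(operationLatency)
    }

    // timed wraps an operation and observes its duration with a success/failure label,
    // analogous to the succeeded/failed result_code values in the log.
    func timed(request, resourceGroup string, op func() error) error {
    	start := time.Now()
    	err := op()
    	result := "succeeded"
    	if err != nil {
    		result = "failed"
    	}
    	operationLatency.WithLabelValues(request, resourceGroup, result).Observe(time.Since(start).Seconds())
    	return err
    }

    func main() {
    	_ = timed("delete_volume", "kubetest-mfxpbga4", func() error {
    		time.Sleep(10 * time.Millisecond) // stand-in for the ARM call
    		return nil
    	})
    	fmt.Println("latency observed")
    }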
I0513 09:10:06.754614 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000000, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 09:10:06.860912 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e467c742-e267-43dc-b55d-6227bb601224 to node k8s-agentpool1-42137015-vmss000000 I0513 09:10:06.860972 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e467c742-e267-43dc-b55d-6227bb601224 lun 0 to node k8s-agentpool1-42137015-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-e467c742-e267-43dc-b55d-6227bb601224:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e467c742-e267-43dc-b55d-6227bb601224 false 0})] I0513 09:10:06.860997 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-e467c742-e267-43dc-b55d-6227bb601224:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e467c742-e267-43dc-b55d-6227bb601224 false 0})]) I0513 09:10:07.110452 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-e467c742-e267-43dc-b55d-6227bb601224:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e467c742-e267-43dc-b55d-6227bb601224 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 09:10:08.525593 1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-cf9eb154-d353-4fd2-a11a-b8eead16b0ba StorageAccountType:StandardSSD_LRS Size:5 I0513 09:10:08.525697 1 controllerserver.go:258] create azure disk(pvc-cf9eb154-d353-4fd2-a11a-b8eead16b0ba) account type(StandardSSD_LRS) rg(kubetest-mfxpbga4) location(westeurope) size(5) tags(map[kubernetes.io-created-for-pv-name:pvc-cf9eb154-d353-4fd2-a11a-b8eead16b0ba kubernetes.io-created-for-pvc-name:inline-volume-tester2-99xlt-my-volume-0 kubernetes.io-created-for-pvc-namespace:ephemeral-2588]) successfully I0513 09:10:08.526515 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=2.446491022 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-cf9eb154-d353-4fd2-a11a-b8eead16b0ba" result_code="succeeded" I0513 09:10:08.526731 1 utils.go:84] GRPC response: 
{"volume":{"accessible_topology":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"capacity_bytes":5368709120,"content_source":{"Type":null},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-cf9eb154-d353-4fd2-a11a-b8eead16b0ba","csi.storage.k8s.io/pvc/name":"inline-volume-tester2-99xlt-my-volume-0","csi.storage.k8s.io/pvc/namespace":"ephemeral-2588","requestedsizegib":"5"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-cf9eb154-d353-4fd2-a11a-b8eead16b0ba"}} I0513 09:10:09.138633 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 09:10:09.138663 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000002","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-cf9eb154-d353-4fd2-a11a-b8eead16b0ba","csi.storage.k8s.io/pvc/name":"inline-volume-tester2-99xlt-my-volume-0","csi.storage.k8s.io/pvc/namespace":"ephemeral-2588","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-cf9eb154-d353-4fd2-a11a-b8eead16b0ba"} I0513 09:10:09.188821 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-cf9eb154-d353-4fd2-a11a-b8eead16b0ba to node k8s-agentpool1-42137015-vmss000002. I0513 09:10:09.188917 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-cf9eb154-d353-4fd2-a11a-b8eead16b0ba to node k8s-agentpool1-42137015-vmss000002 I0513 09:10:09.188943 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-cf9eb154-d353-4fd2-a11a-b8eead16b0ba lun 0 to node k8s-agentpool1-42137015-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-cf9eb154-d353-4fd2-a11a-b8eead16b0ba:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-cf9eb154-d353-4fd2-a11a-b8eead16b0ba false 0})] I0513 09:10:09.188969 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-cf9eb154-d353-4fd2-a11a-b8eead16b0ba:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-cf9eb154-d353-4fd2-a11a-b8eead16b0ba false 0})]) I0513 09:10:09.383311 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-cf9eb154-d353-4fd2-a11a-b8eead16b0ba:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-cf9eb154-d353-4fd2-a11a-b8eead16b0ba false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 09:10:37.593848 1 controllerserver.go:386] Attach operation successful: volume 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e467c742-e267-43dc-b55d-6227bb601224 attached to node k8s-agentpool1-42137015-vmss000000. I0513 09:10:37.593890 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e467c742-e267-43dc-b55d-6227bb601224 to node k8s-agentpool1-42137015-vmss000000 successfully I0513 09:10:37.593925 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=30.839336253 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e467c742-e267-43dc-b55d-6227bb601224" node="k8s-agentpool1-42137015-vmss000000" result_code="succeeded" I0513 09:10:37.593942 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0513 09:10:37.599986 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 09:10:37.600004 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000000","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-e467c742-e267-43dc-b55d-6227bb601224","csi.storage.k8s.io/pvc/name":"test.csi.azure.coml4wq9","csi.storage.k8s.io/pvc/namespace":"volumemode-6563","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e467c742-e267-43dc-b55d-6227bb601224"} ... skipping 109 lines ... I0513 09:13:16.837326 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000001, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 09:13:16.858940 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-25cf1d12-6531-4fa4-9155-5d4d92a095da to node k8s-agentpool1-42137015-vmss000001. 
I0513 09:13:16.906892 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e9330ec9-e5c8-434b-aa43-8ad4593bc449 to node k8s-agentpool1-42137015-vmss000001 I0513 09:13:16.906932 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e9330ec9-e5c8-434b-aa43-8ad4593bc449 lun 0 to node k8s-agentpool1-42137015-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-e9330ec9-e5c8-434b-aa43-8ad4593bc449:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e9330ec9-e5c8-434b-aa43-8ad4593bc449 false 0})] I0513 09:13:16.906955 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-e9330ec9-e5c8-434b-aa43-8ad4593bc449:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e9330ec9-e5c8-434b-aa43-8ad4593bc449 false 0})]) I0513 09:13:16.907053 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-25cf1d12-6531-4fa4-9155-5d4d92a095da to node k8s-agentpool1-42137015-vmss000001 I0513 09:13:17.150177 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-e9330ec9-e5c8-434b-aa43-8ad4593bc449:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e9330ec9-e5c8-434b-aa43-8ad4593bc449 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 09:13:27.258845 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e9330ec9-e5c8-434b-aa43-8ad4593bc449 attached to node k8s-agentpool1-42137015-vmss000001. I0513 09:13:27.258881 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e9330ec9-e5c8-434b-aa43-8ad4593bc449 to node k8s-agentpool1-42137015-vmss000001 successfully I0513 09:13:27.258917 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=10.421608009 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e9330ec9-e5c8-434b-aa43-8ad4593bc449" node="k8s-agentpool1-42137015-vmss000001" result_code="succeeded" I0513 09:13:27.258956 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000001, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 09:13:27.258933 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0513 09:13:27.268965 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume ... skipping 2 lines ... 
I0513 09:13:27.391879 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-25cf1d12-6531-4fa4-9155-5d4d92a095da lun 1 to node k8s-agentpool1-42137015-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-25cf1d12-6531-4fa4-9155-5d4d92a095da:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-25cf1d12-6531-4fa4-9155-5d4d92a095da false 1})] I0513 09:13:27.391945 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-25cf1d12-6531-4fa4-9155-5d4d92a095da:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-25cf1d12-6531-4fa4-9155-5d4d92a095da false 1})]) I0513 09:13:27.391981 1 azure_controller_common.go:453] azureDisk - find disk: lun 0 name pvc-e9330ec9-e5c8-434b-aa43-8ad4593bc449 uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e9330ec9-e5c8-434b-aa43-8ad4593bc449 I0513 09:13:27.392006 1 controllerserver.go:375] Attach operation is successful. volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e9330ec9-e5c8-434b-aa43-8ad4593bc449 is already attached to node k8s-agentpool1-42137015-vmss000001 at lun 0. I0513 09:13:27.392044 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=0.07801138 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-e9330ec9-e5c8-434b-aa43-8ad4593bc449" node="k8s-agentpool1-42137015-vmss000001" result_code="succeeded" I0513 09:13:27.392057 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0513 09:13:27.587322 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-25cf1d12-6531-4fa4-9155-5d4d92a095da:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-25cf1d12-6531-4fa4-9155-5d4d92a095da false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 09:13:37.744473 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-25cf1d12-6531-4fa4-9155-5d4d92a095da attached to node k8s-agentpool1-42137015-vmss000001. 
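On vmss000001 the second disk is attached at lun 1 because lun 0 is already occupied, and the duplicate publish for the first disk short-circuits with "is already attached ... at lun 0". A minimal next-free-LUN selection over the LUNs already in use on a VM is sketched below; the helper is hypothetical, as the real driver derives the used set from the VM's data-disk list and the maximum LUN count from the VM size.

    package main

    import "fmt"

    // nextFreeLUN returns the smallest LUN not present in used, or an error when
    // the VM has no free slots left (maxLUNs depends on the VM size).
    func nextFreeLUN(used []int32, maxLUNs int32) (int32, error) {
    	taken := make(map[int32]bool, len(used))
    	for _, l := range used {
    		taken[l] = true
    	}
    	for l := int32(0); l < maxLUNs; l++ {
    		if !taken[l] {
    			return l, nil
    		}
    	}
    	return -1, fmt.Errorf("no free LUN: all %d slots in use", maxLUNs)
    }

    func main() {
    	used := []int32{0} // pvc-e9330ec9 already sits at lun 0, as in the log
    	l, err := nextFreeLUN(used, 32)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("next LUN:", l) // 1, matching the lun 1 attach above
    }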
I0513 09:13:37.744511 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-25cf1d12-6531-4fa4-9155-5d4d92a095da to node k8s-agentpool1-42137015-vmss000001 successfully I0513 09:13:37.744553 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=20.885593201 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-25cf1d12-6531-4fa4-9155-5d4d92a095da" node="k8s-agentpool1-42137015-vmss000001" result_code="succeeded" I0513 09:13:37.744573 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} I0513 09:13:37.755844 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 09:13:37.755865 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000001","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-25cf1d12-6531-4fa4-9155-5d4d92a095da","csi.storage.k8s.io/pvc/name":"test.csi.azure.com7krvh","csi.storage.k8s.io/pvc/namespace":"multivolume-1609","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-25cf1d12-6531-4fa4-9155-5d4d92a095da"} ... skipping 64 lines ... I0513 09:15:53.138401 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000001","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-99213a25-3a02-4dee-8be5-7913c3fcae7a","csi.storage.k8s.io/pvc/name":"test.csi.azure.comrl9xk","csi.storage.k8s.io/pvc/namespace":"volume-4234","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-99213a25-3a02-4dee-8be5-7913c3fcae7a"} I0513 09:15:53.163393 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-99213a25-3a02-4dee-8be5-7913c3fcae7a to node k8s-agentpool1-42137015-vmss000001. 
I0513 09:15:53.163452 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000001, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 09:15:53.256144 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-99213a25-3a02-4dee-8be5-7913c3fcae7a to node k8s-agentpool1-42137015-vmss000001 I0513 09:15:53.256199 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-99213a25-3a02-4dee-8be5-7913c3fcae7a lun 0 to node k8s-agentpool1-42137015-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-99213a25-3a02-4dee-8be5-7913c3fcae7a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-99213a25-3a02-4dee-8be5-7913c3fcae7a false 0})] I0513 09:15:53.256223 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-99213a25-3a02-4dee-8be5-7913c3fcae7a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-99213a25-3a02-4dee-8be5-7913c3fcae7a false 0})]) I0513 09:15:53.453124 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-99213a25-3a02-4dee-8be5-7913c3fcae7a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-99213a25-3a02-4dee-8be5-7913c3fcae7a false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 09:16:03.603140 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-99213a25-3a02-4dee-8be5-7913c3fcae7a attached to node k8s-agentpool1-42137015-vmss000001. 
I0513 09:16:03.603183 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-99213a25-3a02-4dee-8be5-7913c3fcae7a to node k8s-agentpool1-42137015-vmss000001 successfully I0513 09:16:03.603229 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=10.439815629 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-99213a25-3a02-4dee-8be5-7913c3fcae7a" node="k8s-agentpool1-42137015-vmss000001" result_code="succeeded" I0513 09:16:03.603248 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0513 09:16:03.612664 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 09:16:03.612688 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000001","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-99213a25-3a02-4dee-8be5-7913c3fcae7a","csi.storage.k8s.io/pvc/name":"test.csi.azure.comrl9xk","csi.storage.k8s.io/pvc/namespace":"volume-4234","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-99213a25-3a02-4dee-8be5-7913c3fcae7a"} ... skipping 33 lines ... I0513 09:17:26.841620 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000001","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-ac5ce671-91fb-4ecc-9b1a-c42810087b64","csi.storage.k8s.io/pvc/name":"pvc-azuredisk","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-ac5ce671-91fb-4ecc-9b1a-c42810087b64"} I0513 09:17:26.888087 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-ac5ce671-91fb-4ecc-9b1a-c42810087b64 to node k8s-agentpool1-42137015-vmss000001. 
I0513 09:17:26.888135 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000001, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 09:17:26.994395 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-ac5ce671-91fb-4ecc-9b1a-c42810087b64 to node k8s-agentpool1-42137015-vmss000001 I0513 09:17:26.994444 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-ac5ce671-91fb-4ecc-9b1a-c42810087b64 lun 0 to node k8s-agentpool1-42137015-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-ac5ce671-91fb-4ecc-9b1a-c42810087b64:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ac5ce671-91fb-4ecc-9b1a-c42810087b64 false 0})] I0513 09:17:26.994467 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-ac5ce671-91fb-4ecc-9b1a-c42810087b64:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ac5ce671-91fb-4ecc-9b1a-c42810087b64 false 0})]) I0513 09:17:27.220409 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-ac5ce671-91fb-4ecc-9b1a-c42810087b64:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ac5ce671-91fb-4ecc-9b1a-c42810087b64 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 09:17:37.362410 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-ac5ce671-91fb-4ecc-9b1a-c42810087b64 attached to node k8s-agentpool1-42137015-vmss000001. 
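Several attach cycles begin with "Couldn't find VMSS VM with nodeName ..., refreshing the cache": the VM is looked up in a local cache first, and the scale set is re-listed only on a miss. Below is a minimal cache-miss-then-refresh sketch with a hypothetical load function standing in for the ARM list call; the driver's actual cache additionally handles TTL expiry and per-scale-set locking.

    package main

    import (
    	"fmt"
    	"sync"
    )

    type vmssVM struct {
    	Name       string
    	InstanceID string
    }

    type vmCache struct {
    	mu   sync.Mutex
    	vms  map[string]vmssVM
    	load func() (map[string]vmssVM, error) // re-lists the scale set on a miss
    }

    // get returns the cached VM for nodeName, refreshing the cache once when the
    // entry is missing, which is what the "refreshing the cache" log line signals.
    func (c *vmCache) get(nodeName string) (vmssVM, error) {
    	c.mu.Lock()
    	defer c.mu.Unlock()
    	if vm, ok := c.vms[nodeName]; ok {
    		return vm, nil
    	}
    	fresh, err := c.load()
    	if err != nil {
    		return vmssVM{}, err
    	}
    	c.vms = fresh
    	vm, ok := c.vms[nodeName]
    	if !ok {
    		return vmssVM{}, fmt.Errorf("VMSS VM %q not found after refresh", nodeName)
    	}
    	return vm, nil
    }

    func main() {
    	c := &vmCache{
    		vms: map[string]vmssVM{},
    		load: func() (map[string]vmssVM, error) { // hypothetical ARM list call
    			return map[string]vmssVM{
    				"k8s-agentpool1-42137015-vmss000001": {Name: "k8s-agentpool1-42137015-vmss000001", InstanceID: "1"},
    			}, nil
    		},
    	}
    	vm, err := c.get("k8s-agentpool1-42137015-vmss000001")
    	fmt.Println(vm.InstanceID, err)
    }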
I0513 09:17:37.362463 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-ac5ce671-91fb-4ecc-9b1a-c42810087b64 to node k8s-agentpool1-42137015-vmss000001 successfully I0513 09:17:37.362493 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=10.474398422 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-ac5ce671-91fb-4ecc-9b1a-c42810087b64" node="k8s-agentpool1-42137015-vmss000001" result_code="succeeded" I0513 09:17:37.362508 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0513 09:17:37.373317 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 09:17:37.373335 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000001","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-ac5ce671-91fb-4ecc-9b1a-c42810087b64","csi.storage.k8s.io/pvc/name":"pvc-azuredisk","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-ac5ce671-91fb-4ecc-9b1a-c42810087b64"} ... skipping 14 lines ... I0513 09:17:46.888417 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 09:17:46.888445 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000002","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-f53ad559-8693-462e-8a46-a15d52563d19","csi.storage.k8s.io/pvc/name":"persistent-storage-statefulset-azuredisk-0","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-f53ad559-8693-462e-8a46-a15d52563d19"} I0513 09:17:46.937565 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-f53ad559-8693-462e-8a46-a15d52563d19 to node k8s-agentpool1-42137015-vmss000002. 
I0513 09:17:46.937621 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-f53ad559-8693-462e-8a46-a15d52563d19 to node k8s-agentpool1-42137015-vmss000002 I0513 09:17:46.937645 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-f53ad559-8693-462e-8a46-a15d52563d19 lun 0 to node k8s-agentpool1-42137015-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-f53ad559-8693-462e-8a46-a15d52563d19:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f53ad559-8693-462e-8a46-a15d52563d19 false 0})] I0513 09:17:46.937669 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-f53ad559-8693-462e-8a46-a15d52563d19:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f53ad559-8693-462e-8a46-a15d52563d19 false 0})]) I0513 09:17:47.121319 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-f53ad559-8693-462e-8a46-a15d52563d19:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f53ad559-8693-462e-8a46-a15d52563d19 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 09:17:57.288317 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-f53ad559-8693-462e-8a46-a15d52563d19 attached to node k8s-agentpool1-42137015-vmss000002. 
I0513 09:17:57.288356 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-f53ad559-8693-462e-8a46-a15d52563d19 to node k8s-agentpool1-42137015-vmss000002 successfully I0513 09:17:57.288389 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=10.350814866 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-f53ad559-8693-462e-8a46-a15d52563d19" node="k8s-agentpool1-42137015-vmss000002" result_code="succeeded" I0513 09:17:57.288403 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0513 09:18:05.830657 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume I0513 09:18:05.830684 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":10737418240},"name":"pvc-73a4176c-3672-43d5-b1a2-96f42c50d99a","parameters":{"csi.storage.k8s.io/pv/name":"pvc-73a4176c-3672-43d5-b1a2-96f42c50d99a","csi.storage.k8s.io/pvc/name":"persistent-storage-statefulset-azuredisk-nonroot-0","csi.storage.k8s.io/pvc/namespace":"default","skuName":"StandardSSD_LRS"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]} ... skipping 6 lines ... I0513 09:18:08.930403 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 09:18:08.930429 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000000","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-73a4176c-3672-43d5-b1a2-96f42c50d99a","csi.storage.k8s.io/pvc/name":"persistent-storage-statefulset-azuredisk-nonroot-0","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-73a4176c-3672-43d5-b1a2-96f42c50d99a"} I0513 09:18:08.970128 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-73a4176c-3672-43d5-b1a2-96f42c50d99a to node k8s-agentpool1-42137015-vmss000000. 
I0513 09:18:08.970204 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-73a4176c-3672-43d5-b1a2-96f42c50d99a to node k8s-agentpool1-42137015-vmss000000 I0513 09:18:08.970247 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-73a4176c-3672-43d5-b1a2-96f42c50d99a lun 0 to node k8s-agentpool1-42137015-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-73a4176c-3672-43d5-b1a2-96f42c50d99a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-73a4176c-3672-43d5-b1a2-96f42c50d99a false 0})] I0513 09:18:08.970297 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-73a4176c-3672-43d5-b1a2-96f42c50d99a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-73a4176c-3672-43d5-b1a2-96f42c50d99a false 0})]) I0513 09:18:09.147999 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-73a4176c-3672-43d5-b1a2-96f42c50d99a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-73a4176c-3672-43d5-b1a2-96f42c50d99a false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 09:18:19.344784 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-73a4176c-3672-43d5-b1a2-96f42c50d99a attached to node k8s-agentpool1-42137015-vmss000000. 
I0513 09:18:19.344915 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-73a4176c-3672-43d5-b1a2-96f42c50d99a to node k8s-agentpool1-42137015-vmss000000 successfully I0513 09:18:19.344985 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=10.37482339 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-73a4176c-3672-43d5-b1a2-96f42c50d99a" node="k8s-agentpool1-42137015-vmss000000" result_code="succeeded" I0513 09:18:19.345015 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0513 09:18:29.384379 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume I0513 09:18:29.384411 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":10737418240},"name":"pvc-fcd8b488-31cb-4b5a-8928-d5807965dcf2","parameters":{"csi.storage.k8s.io/pv/name":"pvc-fcd8b488-31cb-4b5a-8928-d5807965dcf2","csi.storage.k8s.io/pvc/name":"nginx-azuredisk-ephemeral-azuredisk01","csi.storage.k8s.io/pvc/namespace":"default","skuName":"StandardSSD_LRS"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]} ... skipping 7 lines ... I0513 09:18:32.407248 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-42137015-vmss000000","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-fcd8b488-31cb-4b5a-8928-d5807965dcf2","csi.storage.k8s.io/pvc/name":"nginx-azuredisk-ephemeral-azuredisk01","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652429995785-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-fcd8b488-31cb-4b5a-8928-d5807965dcf2"} I0513 09:18:32.469418 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-fcd8b488-31cb-4b5a-8928-d5807965dcf2 to node k8s-agentpool1-42137015-vmss000000. 
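The CreateVolume requests above express the requested size as capacity_range.required_bytes (10737418240 here), while the provisioned managed disk and the requestedsizegib volume-context value are whole GiB, so the byte count is rounded up to the nearest GiB. A minimal sketch of that conversion, assuming the simple rounding below (the minimum-size handling and helper name are illustrative):

    package main

    import "fmt"

    const gib = 1024 * 1024 * 1024

    // roundUpGiB converts a CSI required_bytes value into the whole-GiB size an
    // Azure managed disk would be created with.
    func roundUpGiB(requiredBytes int64) int32 {
    	if requiredBytes <= 0 {
    		return 1 // a minimum size would apply; 1 GiB is used here only for illustration
    	}
    	return int32((requiredBytes + gib - 1) / gib)
    }

    func main() {
    	fmt.Println(roundUpGiB(10737418240)) // 10, matching requestedsizegib:"10"
    	fmt.Println(roundUpGiB(5368709120))  // 5, matching the 5 GiB volumes earlier in the log
    }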
I0513 09:18:32.469501 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000000, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 09:18:32.618150 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-fcd8b488-31cb-4b5a-8928-d5807965dcf2 to node k8s-agentpool1-42137015-vmss000000 I0513 09:18:32.618204 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-fcd8b488-31cb-4b5a-8928-d5807965dcf2 lun 1 to node k8s-agentpool1-42137015-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-fcd8b488-31cb-4b5a-8928-d5807965dcf2:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-fcd8b488-31cb-4b5a-8928-d5807965dcf2 false 1})] I0513 09:18:32.618237 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-fcd8b488-31cb-4b5a-8928-d5807965dcf2:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-fcd8b488-31cb-4b5a-8928-d5807965dcf2 false 1})]) I0513 09:18:33.981711 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-fcd8b488-31cb-4b5a-8928-d5807965dcf2:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-fcd8b488-31cb-4b5a-8928-d5807965dcf2 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 09:18:44.089645 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-fcd8b488-31cb-4b5a-8928-d5807965dcf2 attached to node k8s-agentpool1-42137015-vmss000000. 
I0513 09:18:44.089694 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-fcd8b488-31cb-4b5a-8928-d5807965dcf2 to node k8s-agentpool1-42137015-vmss000000 successfully I0513 09:18:44.089741 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=11.620313898 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-fcd8b488-31cb-4b5a-8928-d5807965dcf2" node="k8s-agentpool1-42137015-vmss000000" result_code="succeeded" I0513 09:18:44.089759 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} I0513 09:18:51.402775 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume I0513 09:18:51.402797 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":10737418240},"name":"pvc-7edfbcce-3d1c-4b32-a19b-397a9d1d64cc","parameters":{"csi.storage.k8s.io/pv/name":"pvc-7edfbcce-3d1c-4b32-a19b-397a9d1d64cc","csi.storage.k8s.io/pvc/name":"daemonset-azuredisk-ephemeral-hwrcp-azuredisk","csi.storage.k8s.io/pvc/namespace":"default","skuName":"StandardSSD_LRS"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]} ... skipping 35 lines ... I0513 09:18:54.508711 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-afba2e46-797e-43aa-b8c1-5f48be87747d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-afba2e46-797e-43aa-b8c1-5f48be87747d false 1})]) I0513 09:18:54.512329 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-f03f5f57-4dab-4a56-83c3-f923fc062ad3 to node k8s-agentpool1-42137015-vmss000000. 
I0513 09:18:54.512362 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-42137015-vmss000000, refreshing the cache(vmss: k8s-agentpool1-42137015-vmss, rg: kubetest-mfxpbga4) I0513 09:18:54.607396 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-f03f5f57-4dab-4a56-83c3-f923fc062ad3 to node k8s-agentpool1-42137015-vmss000000 I0513 09:18:54.607450 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-f03f5f57-4dab-4a56-83c3-f923fc062ad3 lun 2 to node k8s-agentpool1-42137015-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-f03f5f57-4dab-4a56-83c3-f923fc062ad3:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f03f5f57-4dab-4a56-83c3-f923fc062ad3 false 2})] I0513 09:18:54.607474 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-f03f5f57-4dab-4a56-83c3-f923fc062ad3:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f03f5f57-4dab-4a56-83c3-f923fc062ad3 false 2})]) I0513 09:18:54.722235 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-7edfbcce-3d1c-4b32-a19b-397a9d1d64cc:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-7edfbcce-3d1c-4b32-a19b-397a9d1d64cc false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 09:18:54.739359 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-afba2e46-797e-43aa-b8c1-5f48be87747d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-afba2e46-797e-43aa-b8c1-5f48be87747d false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 09:18:54.815252 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-mfxpbga4): vm(k8s-agentpool1-42137015-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-mfxpbga4/providers/microsoft.compute/disks/pvc-f03f5f57-4dab-4a56-83c3-f923fc062ad3:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f03f5f57-4dab-4a56-83c3-f923fc062ad3 false 2})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 09:19:04.836184 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-7edfbcce-3d1c-4b32-a19b-397a9d1d64cc attached to node k8s-agentpool1-42137015-vmss000001. 
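At 09:18:54 three attaches are in flight at once, one per node, each carrying its own diskMap for that node's VMSS VM update: disks bound for the same VM share one update, while independent nodes proceed in parallel. The sketch below illustrates that per-node batching with a hypothetical attachBatch helper and an illustrative attachOptions struct modelled on the ReadOnly/name/LUN fields visible in the diskMap entries.

    package main

    import (
    	"fmt"
    	"sync"
    )

    // attachOptions mirrors the per-disk options visible in the diskMap entries;
    // field names are illustrative.
    type attachOptions struct {
    	DiskName string
    	ReadOnly bool
    	LUN      int32
    }

    // attachBatch is a hypothetical stand-in for the single VMSS VM update that
    // attaches every disk queued for one node.
    func attachBatch(node string, disks map[string]attachOptions) error {
    	fmt.Printf("updating %s with %d disk(s)\n", node, len(disks))
    	return nil
    }

    func main() {
    	// Disks queued per node, as in the three concurrent updates at 09:18:54.
    	queued := map[string]map[string]attachOptions{
    		"k8s-agentpool1-42137015-vmss000000": {"pvc-f03f5f57": {DiskName: "pvc-f03f5f57", LUN: 2}},
    		"k8s-agentpool1-42137015-vmss000001": {"pvc-7edfbcce": {DiskName: "pvc-7edfbcce", LUN: 1}},
    		"k8s-agentpool1-42137015-vmss000002": {"pvc-afba2e46": {DiskName: "pvc-afba2e46", LUN: 1}},
    	}
    	var wg sync.WaitGroup
    	for node, disks := range queued {
    		wg.Add(1)
    		go func(node string, disks map[string]attachOptions) { // one update per node, nodes run in parallel
    			defer wg.Done()
    			_ = attachBatch(node, disks)
    		}(node, disks)
    	}
    	wg.Wait()
    }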
I0513 09:19:04.836224 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-7edfbcce-3d1c-4b32-a19b-397a9d1d64cc to node k8s-agentpool1-42137015-vmss000001 successfully I0513 09:19:04.836267 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=10.343606213 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-mfxpbga4" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-7edfbcce-3d1c-4b32-a19b-397a9d1d64cc" node="k8s-agentpool1-42137015-vmss000001" result_code="succeeded" I0513 09:19:04.836282 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} I0513 09:19:04.856004 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-afba2e46-797e-43aa-b8c1-5f48be87747d attached to node k8s-agentpool1-42137015-vmss000002. I0513 09:19:04.856038 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mfxpbga4/providers/Microsoft.Compute/disks/pvc-afba2e46-797e-43aa-b8c1-5f48be87747d to node k8s-agentpool1-42137015-vmss000002 successfully ... skipping 19 lines ... Platform: linux/amd64 Topology Key: topology.test.csi.azure.com/zone Streaming logs below: I0513 08:19:49.372939 1 azuredisk.go:171] driver userAgent: test.csi.azure.com/v1.19.0-9480cc27b0ee3e0de9a15e6967f197e793523987 gc/go1.18.1 (amd64-linux) e2e-test I0513 08:19:49.373402 1 azure_disk_utils.go:159] reading cloud config from secret kube-system/azure-cloud-provider W0513 08:19:49.398337 1 azure_disk_utils.go:166] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0513 08:19:49.398818 1 azure_disk_utils.go:171] could not read cloud config from secret kube-system/azure-cloud-provider I0513 08:19:49.398833 1 azure_disk_utils.go:181] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0513 08:19:49.398875 1 azure_disk_utils.go:189] read cloud config from file: /etc/kubernetes/azure.json successfully I0513 08:19:49.399577 1 azure_auth.go:245] Using AzurePublicCloud environment I0513 08:19:49.399601 1 azure_auth.go:96] azure: using managed identity extension to retrieve access token I0513 08:19:49.399607 1 azure_auth.go:102] azure: using User Assigned MSI ID to retrieve access token I0513 08:19:49.399637 1 azure_auth.go:113] azure: User Assigned MSI ID is client ID. Resource ID parsing error: %+vparsing failed for 0000414c-5950-4a10-a61f-5d202a75cd00. 
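The startup sequence above (try the kube-system/azure-cloud-provider secret, then fall back to the file named by AZURE_CREDENTIAL_FILE, default /etc/kubernetes/azure.json) is a common pattern for sourcing the Azure cloud config. A rough client-go sketch of that fallback follows; the secret data key is an assumption and this is not the driver's actual implementation.

package main

import (
	"context"
	"log"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

const defaultCredFile = "/etc/kubernetes/azure.json"

// loadCloudConfig tries the in-cluster secret first and falls back to the
// credential file, mirroring the order seen in the log above.
func loadCloudConfig(clientset kubernetes.Interface) ([]byte, error) {
	secret, err := clientset.CoreV1().Secrets("kube-system").Get(
		context.TODO(), "azure-cloud-provider", metav1.GetOptions{})
	if err == nil {
		if cfg, ok := secret.Data["cloud-config"]; ok { // data key name is an assumption
			return cfg, nil
		}
	}
	log.Printf("could not read cloud config from secret: %v, falling back to file", err)

	path := os.Getenv("AZURE_CREDENTIAL_FILE")
	if path == "" {
		path = defaultCredFile
	}
	return os.ReadFile(path)
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	data, err := loadCloudConfig(kubernetes.NewForConfigOrDie(cfg))
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("read %d bytes of cloud config", len(data))
}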
Invalid resource Id format I0513 08:19:49.399673 1 azure.go:763] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 I0513 08:19:49.399712 1 azure_interfaceclient.go:70] Azure InterfacesClient (read ops) using rate limit config: QPS=6, bucket=20 I0513 08:19:49.399721 1 azure_interfaceclient.go:73] Azure InterfacesClient (write ops) using rate limit config: QPS=100, bucket=1000 I0513 08:19:49.399737 1 azure_vmsizeclient.go:68] Azure VirtualMachineSizesClient (read ops) using rate limit config: QPS=6, bucket=20 I0513 08:19:49.399752 1 azure_vmsizeclient.go:71] Azure VirtualMachineSizesClient (write ops) using rate limit config: QPS=100, bucket=1000 I0513 08:19:49.399774 1 azure_storageaccountclient.go:69] Azure StorageAccountClient (read ops) using rate limit config: QPS=6, bucket=20 ... skipping 3603 lines ... Platform: linux/amd64 Topology Key: topology.test.csi.azure.com/zone Streaming logs below: I0513 08:19:51.262614 1 azuredisk.go:171] driver userAgent: test.csi.azure.com/v1.19.0-9480cc27b0ee3e0de9a15e6967f197e793523987 gc/go1.18.1 (amd64-linux) e2e-test I0513 08:19:51.262968 1 azure_disk_utils.go:159] reading cloud config from secret kube-system/azure-cloud-provider W0513 08:19:51.284511 1 azure_disk_utils.go:166] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0513 08:19:51.284536 1 azure_disk_utils.go:171] could not read cloud config from secret kube-system/azure-cloud-provider I0513 08:19:51.284545 1 azure_disk_utils.go:181] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0513 08:19:51.284572 1 azure_disk_utils.go:189] read cloud config from file: /etc/kubernetes/azure.json successfully I0513 08:19:51.285136 1 azure_auth.go:245] Using AzurePublicCloud environment I0513 08:19:51.285157 1 azure_auth.go:96] azure: using managed identity extension to retrieve access token I0513 08:19:51.285164 1 azure_auth.go:102] azure: using User Assigned MSI ID to retrieve access token I0513 08:19:51.285201 1 azure_auth.go:113] azure: User Assigned MSI ID is client ID. Resource ID parsing error: %+vparsing failed for 0000414c-5950-4a10-a61f-5d202a75cd00. Invalid resource Id format I0513 08:19:51.285241 1 azure.go:763] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 I0513 08:19:51.285527 1 azure_interfaceclient.go:70] Azure InterfacesClient (read ops) using rate limit config: QPS=6, bucket=20 I0513 08:19:51.285544 1 azure_interfaceclient.go:73] Azure InterfacesClient (write ops) using rate limit config: QPS=100, bucket=1000 I0513 08:19:51.285564 1 azure_vmsizeclient.go:68] Azure VirtualMachineSizesClient (read ops) using rate limit config: QPS=6, bucket=20 I0513 08:19:51.285580 1 azure_vmsizeclient.go:71] Azure VirtualMachineSizesClient (write ops) using rate limit config: QPS=100, bucket=1000 I0513 08:19:51.285608 1 azure_storageaccountclient.go:69] Azure StorageAccountClient (read ops) using rate limit config: QPS=6, bucket=20 ... skipping 2387 lines ... 
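The per-client "QPS=…, bucket=…" pairs logged above describe token-bucket rate limiters with separate read and write budgets. A minimal sketch of that shape using client-go's flowcontrol package, with the limits copied from the InterfacesClient lines (this mirrors the configuration only, not the cloud provider's actual wiring):

package main

import (
	"fmt"

	"k8s.io/client-go/util/flowcontrol"
)

// rateLimitedClient pairs a read and a write token-bucket limiter, matching
// the "(read ops)" / "(write ops)" split reported in the log above.
type rateLimitedClient struct {
	name   string
	reader flowcontrol.RateLimiter
	writer flowcontrol.RateLimiter
}

func newRateLimitedClient(name string, readQPS float32, readBucket int, writeQPS float32, writeBucket int) *rateLimitedClient {
	return &rateLimitedClient{
		name:   name,
		reader: flowcontrol.NewTokenBucketRateLimiter(readQPS, readBucket),
		writer: flowcontrol.NewTokenBucketRateLimiter(writeQPS, writeBucket),
	}
}

func main() {
	// Values copied from the InterfacesClient lines in the log: reads at
	// QPS=6/bucket=20, writes at QPS=100/bucket=1000.
	c := newRateLimitedClient("InterfacesClient", 6, 20, 100, 1000)
	fmt.Printf("%s read allowed now: %v, write allowed now: %v\n",
		c.name, c.reader.TryAccept(), c.writer.TryAccept())
}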
Platform: linux/amd64 Topology Key: topology.test.csi.azure.com/zone Streaming logs below: I0513 08:19:47.795763 1 azuredisk.go:171] driver userAgent: test.csi.azure.com/v1.19.0-9480cc27b0ee3e0de9a15e6967f197e793523987 gc/go1.18.1 (amd64-linux) e2e-test I0513 08:19:47.796186 1 azure_disk_utils.go:159] reading cloud config from secret kube-system/azure-cloud-provider W0513 08:19:47.818423 1 azure_disk_utils.go:166] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0513 08:19:47.818449 1 azure_disk_utils.go:171] could not read cloud config from secret kube-system/azure-cloud-provider I0513 08:19:47.818459 1 azure_disk_utils.go:181] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0513 08:19:47.818495 1 azure_disk_utils.go:189] read cloud config from file: /etc/kubernetes/azure.json successfully I0513 08:19:47.819094 1 azure_auth.go:245] Using AzurePublicCloud environment I0513 08:19:47.819120 1 azure_auth.go:96] azure: using managed identity extension to retrieve access token I0513 08:19:47.819127 1 azure_auth.go:102] azure: using User Assigned MSI ID to retrieve access token I0513 08:19:47.819176 1 azure_auth.go:113] azure: User Assigned MSI ID is client ID. Resource ID parsing error: %+vparsing failed for 0000414c-5950-4a10-a61f-5d202a75cd00. Invalid resource Id format I0513 08:19:47.819231 1 azure.go:763] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 I0513 08:19:47.819295 1 azure_interfaceclient.go:70] Azure InterfacesClient (read ops) using rate limit config: QPS=6, bucket=20 I0513 08:19:47.819306 1 azure_interfaceclient.go:73] Azure InterfacesClient (write ops) using rate limit config: QPS=100, bucket=1000 I0513 08:19:47.819327 1 azure_vmsizeclient.go:68] Azure VirtualMachineSizesClient (read ops) using rate limit config: QPS=6, bucket=20 I0513 08:19:47.819339 1 azure_vmsizeclient.go:71] Azure VirtualMachineSizesClient (write ops) using rate limit config: QPS=100, bucket=1000 I0513 08:19:47.819361 1 azure_storageaccountclient.go:69] Azure StorageAccountClient (read ops) using rate limit config: QPS=6, bucket=20 ... skipping 2858 lines ... 
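The "Resource ID parsing error ... Invalid resource Id format" lines above are consistent with a user-assigned identity configured by client ID (a bare GUID) rather than by full ARM resource ID: the value fails resource-ID parsing and is then treated as a client ID, which is what the preceding "User Assigned MSI ID is client ID" message reports. A simplified illustration of that distinction (hypothetical helper, not the cloud provider's code):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// looksLikeResourceID is a deliberately simple check: ARM resource IDs are
// slash-separated paths starting with /subscriptions/<id>/....
func looksLikeResourceID(id string) bool {
	return strings.HasPrefix(strings.ToLower(id), "/subscriptions/")
}

// guidRE matches a bare client-ID GUID, the form seen in the log above.
var guidRE = regexp.MustCompile(`^[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}$`)

func classifyUserAssignedID(id string) string {
	switch {
	case looksLikeResourceID(id):
		return "resource ID"
	case guidRE.MatchString(id):
		return "client ID" // the fallback reported in the log above
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(classifyUserAssignedID("0000414c-5950-4a10-a61f-5d202a75cd00")) // client ID
	fmt.Println(classifyUserAssignedID("/subscriptions/.../resourceGroups/rg/providers/..."))
}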
Platform: linux/amd64 Topology Key: topology.test.csi.azure.com/zone Streaming logs below: I0513 08:19:51.143366 1 azuredisk.go:171] driver userAgent: test.csi.azure.com/v1.19.0-9480cc27b0ee3e0de9a15e6967f197e793523987 gc/go1.18.1 (amd64-linux) e2e-test I0513 08:19:51.143971 1 azure_disk_utils.go:159] reading cloud config from secret kube-system/azure-cloud-provider W0513 08:19:51.165615 1 azure_disk_utils.go:166] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0513 08:19:51.165642 1 azure_disk_utils.go:171] could not read cloud config from secret kube-system/azure-cloud-provider I0513 08:19:51.165652 1 azure_disk_utils.go:181] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0513 08:19:51.165679 1 azure_disk_utils.go:189] read cloud config from file: /etc/kubernetes/azure.json successfully I0513 08:19:51.166462 1 azure_auth.go:245] Using AzurePublicCloud environment I0513 08:19:51.166488 1 azure_auth.go:96] azure: using managed identity extension to retrieve access token I0513 08:19:51.166495 1 azure_auth.go:102] azure: using User Assigned MSI ID to retrieve access token I0513 08:19:51.166659 1 azure_auth.go:113] azure: User Assigned MSI ID is client ID. Resource ID parsing error: %+vparsing failed for 0000414c-5950-4a10-a61f-5d202a75cd00. Invalid resource Id format I0513 08:19:51.166765 1 azure.go:763] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 I0513 08:19:51.166881 1 azure_interfaceclient.go:70] Azure InterfacesClient (read ops) using rate limit config: QPS=6, bucket=20 I0513 08:19:51.166929 1 azure_interfaceclient.go:73] Azure InterfacesClient (write ops) using rate limit config: QPS=100, bucket=1000 I0513 08:19:51.166951 1 azure_vmsizeclient.go:68] Azure VirtualMachineSizesClient (read ops) using rate limit config: QPS=6, bucket=20 I0513 08:19:51.166957 1 azure_vmsizeclient.go:71] Azure VirtualMachineSizesClient (write ops) using rate limit config: QPS=100, bucket=1000 I0513 08:19:51.166978 1 azure_storageaccountclient.go:69] Azure StorageAccountClient (read ops) using rate limit config: QPS=6, bucket=20 ... skipping 49 lines ... I0513 08:19:52.197838 1 nodeserver.go:352] NodeGetInfo, nodeName: k8s-master-42137015-0, failureDomain: 0 I0513 08:19:52.197855 1 nodeserver.go:410] got a matching size in getMaxDataDiskCount, VM Size: STANDARD_D2S_V3, MaxDataDiskCount: 4 I0513 08:19:52.197867 1 utils.go:84] GRPC response: {"accessible_topology":{"segments":{"topology.test.csi.azure.com/zone":""}},"max_volumes_per_node":4,"node_id":"k8s-master-42137015-0"} print out csi-test-node-win logs ... ====================================================================================== No resources found in kube-system namespace. 
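NodeGetInfo above maps the node's VM size to a data-disk limit (STANDARD_D2S_V3 gives MaxDataDiskCount 4) and reports it as max_volumes_per_node alongside the topology segment. A minimal sketch of that response shape using the CSI Go bindings; the size table holds only the value seen in the log and the unknown-size default is illustrative, not the driver's behavior.

package main

import (
	"fmt"
	"strings"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
)

// maxDataDiskCount holds only the size seen in the log; a real table would
// cover every Azure VM size.
var maxDataDiskCount = map[string]int64{
	"STANDARD_D2S_V3": 4,
}

const topologyKey = "topology.test.csi.azure.com/zone"

func nodeGetInfo(nodeName, vmSize, zone string) *csi.NodeGetInfoResponse {
	maxVolumes := int64(16) // illustrative default when the size is not in the table
	if n, ok := maxDataDiskCount[strings.ToUpper(vmSize)]; ok {
		maxVolumes = n
	}
	return &csi.NodeGetInfoResponse{
		NodeId:            nodeName,
		MaxVolumesPerNode: maxVolumes,
		AccessibleTopology: &csi.Topology{
			Segments: map[string]string{topologyKey: zone},
		},
	}
}

func main() {
	resp := nodeGetInfo("k8s-master-42137015-0", "Standard_D2s_v3", "")
	fmt.Printf("node_id=%s max_volumes_per_node=%d segments=%v\n",
		resp.GetNodeId(), resp.GetMaxVolumesPerNode(), resp.GetAccessibleTopology().GetSegments())
}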
make: *** [Makefile:260: e2e-test] Error 1 2022/05/13 09:19:20 process.go:155: Step 'make e2e-test' finished in 1h9m23.377966581s 2022/05/13 09:19:20 aksengine_helpers.go:426: downloading /root/tmp4042645124/log-dump.sh from https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump.sh 2022/05/13 09:19:20 util.go:71: curl https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump.sh 2022/05/13 09:19:21 process.go:153: Running: chmod +x /root/tmp4042645124/log-dump.sh 2022/05/13 09:19:21 process.go:155: Step 'chmod +x /root/tmp4042645124/log-dump.sh' finished in 979.874µs 2022/05/13 09:19:21 aksengine_helpers.go:426: downloading /root/tmp4042645124/log-dump-daemonset.yaml from https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump-daemonset.yaml ... skipping 64 lines ... ssh key file /root/.ssh/id_rsa does not exist. Exiting. 2022/05/13 09:20:24 process.go:155: Step 'bash -c /root/tmp4042645124/win-ci-logs-collector.sh kubetest-mfxpbga4.westeurope.cloudapp.azure.com /root/tmp4042645124 /root/.ssh/id_rsa' finished in 3.695878ms 2022/05/13 09:20:24 aksengine.go:1141: Deleting resource group: kubetest-mfxpbga4. 2022/05/13 09:26:33 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml. 2022/05/13 09:26:33 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}" 2022/05/13 09:26:34 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 334.555354ms 2022/05/13 09:26:34 main.go:331: Something went wrong: encountered 1 errors: [error during make e2e-test: exit status 2] + EXIT_VALUE=1 + set +o xtrace Cleaning up after docker in docker. ================================================================================ Cleaning up after docker 654e736b240e ... skipping 4 lines ...