PR | andyzhangx: chore: switch master branch to use v1.19.0
Result | FAILURE
Tests | 1 failed / 13 succeeded
Started |
Elapsed | 1h10m
Revision | 91694ad3fc47b5482aeb80f7f3b9511c64a061da
Refs | 1329
job-version | v1.25.0-alpha.0.480+9720d130e466f4
kubetest-version |
revision | v1.25.0-alpha.0.480+9720d130e466f4
error during make e2e-test: exit status 2
from junit_runner.xml
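The failure summarized above is the exit status of the make target the job drives. A minimal sketch for reproducing it locally, assuming the same checkout path that appears in the helm debug output further down and a kubeconfig pointing at an equivalent test cluster (both assumptions, not taken from junit_runner.xml):
$ cd /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver   # repo root, inferred from the CHART PATH in the build log below
$ make e2e-test                                           # same target reported here as failing with exit status 2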
kubetest Check APIReachability
kubetest Deferred TearDown
kubetest DumpClusterLogs
kubetest GetDeployer
kubetest IsUp
kubetest Prepare
kubetest TearDown
kubetest TearDown Previous
kubetest Timeout
kubetest Up
kubetest kubectl version
kubetest list nodes
kubetest test setup
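The build log below reports a service-account refresh failure ("invalid_grant: Invalid JWT Signature.") during test setup, before the driver chart is installed. A hedged sketch of how such credentials are usually re-activated in a CI shell; KEY_FILE is a placeholder for a service-account JSON key, not a path taken from this job:
$ gcloud auth activate-service-account --key-file="${KEY_FILE}"   # an invalid JWT signature often means the key was rotated or revoked, so a fresh key is needed
$ gcloud auth list                                                # confirm which account is now active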
Docker in Docker enabled, initializing... ================================================================================ Starting Docker: docker. Waiting for docker to be ready, sleeping for 1 seconds. ================================================================================ Done setting up docker in docker. ERROR: (gcloud.auth.activate-service-account) There was a problem refreshing your current auth tokens: ('invalid_grant: Invalid JWT Signature.', {'error': 'invalid_grant', 'error_description': 'Invalid JWT Signature.'}) Please run: $ gcloud auth login to obtain new credentials. ... skipping 242 lines ... --set image.azuredisk.repository=k8sprow.azurecr.io/azuredisk-csi --set image.azuredisk.tag=v1.19.0-9480cc27b0ee3e0de9a15e6967f197e793523987 --set image.azuredisk.pullPolicy=Always --set driver.userAgentSuffix="e2e-test" --set controller.disableAvailabilitySetNodes=true --set controller.replicas=1 --set driver.name=test.csi.azure.com --set controller.name=csi-test-controller --set linux.dsName=csi-test-node --set windows.dsName=csi-test-node-win --set controller.vmssCacheTTLInSeconds=60 \ --set snapshot.enabled=true \ --set cloud=AzurePublicCloud install.go:178: [debug] Original chart version: "" install.go:195: [debug] CHART PATH: /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/charts/latest/azuredisk-csi-driver I0513 11:50:29.014959 907 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0513 11:50:29.015005 907 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0513 11:50:29.843607 907 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0513 11:50:29.843640 907 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0513 11:50:29.959727 907 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0513 11:50:29.959769 907 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0513 11:50:30.079980 907 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0513 11:50:30.080014 907 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0513 11:50:30.200248 907 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0513 11:50:30.200279 907 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0513 11:50:30.318059 907 
request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0513 11:50:30.318096 907 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0513 11:50:30.438143 907 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0513 11:50:30.438178 907 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0513 11:50:30.555217 907 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0513 11:50:30.555253 907 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0513 11:50:30.672847 907 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0513 11:50:30.672887 907 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0513 11:50:30.790095 907 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0513 11:50:30.790147 907 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0513 11:50:30.907173 907 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0513 11:50:30.907310 907 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0513 11:50:31.024617 907 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0513 11:50:31.024647 907 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0513 11:50:31.144646 907 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0513 11:50:31.144690 907 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0513 11:50:31.262237 907 request.go:1372] body was not decodable (unable to check for Status): couldn't get 
version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0513 11:50:31.262270 907 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0513 11:50:31.379089 907 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0513 11:50:31.379120 907 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0513 11:50:31.496083 907 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0513 11:50:31.496117 907 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0513 11:50:31.613286 907 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0513 11:50:31.613328 907 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0513 11:50:31.730839 907 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0513 11:50:31.730875 907 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0513 11:50:31.849322 907 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0513 11:50:31.849360 907 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0513 11:50:31.965274 907 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0513 11:50:31.965304 907 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0513 11:50:32.081113 907 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0513 11:50:32.081153 907 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0513 11:50:32.197463 907 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of 
type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0513 11:50:32.197499 907 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0513 11:50:32.314293 907 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0513 11:50:32.314442 907 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request client.go:128: [debug] creating 27 resource(s) wait.go:48: [debug] beginning wait for 27 resources with timeout of 15m0s ready.go:304: [debug] DaemonSet is not ready: kube-system/csi-test-node. 0 out of 3 expected pods are ready ready.go:304: [debug] DaemonSet is not ready: kube-system/csi-test-node. 0 out of 3 expected pods are ready ready.go:304: [debug] DaemonSet is not ready: kube-system/csi-test-node. 0 out of 3 expected pods are ready ... skipping 366 lines ... type: string type: object oneOf: - required: ["persistentVolumeClaimName"] - required: ["volumeSnapshotContentName"] volumeSnapshotClassName: description: 'VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default Volume SnapshotClasses: one default per CSI Driver. If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exist for a given CSI Driver and more than one have been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field.' type: string required: - source type: object status: description: status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object. ... skipping 2 lines ... description: 'boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.' type: string creationTime: description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown. 
format: date-time type: string error: description: error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers(i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurrs during the snapshot creation. Upon success, this error field will be cleared. properties: message: description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.' type: string time: description: time is the timestamp when the error was encountered. format: date-time type: string type: object readyToUse: description: readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. type: boolean restoreSize: type: string description: restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ x-kubernetes-int-or-string: true type: object required: - spec type: object ... skipping 60 lines ... type: string volumeSnapshotContentName: description: volumeSnapshotContentName specifies the name of a pre-existing VolumeSnapshotContent object representing an existing volume snapshot. This field should be set if the snapshot already exists and only needs a representation in Kubernetes. This field is immutable. type: string type: object volumeSnapshotClassName: description: 'VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default Volume SnapshotClasses: one default per CSI Driver. If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exist for a given CSI Driver and more than one have been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field.' type: string required: - source type: object status: description: status represents the current information of a snapshot. 
Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object. ... skipping 2 lines ... description: 'boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.' type: string creationTime: description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown. format: date-time type: string error: description: error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers(i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurrs during the snapshot creation. Upon success, this error field will be cleared. properties: message: description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.' type: string time: description: time is the timestamp when the error was encountered. format: date-time type: string type: object readyToUse: description: readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. type: boolean restoreSize: type: string description: restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. 
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ x-kubernetes-int-or-string: true type: object required: - spec type: object ... skipping 254 lines ... description: status represents the current information of a snapshot. properties: creationTime: description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. On Unix, the command `date +%s%N` returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC. format: int64 type: integer error: description: error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared. properties: message: description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.' type: string time: description: time is the timestamp when the error was encountered. format: date-time type: string type: object readyToUse: description: readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. type: boolean restoreSize: description: restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. format: int64 minimum: 0 type: integer snapshotHandle: description: snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress. type: string type: object required: - spec type: object served: true ... skipping 108 lines ... description: status represents the current information of a snapshot. properties: creationTime: description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. 
For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. On Unix, the command `date +%s%N` returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC. format: int64 type: integer error: description: error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared. properties: message: description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.' type: string time: description: time is the timestamp when the error was encountered. format: date-time type: string type: object readyToUse: description: readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. type: boolean restoreSize: description: restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. format: int64 minimum: 0 type: integer snapshotHandle: description: snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress. type: string type: object required: - spec type: object served: true ... skipping 861 lines ... image: "mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.4.0" args: - "-csi-address=$(ADDRESS)" - "-v=2" - "-leader-election" - "--leader-election-namespace=kube-system" - '-handle-volume-inuse-error=false' - '-feature-gates=RecoverVolumeExpansionFailure=true' - "-timeout=240s" env: - name: ADDRESS value: /csi/csi.sock volumeMounts: ... skipping 200 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to use a volume in a pod with mismatched mode [Slow] [BeforeEach][0m [90mtest/e2e/storage/testsuites/volumemode.go:299[0m [36mDriver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping[0m test/e2e/storage/external/external.go:262 [90m------------------------------[0m ... skipping 59 lines ... 
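The CRD text dumped above describes how the snapshot controller and CSI snapshotter sidecar populate snapshot status (boundVolumeSnapshotContentName, creationTime, readyToUse, restoreSize). A hedged sketch of inspecting those same fields on a live cluster; "snapshot-example" and "default" are placeholder names:
$ kubectl get volumesnapshot snapshot-example -n default -o jsonpath='{.status.readyToUse}'                      # filled from the CSI CreateSnapshot/ListSnapshots calls described above
$ kubectl get volumesnapshot snapshot-example -n default -o jsonpath='{.status.boundVolumeSnapshotContentName}'  # set once the snapshot is bound to a VolumeSnapshotContent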
[36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:242[0m [36mDriver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping[0m test/e2e/storage/external/external.go:262 [90m------------------------------[0m ... skipping 169 lines ... [1mSTEP[0m: Building a namespace api object, basename topology W0513 11:51:10.317695 977 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 13 11:51:10.317: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 13 11:51:10.425: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies test/e2e/storage/testsuites/topology.go:194 May 13 11:51:10.852: INFO: Driver didn't provide topology keys -- skipping [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology test/e2e/framework/framework.go:188 May 13 11:51:10.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "topology-3941" for this suite. [36m[1mS [SKIPPING] [1.411 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (delayed binding)] topology [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [Measurement][0m [90mtest/e2e/storage/testsuites/topology.go:194[0m [36mDriver didn't provide topology keys -- skipping[0m test/e2e/storage/testsuites/topology.go:126 [90m------------------------------[0m ... skipping 42 lines ... test/e2e/framework/framework.go:188 May 13 11:51:12.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "volumelimits-9769" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits","total":34,"completed":1,"skipped":28,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] subPath[0m [1mshould support existing single file [LinuxOnly][0m [37mtest/e2e/storage/testsuites/subpath.go:221[0m ... skipping 17 lines ... May 13 11:51:12.319: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.com6j8rf] to have phase Bound May 13 11:51:12.427: INFO: PersistentVolumeClaim test.csi.azure.com6j8rf found but phase is Pending instead of Bound. May 13 11:51:14.537: INFO: PersistentVolumeClaim test.csi.azure.com6j8rf found but phase is Pending instead of Bound. 
May 13 11:51:16.646: INFO: PersistentVolumeClaim test.csi.azure.com6j8rf found and phase=Bound (4.326451344s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-5l8m [1mSTEP[0m: Creating a pod to test subpath May 13 11:51:16.973: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-5l8m" in namespace "provisioning-8335" to be "Succeeded or Failed" May 13 11:51:17.081: INFO: Pod "pod-subpath-test-dynamicpv-5l8m": Phase="Pending", Reason="", readiness=false. Elapsed: 107.929631ms May 13 11:51:19.190: INFO: Pod "pod-subpath-test-dynamicpv-5l8m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217040858s May 13 11:51:21.298: INFO: Pod "pod-subpath-test-dynamicpv-5l8m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.325449697s May 13 11:51:23.407: INFO: Pod "pod-subpath-test-dynamicpv-5l8m": Phase="Pending", Reason="", readiness=false. Elapsed: 6.434144115s May 13 11:51:25.516: INFO: Pod "pod-subpath-test-dynamicpv-5l8m": Phase="Pending", Reason="", readiness=false. Elapsed: 8.54303345s May 13 11:51:27.625: INFO: Pod "pod-subpath-test-dynamicpv-5l8m": Phase="Pending", Reason="", readiness=false. Elapsed: 10.65227615s May 13 11:51:29.734: INFO: Pod "pod-subpath-test-dynamicpv-5l8m": Phase="Pending", Reason="", readiness=false. Elapsed: 12.761015289s May 13 11:51:31.843: INFO: Pod "pod-subpath-test-dynamicpv-5l8m": Phase="Pending", Reason="", readiness=false. Elapsed: 14.870378967s May 13 11:51:33.953: INFO: Pod "pod-subpath-test-dynamicpv-5l8m": Phase="Pending", Reason="", readiness=false. Elapsed: 16.979870117s May 13 11:51:36.061: INFO: Pod "pod-subpath-test-dynamicpv-5l8m": Phase="Pending", Reason="", readiness=false. Elapsed: 19.088343101s May 13 11:51:38.173: INFO: Pod "pod-subpath-test-dynamicpv-5l8m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.200001782s [1mSTEP[0m: Saw pod success May 13 11:51:38.173: INFO: Pod "pod-subpath-test-dynamicpv-5l8m" satisfied condition "Succeeded or Failed" May 13 11:51:38.282: INFO: Trying to get logs from node k8s-agentpool1-19417709-vmss000000 pod pod-subpath-test-dynamicpv-5l8m container test-container-subpath-dynamicpv-5l8m: <nil> [1mSTEP[0m: delete the pod May 13 11:51:38.532: INFO: Waiting for pod pod-subpath-test-dynamicpv-5l8m to disappear May 13 11:51:38.640: INFO: Pod pod-subpath-test-dynamicpv-5l8m no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-5l8m May 13 11:51:38.640: INFO: Deleting pod "pod-subpath-test-dynamicpv-5l8m" in namespace "provisioning-8335" ... skipping 29 lines ... 
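The subPath test above polls the PVC until it reports phase Bound and then waits for the test pod to reach "Succeeded or Failed". A hedged sketch of watching the same transitions by hand; <pvc>, <pod>, and <namespace> stand in for the generated names in the log:
$ kubectl get pvc <pvc> -n <namespace> -w                              # watch the claim move from Pending to Bound
$ kubectl get pod <pod> -n <namespace> -o jsonpath='{.status.phase}'   # poll the pod phase the test suite waits on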
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should support existing single file [LinuxOnly] [90mtest/e2e/storage/testsuites/subpath.go:221[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":36,"completed":1,"skipped":96,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] subPath[0m [1mshould fail if non-existent subpath is outside the volume [Slow][LinuxOnly][0m [37mtest/e2e/storage/testsuites/subpath.go:269[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath test/e2e/framework/framework.go:187 ... skipping 2 lines ... [1mSTEP[0m: Building a namespace api object, basename provisioning W0513 11:51:10.374630 942 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 13 11:51:10.374: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 13 11:51:10.483: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] test/e2e/storage/testsuites/subpath.go:269 May 13 11:51:10.913: INFO: Creating resource for dynamic PV May 13 11:51:10.913: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(test.csi.azure.com) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass provisioning-7556-e2e-sc6rcmn [1mSTEP[0m: creating a claim May 13 11:51:11.022: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil May 13 11:51:11.137: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comstnqx] to have phase Bound May 13 11:51:11.245: INFO: PersistentVolumeClaim test.csi.azure.comstnqx found but phase is Pending instead of Bound. May 13 11:51:13.355: INFO: PersistentVolumeClaim test.csi.azure.comstnqx found but phase is Pending instead of Bound. May 13 11:51:15.463: INFO: PersistentVolumeClaim test.csi.azure.comstnqx found and phase=Bound (4.32664153s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-pspd [1mSTEP[0m: Checking for subpath error in container status May 13 11:51:44.023: INFO: Deleting pod "pod-subpath-test-dynamicpv-pspd" in namespace "provisioning-7556" May 13 11:51:44.133: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-pspd" to be fully deleted [1mSTEP[0m: Deleting pod May 13 11:51:46.351: INFO: Deleting pod "pod-subpath-test-dynamicpv-pspd" in namespace "provisioning-7556" [1mSTEP[0m: Deleting pvc May 13 11:51:46.463: INFO: Deleting PersistentVolumeClaim "test.csi.azure.comstnqx" ... skipping 34 lines ... 
[32m• [SLOW TEST:170.164 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [90mtest/e2e/storage/testsuites/subpath.go:269[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]","total":33,"completed":1,"skipped":111,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource][0m [0mvolume snapshot controller[0m [90m[0m [1mshould check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)[0m [37mtest/e2e/storage/testsuites/snapshottable.go:278[0m ... skipping 20 lines ... May 13 11:51:11.123: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comlbcc9] to have phase Bound May 13 11:51:11.231: INFO: PersistentVolumeClaim test.csi.azure.comlbcc9 found but phase is Pending instead of Bound. May 13 11:51:13.340: INFO: PersistentVolumeClaim test.csi.azure.comlbcc9 found but phase is Pending instead of Bound. May 13 11:51:15.449: INFO: PersistentVolumeClaim test.csi.azure.comlbcc9 found and phase=Bound (4.325427331s) [1mSTEP[0m: [init] starting a pod to use the claim [1mSTEP[0m: [init] check pod success May 13 11:51:15.902: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-h9hss" in namespace "snapshotting-7618" to be "Succeeded or Failed" May 13 11:51:16.010: INFO: Pod "pvc-snapshottable-tester-h9hss": Phase="Pending", Reason="", readiness=false. Elapsed: 108.012027ms May 13 11:51:18.119: INFO: Pod "pvc-snapshottable-tester-h9hss": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217322317s May 13 11:51:20.229: INFO: Pod "pvc-snapshottable-tester-h9hss": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326628848s May 13 11:51:22.338: INFO: Pod "pvc-snapshottable-tester-h9hss": Phase="Pending", Reason="", readiness=false. Elapsed: 6.435737637s May 13 11:51:24.448: INFO: Pod "pvc-snapshottable-tester-h9hss": Phase="Pending", Reason="", readiness=false. Elapsed: 8.545769892s May 13 11:51:26.558: INFO: Pod "pvc-snapshottable-tester-h9hss": Phase="Pending", Reason="", readiness=false. Elapsed: 10.656163976s May 13 11:51:28.674: INFO: Pod "pvc-snapshottable-tester-h9hss": Phase="Pending", Reason="", readiness=false. Elapsed: 12.772443767s May 13 11:51:30.784: INFO: Pod "pvc-snapshottable-tester-h9hss": Phase="Pending", Reason="", readiness=false. Elapsed: 14.881746823s May 13 11:51:32.894: INFO: Pod "pvc-snapshottable-tester-h9hss": Phase="Pending", Reason="", readiness=false. Elapsed: 16.992147487s May 13 11:51:35.005: INFO: Pod "pvc-snapshottable-tester-h9hss": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 19.102716008s [1mSTEP[0m: Saw pod success May 13 11:51:35.005: INFO: Pod "pvc-snapshottable-tester-h9hss" satisfied condition "Succeeded or Failed" [1mSTEP[0m: [init] checking the claim May 13 11:51:35.113: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comlbcc9] to have phase Bound May 13 11:51:35.222: INFO: PersistentVolumeClaim test.csi.azure.comlbcc9 found and phase=Bound (108.664912ms) [1mSTEP[0m: [init] checking the PV [1mSTEP[0m: [init] deleting the pod May 13 11:51:35.583: INFO: Pod pvc-snapshottable-tester-h9hss has the following logs: ... skipping 12 lines ... May 13 11:51:40.793: INFO: received snapshotStatus map[boundVolumeSnapshotContentName:snapcontent-5aff4cca-dadf-4b85-96cd-774750f7f628 creationTime:2022-05-13T11:51:36Z readyToUse:true restoreSize:5Gi] May 13 11:51:40.794: INFO: snapshotContentName snapcontent-5aff4cca-dadf-4b85-96cd-774750f7f628 [1mSTEP[0m: checking the snapshot [1mSTEP[0m: checking the SnapshotContent [1mSTEP[0m: Modifying source data test [1mSTEP[0m: modifying the data in the source PVC May 13 11:51:41.231: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-gkhct" in namespace "snapshotting-7618" to be "Succeeded or Failed" May 13 11:51:41.339: INFO: Pod "pvc-snapshottable-data-tester-gkhct": Phase="Pending", Reason="", readiness=false. Elapsed: 108.149256ms May 13 11:51:43.449: INFO: Pod "pvc-snapshottable-data-tester-gkhct": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218074719s May 13 11:51:45.558: INFO: Pod "pvc-snapshottable-data-tester-gkhct": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32728587s May 13 11:51:47.668: INFO: Pod "pvc-snapshottable-data-tester-gkhct": Phase="Pending", Reason="", readiness=false. Elapsed: 6.436617997s May 13 11:51:49.778: INFO: Pod "pvc-snapshottable-data-tester-gkhct": Phase="Pending", Reason="", readiness=false. Elapsed: 8.546543803s May 13 11:51:51.887: INFO: Pod "pvc-snapshottable-data-tester-gkhct": Phase="Pending", Reason="", readiness=false. Elapsed: 10.656315794s ... skipping 41 lines ... May 13 11:53:20.503: INFO: Pod "pvc-snapshottable-data-tester-gkhct": Phase="Pending", Reason="", readiness=false. Elapsed: 1m39.271522619s May 13 11:53:22.612: INFO: Pod "pvc-snapshottable-data-tester-gkhct": Phase="Pending", Reason="", readiness=false. Elapsed: 1m41.38077977s May 13 11:53:24.722: INFO: Pod "pvc-snapshottable-data-tester-gkhct": Phase="Pending", Reason="", readiness=false. Elapsed: 1m43.490405076s May 13 11:53:26.831: INFO: Pod "pvc-snapshottable-data-tester-gkhct": Phase="Pending", Reason="", readiness=false. Elapsed: 1m45.599642418s May 13 11:53:28.939: INFO: Pod "pvc-snapshottable-data-tester-gkhct": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 1m47.708214572s [1mSTEP[0m: Saw pod success May 13 11:53:28.939: INFO: Pod "pvc-snapshottable-data-tester-gkhct" satisfied condition "Succeeded or Failed" May 13 11:53:29.180: INFO: Pod pvc-snapshottable-data-tester-gkhct has the following logs: May 13 11:53:29.180: INFO: Deleting pod "pvc-snapshottable-data-tester-gkhct" in namespace "snapshotting-7618" May 13 11:53:29.297: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-gkhct" to be fully deleted [1mSTEP[0m: creating a pvc from the snapshot [1mSTEP[0m: starting a pod to use the snapshot May 13 11:54:13.846: INFO: Running '/usr/local/bin/kubectl --server=https://kubetest-s2gs5bqg.westeurope.cloudapp.azure.com --kubeconfig=/root/tmp1431985631/kubeconfig/kubeconfig.westeurope.json --namespace=snapshotting-7618 exec restored-pvc-tester-rkfqv --namespace=snapshotting-7618 -- cat /mnt/test/data' ... skipping 33 lines ... May 13 11:54:40.154: INFO: volumesnapshotcontents snapcontent-5aff4cca-dadf-4b85-96cd-774750f7f628 has been found and is not deleted May 13 11:54:41.263: INFO: volumesnapshotcontents snapcontent-5aff4cca-dadf-4b85-96cd-774750f7f628 has been found and is not deleted May 13 11:54:42.372: INFO: volumesnapshotcontents snapcontent-5aff4cca-dadf-4b85-96cd-774750f7f628 has been found and is not deleted May 13 11:54:43.482: INFO: volumesnapshotcontents snapcontent-5aff4cca-dadf-4b85-96cd-774750f7f628 has been found and is not deleted May 13 11:54:44.591: INFO: volumesnapshotcontents snapcontent-5aff4cca-dadf-4b85-96cd-774750f7f628 has been found and is not deleted May 13 11:54:45.700: INFO: volumesnapshotcontents snapcontent-5aff4cca-dadf-4b85-96cd-774750f7f628 has been found and is not deleted May 13 11:54:46.700: INFO: WaitUntil failed after reaching the timeout 30s [AfterEach] volume snapshot controller test/e2e/storage/testsuites/snapshottable.go:172 May 13 11:54:46.809: INFO: Error getting logs for pod restored-pvc-tester-rkfqv: the server could not find the requested resource (get pods restored-pvc-tester-rkfqv) May 13 11:54:46.809: INFO: Deleting pod "restored-pvc-tester-rkfqv" in namespace "snapshotting-7618" May 13 11:54:46.927: INFO: deleting claim "snapshotting-7618"/"pvc-cvxb6" May 13 11:54:47.035: INFO: deleting snapshot "snapshotting-7618"/"snapshot-8xxrq" May 13 11:54:47.145: INFO: deleting snapshot content "snapcontent-5aff4cca-dadf-4b85-96cd-774750f7f628" May 13 11:54:47.484: INFO: Waiting up to 5m0s for volumesnapshotcontents snapcontent-5aff4cca-dadf-4b85-96cd-774750f7f628 to be deleted May 13 11:54:47.592: INFO: volumesnapshotcontents snapcontent-5aff4cca-dadf-4b85-96cd-774750f7f628 has been found and is not deleted ... skipping 27 lines ... 
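In the retain-policy case above, the VolumeSnapshotContent keeps being found after the snapshot is deleted until the 30s WaitUntil times out. A hedged way to check which deletion policy a content object carries (the object name is copied from the log above and may no longer exist when this is run):
$ kubectl get volumesnapshotcontent snapcontent-5aff4cca-dadf-4b85-96cd-774750f7f628 -o jsonpath='{.spec.deletionPolicy}'   # Retain vs Delete decides whether the content object outlives the VolumeSnapshot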
[90mtest/e2e/storage/testsuites/snapshottable.go:113[0m [90mtest/e2e/storage/testsuites/snapshottable.go:176[0m should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent) [90mtest/e2e/storage/testsuites/snapshottable.go:278[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)","total":45,"completed":1,"skipped":94,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (block volmode)] provisioning[0m [1mshould provision storage with pvc data source[0m [37mtest/e2e/storage/testsuites/provisioning.go:421[0m ... skipping 103 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] provisioning [90mtest/e2e/storage/framework/testsuite.go:50[0m should provision storage with pvc data source [90mtest/e2e/storage/testsuites/provisioning.go:421[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source","total":34,"completed":2,"skipped":35,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource][0m [0mvolume snapshot controller[0m [90m[0m [1mshould check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)[0m [37mtest/e2e/storage/testsuites/snapshottable.go:278[0m ... skipping 20 lines ... May 13 11:51:11.184: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comqqqmf] to have phase Bound May 13 11:51:11.292: INFO: PersistentVolumeClaim test.csi.azure.comqqqmf found but phase is Pending instead of Bound. May 13 11:51:13.402: INFO: PersistentVolumeClaim test.csi.azure.comqqqmf found but phase is Pending instead of Bound. May 13 11:51:15.516: INFO: PersistentVolumeClaim test.csi.azure.comqqqmf found and phase=Bound (4.33199285s) [1mSTEP[0m: [init] starting a pod to use the claim [1mSTEP[0m: [init] check pod success May 13 11:51:15.951: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-2w6wn" in namespace "snapshotting-6374" to be "Succeeded or Failed" May 13 11:51:16.064: INFO: Pod "pvc-snapshottable-tester-2w6wn": Phase="Pending", Reason="", readiness=false. 
Elapsed: 112.946839ms May 13 11:51:18.173: INFO: Pod "pvc-snapshottable-tester-2w6wn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222855061s May 13 11:51:20.283: INFO: Pod "pvc-snapshottable-tester-2w6wn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.332333577s May 13 11:51:22.393: INFO: Pod "pvc-snapshottable-tester-2w6wn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.44253787s May 13 11:51:24.502: INFO: Pod "pvc-snapshottable-tester-2w6wn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.55122121s May 13 11:51:26.611: INFO: Pod "pvc-snapshottable-tester-2w6wn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.660380902s ... skipping 3 lines ... May 13 11:51:35.048: INFO: Pod "pvc-snapshottable-tester-2w6wn": Phase="Pending", Reason="", readiness=false. Elapsed: 19.097305796s May 13 11:51:37.157: INFO: Pod "pvc-snapshottable-tester-2w6wn": Phase="Pending", Reason="", readiness=false. Elapsed: 21.206497884s May 13 11:51:39.267: INFO: Pod "pvc-snapshottable-tester-2w6wn": Phase="Pending", Reason="", readiness=false. Elapsed: 23.316242366s May 13 11:51:41.376: INFO: Pod "pvc-snapshottable-tester-2w6wn": Phase="Pending", Reason="", readiness=false. Elapsed: 25.425445161s May 13 11:51:43.486: INFO: Pod "pvc-snapshottable-tester-2w6wn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.535001683s [1mSTEP[0m: Saw pod success May 13 11:51:43.486: INFO: Pod "pvc-snapshottable-tester-2w6wn" satisfied condition "Succeeded or Failed" [1mSTEP[0m: [init] checking the claim May 13 11:51:43.595: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comqqqmf] to have phase Bound May 13 11:51:43.703: INFO: PersistentVolumeClaim test.csi.azure.comqqqmf found and phase=Bound (108.139605ms) [1mSTEP[0m: [init] checking the PV [1mSTEP[0m: [init] deleting the pod May 13 11:51:44.032: INFO: Pod pvc-snapshottable-tester-2w6wn has the following logs: ... skipping 13 lines ... May 13 11:51:51.455: INFO: received snapshotStatus map[boundVolumeSnapshotContentName:snapcontent-150e95bf-2836-4e18-8ab9-4b84dfc737ab creationTime:2022-05-13T11:51:47Z readyToUse:true restoreSize:5Gi] May 13 11:51:51.455: INFO: snapshotContentName snapcontent-150e95bf-2836-4e18-8ab9-4b84dfc737ab [1mSTEP[0m: checking the snapshot [1mSTEP[0m: checking the SnapshotContent [1mSTEP[0m: Modifying source data test [1mSTEP[0m: modifying the data in the source PVC May 13 11:51:51.894: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-xnrww" in namespace "snapshotting-6374" to be "Succeeded or Failed" May 13 11:51:52.003: INFO: Pod "pvc-snapshottable-data-tester-xnrww": Phase="Pending", Reason="", readiness=false. Elapsed: 108.331333ms May 13 11:51:54.112: INFO: Pod "pvc-snapshottable-data-tester-xnrww": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217493343s May 13 11:51:56.222: INFO: Pod "pvc-snapshottable-data-tester-xnrww": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327671949s May 13 11:51:58.331: INFO: Pod "pvc-snapshottable-data-tester-xnrww": Phase="Pending", Reason="", readiness=false. Elapsed: 6.437058102s May 13 11:52:00.442: INFO: Pod "pvc-snapshottable-data-tester-xnrww": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548024942s May 13 11:52:02.552: INFO: Pod "pvc-snapshottable-data-tester-xnrww": Phase="Pending", Reason="", readiness=false. Elapsed: 10.657274145s ... skipping 59 lines ... May 13 11:54:09.137: INFO: Pod "pvc-snapshottable-data-tester-xnrww": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m17.242896151s May 13 11:54:11.249: INFO: Pod "pvc-snapshottable-data-tester-xnrww": Phase="Pending", Reason="", readiness=false. Elapsed: 2m19.354358106s May 13 11:54:13.358: INFO: Pod "pvc-snapshottable-data-tester-xnrww": Phase="Pending", Reason="", readiness=false. Elapsed: 2m21.463414783s May 13 11:54:15.470: INFO: Pod "pvc-snapshottable-data-tester-xnrww": Phase="Pending", Reason="", readiness=false. Elapsed: 2m23.575602933s May 13 11:54:17.580: INFO: Pod "pvc-snapshottable-data-tester-xnrww": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2m25.685222435s [1mSTEP[0m: Saw pod success May 13 11:54:17.580: INFO: Pod "pvc-snapshottable-data-tester-xnrww" satisfied condition "Succeeded or Failed" May 13 11:54:17.834: INFO: Pod pvc-snapshottable-data-tester-xnrww has the following logs: May 13 11:54:17.834: INFO: Deleting pod "pvc-snapshottable-data-tester-xnrww" in namespace "snapshotting-6374" May 13 11:54:17.953: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-xnrww" to be fully deleted [1mSTEP[0m: creating a pvc from the snapshot [1mSTEP[0m: starting a pod to use the snapshot May 13 11:54:38.503: INFO: Running '/usr/local/bin/kubectl --server=https://kubetest-s2gs5bqg.westeurope.cloudapp.azure.com --kubeconfig=/root/tmp1431985631/kubeconfig/kubeconfig.westeurope.json --namespace=snapshotting-6374 exec restored-pvc-tester-4x9ns --namespace=snapshotting-6374 -- cat /mnt/test/data' ... skipping 47 lines ... [90mtest/e2e/storage/testsuites/snapshottable.go:113[0m [90mtest/e2e/storage/testsuites/snapshottable.go:176[0m should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent) [90mtest/e2e/storage/testsuites/snapshottable.go:278[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)","total":33,"completed":1,"skipped":144,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (ext4)] multiVolume [Slow][0m [1mshould concurrently access the single read-only volume from pods on the same node[0m [37mtest/e2e/storage/testsuites/multivolume.go:423[0m ... skipping 82 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the single read-only volume from pods on the same node [90mtest/e2e/storage/testsuites/multivolume.go:423[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node","total":36,"completed":2,"skipped":124,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] subPath[0m [1mshould verify container cannot write to subpath readonly volumes [Slow][0m [37mtest/e2e/storage/testsuites/subpath.go:425[0m ... skipping 19 lines ... May 13 11:51:10.692: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil May 13 11:51:10.803: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.commnbsc] to have phase Bound May 13 11:51:10.910: INFO: PersistentVolumeClaim test.csi.azure.commnbsc found but phase is Pending instead of Bound. May 13 11:51:13.019: INFO: PersistentVolumeClaim test.csi.azure.commnbsc found but phase is Pending instead of Bound. May 13 11:51:15.128: INFO: PersistentVolumeClaim test.csi.azure.commnbsc found and phase=Bound (4.324621225s) [1mSTEP[0m: Creating pod to format volume volume-prep-provisioning-4694 May 13 11:51:15.452: INFO: Waiting up to 5m0s for pod "volume-prep-provisioning-4694" in namespace "provisioning-4694" to be "Succeeded or Failed" May 13 11:51:15.576: INFO: Pod "volume-prep-provisioning-4694": Phase="Pending", Reason="", readiness=false. Elapsed: 123.101624ms May 13 11:51:17.691: INFO: Pod "volume-prep-provisioning-4694": Phase="Pending", Reason="", readiness=false. Elapsed: 2.238354169s May 13 11:51:19.801: INFO: Pod "volume-prep-provisioning-4694": Phase="Pending", Reason="", readiness=false. Elapsed: 4.348016502s May 13 11:51:21.909: INFO: Pod "volume-prep-provisioning-4694": Phase="Pending", Reason="", readiness=false. Elapsed: 6.456814207s May 13 11:51:24.019: INFO: Pod "volume-prep-provisioning-4694": Phase="Pending", Reason="", readiness=false. Elapsed: 8.566481962s May 13 11:51:26.129: INFO: Pod "volume-prep-provisioning-4694": Phase="Pending", Reason="", readiness=false. Elapsed: 10.676600448s May 13 11:51:28.239: INFO: Pod "volume-prep-provisioning-4694": Phase="Pending", Reason="", readiness=false. Elapsed: 12.786535305s May 13 11:51:30.348: INFO: Pod "volume-prep-provisioning-4694": Phase="Pending", Reason="", readiness=false. Elapsed: 14.895612817s May 13 11:51:32.458: INFO: Pod "volume-prep-provisioning-4694": Phase="Pending", Reason="", readiness=false. Elapsed: 17.005337601s May 13 11:51:34.568: INFO: Pod "volume-prep-provisioning-4694": Phase="Pending", Reason="", readiness=false. Elapsed: 19.11517478s May 13 11:51:36.678: INFO: Pod "volume-prep-provisioning-4694": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 21.225837246s [1mSTEP[0m: Saw pod success May 13 11:51:36.678: INFO: Pod "volume-prep-provisioning-4694" satisfied condition "Succeeded or Failed" May 13 11:51:36.678: INFO: Deleting pod "volume-prep-provisioning-4694" in namespace "provisioning-4694" May 13 11:51:36.790: INFO: Wait up to 5m0s for pod "volume-prep-provisioning-4694" to be fully deleted [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-jgbz [1mSTEP[0m: Checking for subpath error in container status May 13 11:53:33.226: INFO: Deleting pod "pod-subpath-test-dynamicpv-jgbz" in namespace "provisioning-4694" May 13 11:53:33.337: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-jgbz" to be fully deleted [1mSTEP[0m: Deleting pod May 13 11:53:35.555: INFO: Deleting pod "pod-subpath-test-dynamicpv-jgbz" in namespace "provisioning-4694" [1mSTEP[0m: Deleting pvc May 13 11:53:35.664: INFO: Deleting PersistentVolumeClaim "test.csi.azure.commnbsc" ... skipping 37 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should verify container cannot write to subpath readonly volumes [Slow] [90mtest/e2e/storage/testsuites/subpath.go:425[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]","total":30,"completed":1,"skipped":87,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volume-lifecycle-performance test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volume-lifecycle-performance ... skipping 284 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with different volume mode and retain data across pod recreation on different node [90mtest/e2e/storage/testsuites/multivolume.go:248[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node","total":26,"completed":1,"skipped":15,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes test/e2e/storage/framework/testsuite.go:51 May 13 11:55:53.868: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 287 lines ... 
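The subPath case above first runs a one-shot "volume-prep" pod to format and populate the volume, then starts a test pod whose container is expected to be unable to write through a read-only subPath. A hedged sketch of the kind of pod spec involved; the claim name, image, and paths are placeholders rather than the objects created by this run.

```go
// subpath_readonly.go - sketch of a pod mounting a PVC subPath read-only, in the spirit
// of the "container cannot write to subpath readonly volumes" case above.
// The claim name, image, and paths are illustrative placeholders.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-readonly"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "registry.k8s.io/e2e-test-images/busybox:1.29", // placeholder image
				Command: []string{"sh", "-c", "touch /test-volume/file && echo write-unexpectedly-succeeded"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
					SubPath:   "provisioning", // directory prepared by the volume-prep pod
					ReadOnly:  true,           // the write above is expected to fail
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
						ClaimName: "test.csi.azure.com-example", // placeholder claim
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```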
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (xfs)][Slow] volumes [90mtest/e2e/storage/framework/testsuite.go:50[0m should store data [90mtest/e2e/storage/testsuites/volumes.go:161[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data","total":34,"completed":3,"skipped":69,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] volumeIO[0m [1mshould write files of various sizes, verify size, validate content [Slow][0m [37mtest/e2e/storage/testsuites/volume_io.go:149[0m ... skipping 48 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] volumeIO [90mtest/e2e/storage/framework/testsuite.go:50[0m should write files of various sizes, verify size, validate content [Slow] [90mtest/e2e/storage/testsuites/volume_io.go:149[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]","total":26,"completed":2,"skipped":309,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource][0m [0mvolume snapshot controller[0m [90m[0m [1mshould check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)[0m [37mtest/e2e/storage/testsuites/snapshottable.go:278[0m ... skipping 17 lines ... May 13 11:55:28.832: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.com2qjhd] to have phase Bound May 13 11:55:28.941: INFO: PersistentVolumeClaim test.csi.azure.com2qjhd found but phase is Pending instead of Bound. May 13 11:55:31.049: INFO: PersistentVolumeClaim test.csi.azure.com2qjhd found but phase is Pending instead of Bound. May 13 11:55:33.159: INFO: PersistentVolumeClaim test.csi.azure.com2qjhd found and phase=Bound (4.326950022s) [1mSTEP[0m: [init] starting a pod to use the claim [1mSTEP[0m: [init] check pod success May 13 11:55:33.594: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-gl45w" in namespace "snapshotting-6847" to be "Succeeded or Failed" May 13 11:55:33.703: INFO: Pod "pvc-snapshottable-tester-gl45w": Phase="Pending", Reason="", readiness=false. Elapsed: 108.25525ms May 13 11:55:35.814: INFO: Pod "pvc-snapshottable-tester-gl45w": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.219404202s May 13 11:55:37.922: INFO: Pod "pvc-snapshottable-tester-gl45w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328138871s May 13 11:55:40.031: INFO: Pod "pvc-snapshottable-tester-gl45w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.436442455s May 13 11:55:42.140: INFO: Pod "pvc-snapshottable-tester-gl45w": Phase="Pending", Reason="", readiness=false. Elapsed: 8.545469546s May 13 11:55:44.249: INFO: Pod "pvc-snapshottable-tester-gl45w": Phase="Pending", Reason="", readiness=false. Elapsed: 10.654790688s May 13 11:55:46.358: INFO: Pod "pvc-snapshottable-tester-gl45w": Phase="Pending", Reason="", readiness=false. Elapsed: 12.763469085s May 13 11:55:48.469: INFO: Pod "pvc-snapshottable-tester-gl45w": Phase="Pending", Reason="", readiness=false. Elapsed: 14.874992943s May 13 11:55:50.581: INFO: Pod "pvc-snapshottable-tester-gl45w": Phase="Pending", Reason="", readiness=false. Elapsed: 16.986565499s May 13 11:55:52.690: INFO: Pod "pvc-snapshottable-tester-gl45w": Phase="Pending", Reason="", readiness=false. Elapsed: 19.095341129s May 13 11:55:54.798: INFO: Pod "pvc-snapshottable-tester-gl45w": Phase="Pending", Reason="", readiness=false. Elapsed: 21.204153815s May 13 11:55:56.908: INFO: Pod "pvc-snapshottable-tester-gl45w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.313785121s [1mSTEP[0m: Saw pod success May 13 11:55:56.908: INFO: Pod "pvc-snapshottable-tester-gl45w" satisfied condition "Succeeded or Failed" [1mSTEP[0m: [init] checking the claim May 13 11:55:57.017: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.com2qjhd] to have phase Bound May 13 11:55:57.125: INFO: PersistentVolumeClaim test.csi.azure.com2qjhd found and phase=Bound (107.964935ms) [1mSTEP[0m: [init] checking the PV [1mSTEP[0m: [init] deleting the pod May 13 11:55:57.473: INFO: Pod pvc-snapshottable-tester-gl45w has the following logs: ... skipping 33 lines ... May 13 11:56:05.926: INFO: WaitUntil finished successfully after 109.762064ms [1mSTEP[0m: getting the snapshot and snapshot content [1mSTEP[0m: checking the snapshot [1mSTEP[0m: checking the SnapshotContent [1mSTEP[0m: Modifying source data test [1mSTEP[0m: modifying the data in the source PVC May 13 11:56:06.472: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-5zkv8" in namespace "snapshotting-6847" to be "Succeeded or Failed" May 13 11:56:06.580: INFO: Pod "pvc-snapshottable-data-tester-5zkv8": Phase="Pending", Reason="", readiness=false. Elapsed: 108.370259ms May 13 11:56:08.689: INFO: Pod "pvc-snapshottable-data-tester-5zkv8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217284143s May 13 11:56:10.798: INFO: Pod "pvc-snapshottable-data-tester-5zkv8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326386492s May 13 11:56:12.908: INFO: Pod "pvc-snapshottable-data-tester-5zkv8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.436406763s May 13 11:56:15.017: INFO: Pod "pvc-snapshottable-data-tester-5zkv8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.545391836s May 13 11:56:17.127: INFO: Pod "pvc-snapshottable-data-tester-5zkv8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.655411936s ... skipping 2 lines ... May 13 11:56:23.454: INFO: Pod "pvc-snapshottable-data-tester-5zkv8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.982234312s May 13 11:56:25.567: INFO: Pod "pvc-snapshottable-data-tester-5zkv8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.094676325s May 13 11:56:27.677: INFO: Pod "pvc-snapshottable-data-tester-5zkv8": Phase="Pending", Reason="", readiness=false. Elapsed: 21.20476004s May 13 11:56:29.786: INFO: Pod "pvc-snapshottable-data-tester-5zkv8": Phase="Pending", Reason="", readiness=false. Elapsed: 23.314334652s May 13 11:56:31.895: INFO: Pod "pvc-snapshottable-data-tester-5zkv8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.42328232s [1mSTEP[0m: Saw pod success May 13 11:56:31.895: INFO: Pod "pvc-snapshottable-data-tester-5zkv8" satisfied condition "Succeeded or Failed" May 13 11:56:32.148: INFO: Pod pvc-snapshottable-data-tester-5zkv8 has the following logs: May 13 11:56:32.148: INFO: Deleting pod "pvc-snapshottable-data-tester-5zkv8" in namespace "snapshotting-6847" May 13 11:56:32.259: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-5zkv8" to be fully deleted [1mSTEP[0m: creating a pvc from the snapshot [1mSTEP[0m: starting a pod to use the snapshot May 13 11:56:50.806: INFO: Running '/usr/local/bin/kubectl --server=https://kubetest-s2gs5bqg.westeurope.cloudapp.azure.com --kubeconfig=/root/tmp1431985631/kubeconfig/kubeconfig.westeurope.json --namespace=snapshotting-6847 exec restored-pvc-tester-mfq4v --namespace=snapshotting-6847 -- cat /mnt/test/data' ... skipping 33 lines ... May 13 11:57:17.049: INFO: volumesnapshotcontents pre-provisioned-snapcontent-98d2ffc8-b834-42e0-a5e1-7cf1611b97cc has been found and is not deleted May 13 11:57:18.159: INFO: volumesnapshotcontents pre-provisioned-snapcontent-98d2ffc8-b834-42e0-a5e1-7cf1611b97cc has been found and is not deleted May 13 11:57:19.269: INFO: volumesnapshotcontents pre-provisioned-snapcontent-98d2ffc8-b834-42e0-a5e1-7cf1611b97cc has been found and is not deleted May 13 11:57:20.379: INFO: volumesnapshotcontents pre-provisioned-snapcontent-98d2ffc8-b834-42e0-a5e1-7cf1611b97cc has been found and is not deleted May 13 11:57:21.488: INFO: volumesnapshotcontents pre-provisioned-snapcontent-98d2ffc8-b834-42e0-a5e1-7cf1611b97cc has been found and is not deleted May 13 11:57:22.597: INFO: volumesnapshotcontents pre-provisioned-snapcontent-98d2ffc8-b834-42e0-a5e1-7cf1611b97cc has been found and is not deleted May 13 11:57:23.598: INFO: WaitUntil failed after reaching the timeout 30s [AfterEach] volume snapshot controller test/e2e/storage/testsuites/snapshottable.go:172 May 13 11:57:23.707: INFO: Error getting logs for pod restored-pvc-tester-mfq4v: the server could not find the requested resource (get pods restored-pvc-tester-mfq4v) May 13 11:57:23.707: INFO: Deleting pod "restored-pvc-tester-mfq4v" in namespace "snapshotting-6847" May 13 11:57:23.816: INFO: deleting claim "snapshotting-6847"/"pvc-w9rpm" May 13 11:57:23.923: INFO: deleting snapshot "snapshotting-6847"/"pre-provisioned-snapshot-98d2ffc8-b834-42e0-a5e1-7cf1611b97cc" May 13 11:57:24.032: INFO: deleting snapshot content "pre-provisioned-snapcontent-98d2ffc8-b834-42e0-a5e1-7cf1611b97cc" May 13 11:57:24.365: INFO: Waiting up to 5m0s for volumesnapshotcontents pre-provisioned-snapcontent-98d2ffc8-b834-42e0-a5e1-7cf1611b97cc to be deleted May 13 11:57:24.475: INFO: volumesnapshotcontents pre-provisioned-snapcontent-98d2ffc8-b834-42e0-a5e1-7cf1611b97cc has been found and is not deleted ... skipping 27 lines ... 
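The "creating a pvc from the snapshot" step above restores data by pointing a new claim's dataSource at a VolumeSnapshot, then reads `/mnt/test/data` from a pod using that claim. A sketch of such a restore claim; the storage class, size, and snapshot name are placeholders, and the resources field is typed as it was in client-go releases contemporary with this job (newer releases use VolumeResourceRequirements there).

```go
// restore_pvc.go - sketch of restoring a PVC from a VolumeSnapshot, as in the
// "creating a pvc from the snapshot" step above. Class, size, and names are placeholders.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	apiGroup := "snapshot.storage.k8s.io"
	sc := "test.csi.azure.com-example-sc"
	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "restored-pvc"},
		Spec: corev1.PersistentVolumeClaimSpec{
			StorageClassName: &sc,
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("5Gi")},
			},
			// The restore is driven entirely by this dataSource reference.
			DataSource: &corev1.TypedLocalObjectReference{
				APIGroup: &apiGroup,
				Kind:     "VolumeSnapshot",
				Name:     "pre-provisioned-snapshot-example", // placeholder snapshot name
			},
		},
	}
	out, _ := json.MarshalIndent(pvc, "", "  ")
	fmt.Println(string(out))
}
```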
[90mtest/e2e/storage/testsuites/snapshottable.go:113[0m [90mtest/e2e/storage/testsuites/snapshottable.go:176[0m should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent) [90mtest/e2e/storage/testsuites/snapshottable.go:278[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)","total":36,"completed":3,"skipped":171,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 May 13 11:57:42.056: INFO: Distro debian doesn't support ntfs -- skipping ... skipping 215 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with the same volume mode and retain data across pod recreation on the same node [90mtest/e2e/storage/testsuites/multivolume.go:138[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node","total":33,"completed":2,"skipped":148,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy[0m [1m(Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents[0m [37mtest/e2e/storage/testsuites/fsgroupchangepolicy.go:216[0m ... skipping 97 lines ... 
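The fsgroupchangepolicy case that starts below, "(Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents", hinges on the pod-level securityContext. A hedged sketch of that securityContext; the GID, image, and claim name are placeholders.

```go
// fsgroup_always.go - sketch of the pod securityContext exercised by the
// fsgroupchangepolicy "(Always)" case; GID, image, and claim name are placeholders.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	fsGroup := int64(2000)
	policy := corev1.FSGroupChangeAlways // re-apply the fsGroup to volume contents on every mount
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "fsgroup-test-pod"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				FSGroup:             &fsGroup,
				FSGroupChangePolicy: &policy,
			},
			Containers: []corev1.Container{{
				Name:    "checker",
				Image:   "registry.k8s.io/e2e-test-images/busybox:1.29", // placeholder image
				Command: []string{"sh", "-c", "id && ls -ln /mnt/volume1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "volume1", MountPath: "/mnt/volume1",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "volume1",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
						ClaimName: "test.csi.azure.com-example",
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```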
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy [90mtest/e2e/storage/framework/testsuite.go:50[0m (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents [90mtest/e2e/storage/testsuites/fsgroupchangepolicy.go:216[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents","total":33,"completed":2,"skipped":140,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath test/e2e/storage/framework/testsuite.go:51 May 13 11:58:07.261: INFO: Distro debian doesn't support ntfs -- skipping ... skipping 134 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral [90mtest/e2e/storage/framework/testsuite.go:50[0m should create read/write inline ephemeral volume [90mtest/e2e/storage/testsuites/ephemeral.go:196[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume","total":30,"completed":2,"skipped":217,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 May 13 11:58:39.134: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 66 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:258[0m [36mDistro debian doesn't support ntfs -- skipping[0m test/e2e/storage/framework/testsuite.go:127 [90m------------------------------[0m ... skipping 105 lines ... 
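The "should create read/write inline ephemeral volume" case above relies on a generic ephemeral volume declared inline in the pod spec, so the PVC shares the pod's lifecycle. A sketch of such a volume under assumed names; the storage class, image, and size are placeholders.

```go
// ephemeral_inline.go - sketch of a generic ephemeral (inline) volume, as in the
// "create read/write inline ephemeral volume" case; class, image, and size are placeholders.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	sc := "test.csi.azure.com-example-sc"
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "inline-volume-tester"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "csi-volume-tester",
				Image:        "registry.k8s.io/e2e-test-images/busybox:1.29", // placeholder image
				Command:      []string{"sh", "-c", "echo data > /mnt/test-0/out && cat /mnt/test-0/out"},
				VolumeMounts: []corev1.VolumeMount{{Name: "my-volume-0", MountPath: "/mnt/test-0"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "my-volume-0",
				VolumeSource: corev1.VolumeSource{
					// The claim is created with the pod and deleted when the pod goes away.
					Ephemeral: &corev1.EphemeralVolumeSource{
						VolumeClaimTemplate: &corev1.PersistentVolumeClaimTemplate{
							Spec: corev1.PersistentVolumeClaimSpec{
								StorageClassName: &sc,
								AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
								Resources: corev1.ResourceRequirements{
									Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("1Gi")},
								},
							},
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```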
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the single read-only volume from pods on the same node [90mtest/e2e/storage/testsuites/multivolume.go:423[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node","total":34,"completed":4,"skipped":129,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (block volmode)] multiVolume [Slow][0m [1mshould access to two volumes with the same volume mode and retain data across pod recreation on different node[0m [37mtest/e2e/storage/testsuites/multivolume.go:168[0m ... skipping 201 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with the same volume mode and retain data across pod recreation on different node [90mtest/e2e/storage/testsuites/multivolume.go:168[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node","total":45,"completed":2,"skipped":155,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand test/e2e/storage/framework/testsuite.go:51 May 13 12:00:34.240: INFO: Driver "test.csi.azure.com" does not support volume expansion - skipping ... skipping 64 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Pre-provisioned PV (block volmode)] volumeMode [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to use a volume in a pod with mismatched mode [Slow] [BeforeEach][0m [90mtest/e2e/storage/testsuites/volumemode.go:299[0m [36mDriver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping[0m test/e2e/storage/external/external.go:262 [90m------------------------------[0m ... skipping 22 lines ... May 13 11:57:43.038: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comp8dh5] to have phase Bound May 13 11:57:43.146: INFO: PersistentVolumeClaim test.csi.azure.comp8dh5 found but phase is Pending instead of Bound. May 13 11:57:45.255: INFO: PersistentVolumeClaim test.csi.azure.comp8dh5 found but phase is Pending instead of Bound. 
May 13 11:57:47.363: INFO: PersistentVolumeClaim test.csi.azure.comp8dh5 found and phase=Bound (4.325006439s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-ghnr [1mSTEP[0m: Creating a pod to test subpath May 13 11:57:47.690: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-ghnr" in namespace "provisioning-513" to be "Succeeded or Failed" May 13 11:57:47.799: INFO: Pod "pod-subpath-test-dynamicpv-ghnr": Phase="Pending", Reason="", readiness=false. Elapsed: 108.90753ms May 13 11:57:49.908: INFO: Pod "pod-subpath-test-dynamicpv-ghnr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217787333s May 13 11:57:52.018: INFO: Pod "pod-subpath-test-dynamicpv-ghnr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328032013s May 13 11:57:54.128: INFO: Pod "pod-subpath-test-dynamicpv-ghnr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.437952895s May 13 11:57:56.239: INFO: Pod "pod-subpath-test-dynamicpv-ghnr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548907292s May 13 11:57:58.349: INFO: Pod "pod-subpath-test-dynamicpv-ghnr": Phase="Pending", Reason="", readiness=false. Elapsed: 10.658608215s ... skipping 9 lines ... May 13 11:58:19.442: INFO: Pod "pod-subpath-test-dynamicpv-ghnr": Phase="Pending", Reason="", readiness=false. Elapsed: 31.751395587s May 13 11:58:21.550: INFO: Pod "pod-subpath-test-dynamicpv-ghnr": Phase="Pending", Reason="", readiness=false. Elapsed: 33.860160213s May 13 11:58:23.659: INFO: Pod "pod-subpath-test-dynamicpv-ghnr": Phase="Pending", Reason="", readiness=false. Elapsed: 35.969032461s May 13 11:58:25.769: INFO: Pod "pod-subpath-test-dynamicpv-ghnr": Phase="Pending", Reason="", readiness=false. Elapsed: 38.078326388s May 13 11:58:27.878: INFO: Pod "pod-subpath-test-dynamicpv-ghnr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.187257836s [1mSTEP[0m: Saw pod success May 13 11:58:27.878: INFO: Pod "pod-subpath-test-dynamicpv-ghnr" satisfied condition "Succeeded or Failed" May 13 11:58:27.986: INFO: Trying to get logs from node k8s-agentpool1-19417709-vmss000000 pod pod-subpath-test-dynamicpv-ghnr container test-container-subpath-dynamicpv-ghnr: <nil> [1mSTEP[0m: delete the pod May 13 11:58:28.237: INFO: Waiting for pod pod-subpath-test-dynamicpv-ghnr to disappear May 13 11:58:28.345: INFO: Pod pod-subpath-test-dynamicpv-ghnr no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-ghnr May 13 11:58:28.345: INFO: Deleting pod "pod-subpath-test-dynamicpv-ghnr" in namespace "provisioning-513" ... skipping 41 lines ... 
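The repeated "Waiting up to timeout=5m0s for PersistentVolumeClaims [...] to have phase Bound" lines above are a 2-second poll on the claim phase. A minimal client-go sketch of that loop; the kubeconfig path, namespace, and claim name are placeholders.

```go
// waitpvc.go - sketch of the "wait for PVC to have phase Bound" poll seen above.
// Kubeconfig path, namespace, and claim name are placeholders.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns, claim := "provisioning-example", "test.csi.azure.com-example" // placeholders
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), claim, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pvc.Status.Phase != corev1.ClaimBound {
			fmt.Printf("PersistentVolumeClaim %s found but phase is %s instead of Bound.\n", claim, pvc.Status.Phase)
			return false, nil
		}
		return true, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("PersistentVolumeClaim %s found and phase=Bound\n", claim)
}
```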
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should support readOnly directory specified in the volumeMount [90mtest/e2e/storage/testsuites/subpath.go:367[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":36,"completed":4,"skipped":238,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes test/e2e/storage/framework/testsuite.go:51 May 13 12:00:42.244: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 84 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral [90mtest/e2e/storage/framework/testsuite.go:50[0m should support multiple inline ephemeral volumes [90mtest/e2e/storage/testsuites/ephemeral.go:254[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":26,"completed":3,"skipped":358,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] subPath[0m [1mshould support restarting containers using file as subpath [Slow][LinuxOnly][0m [37mtest/e2e/storage/testsuites/subpath.go:333[0m ... skipping 71 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should support restarting containers using file as subpath [Slow][LinuxOnly] [90mtest/e2e/storage/testsuites/subpath.go:333[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]","total":33,"completed":3,"skipped":279,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (block volmode)] volumeMode[0m [1mshould fail to use a volume in a pod with mismatched mode [Slow][0m [37mtest/e2e/storage/testsuites/volumemode.go:299[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client May 13 11:58:39.187: INFO: >>> kubeConfig: /root/tmp1431985631/kubeconfig/kubeconfig.westeurope.json [1mSTEP[0m: Building a namespace api object, basename volumemode [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail to use a volume in a pod with mismatched mode [Slow] test/e2e/storage/testsuites/volumemode.go:299 May 13 11:58:39.949: INFO: Creating resource for dynamic PV May 13 11:58:39.949: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(test.csi.azure.com) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass volumemode-3605-e2e-scndg2k [1mSTEP[0m: creating a claim May 13 11:58:40.167: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.com8hgvl] to have phase Bound May 13 11:58:40.275: INFO: PersistentVolumeClaim test.csi.azure.com8hgvl found but phase is Pending instead of Bound. May 13 11:58:42.384: INFO: PersistentVolumeClaim test.csi.azure.com8hgvl found but phase is Pending instead of Bound. May 13 11:58:44.492: INFO: PersistentVolumeClaim test.csi.azure.com8hgvl found and phase=Bound (4.32487977s) [1mSTEP[0m: Creating pod [1mSTEP[0m: Waiting for the pod to fail May 13 11:58:45.036: INFO: Deleting pod "pod-22b421a4-44b4-4260-b9dd-8dec3d274992" in namespace "volumemode-3605" May 13 11:58:45.147: INFO: Wait up to 5m0s for pod "pod-22b421a4-44b4-4260-b9dd-8dec3d274992" to be fully deleted [1mSTEP[0m: Deleting pvc May 13 11:58:47.364: INFO: Deleting PersistentVolumeClaim "test.csi.azure.com8hgvl" May 13 11:58:47.474: INFO: Waiting up to 5m0s for PersistentVolume pvc-eee22f5e-dfb9-457f-9ff0-236809a3a665 to get deleted May 13 11:58:47.582: INFO: PersistentVolume pvc-eee22f5e-dfb9-457f-9ff0-236809a3a665 found and phase=Released (107.758526ms) ... skipping 57 lines ... 
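The "should fail to use a volume in a pod with mismatched mode [Slow]" case that starts below provisions a Block-mode claim and then deliberately consumes it the wrong way, so the suite waits for the pod to fail rather than to run. A hedged sketch of that mismatch; names, class, image, and size are placeholders.

```go
// mismatched_mode.go - sketch of the volumeMode mismatch exercised by the case below:
// a Block-mode PVC consumed through volumeMounts (filesystem) instead of volumeDevices.
// Names, class, image, and size are placeholders.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	sc := "volumemode-example-sc"
	block := corev1.PersistentVolumeBlock
	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "block-mode-claim"},
		Spec: corev1.PersistentVolumeClaimSpec{
			StorageClassName: &sc,
			VolumeMode:       &block, // raw block device requested
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("5Gi")},
			},
		},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-mismatched-mode"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "tester",
				Image: "registry.k8s.io/e2e-test-images/busybox:1.29", // placeholder image
				// Mismatch: a filesystem mount of a Block claim; the pod is expected to fail,
				// which is exactly the condition the e2e waits for.
				VolumeMounts: []corev1.VolumeMount{{Name: "vol", MountPath: "/mnt/vol"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "vol",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ClaimName: pvc.Name},
				},
			}},
		},
	}
	for _, obj := range []interface{}{pvc, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}
```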
[32m• [SLOW TEST:270.097 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] volumeMode [90mtest/e2e/storage/framework/testsuite.go:50[0m should fail to use a volume in a pod with mismatched mode [Slow] [90mtest/e2e/storage/testsuites/volumemode.go:299[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]","total":30,"completed":3,"skipped":285,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 May 13 12:03:09.308: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping ... skipping 38 lines ... May 13 11:58:08.331: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comjdx6s] to have phase Bound May 13 11:58:08.440: INFO: PersistentVolumeClaim test.csi.azure.comjdx6s found but phase is Pending instead of Bound. May 13 11:58:10.549: INFO: PersistentVolumeClaim test.csi.azure.comjdx6s found but phase is Pending instead of Bound. May 13 11:58:12.658: INFO: PersistentVolumeClaim test.csi.azure.comjdx6s found and phase=Bound (4.326979251s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-dmzb [1mSTEP[0m: Creating a pod to test subpath May 13 11:58:12.987: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-dmzb" in namespace "provisioning-1532" to be "Succeeded or Failed" May 13 11:58:13.096: INFO: Pod "pod-subpath-test-dynamicpv-dmzb": Phase="Pending", Reason="", readiness=false. Elapsed: 109.075772ms May 13 11:58:15.205: INFO: Pod "pod-subpath-test-dynamicpv-dmzb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217977539s May 13 11:58:17.315: INFO: Pod "pod-subpath-test-dynamicpv-dmzb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327500125s May 13 11:58:19.424: INFO: Pod "pod-subpath-test-dynamicpv-dmzb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.436540196s May 13 11:58:21.533: INFO: Pod "pod-subpath-test-dynamicpv-dmzb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.546204129s May 13 11:58:23.644: INFO: Pod "pod-subpath-test-dynamicpv-dmzb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.656481072s ... skipping 24 lines ... May 13 11:59:16.392: INFO: Pod "pod-subpath-test-dynamicpv-dmzb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m3.404983543s May 13 11:59:18.508: INFO: Pod "pod-subpath-test-dynamicpv-dmzb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m5.5205144s May 13 11:59:20.617: INFO: Pod "pod-subpath-test-dynamicpv-dmzb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m7.629701639s May 13 11:59:22.726: INFO: Pod "pod-subpath-test-dynamicpv-dmzb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m9.738999174s May 13 11:59:24.836: INFO: Pod "pod-subpath-test-dynamicpv-dmzb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 1m11.848518772s [1mSTEP[0m: Saw pod success May 13 11:59:24.836: INFO: Pod "pod-subpath-test-dynamicpv-dmzb" satisfied condition "Succeeded or Failed" May 13 11:59:24.946: INFO: Trying to get logs from node k8s-agentpool1-19417709-vmss000002 pod pod-subpath-test-dynamicpv-dmzb container test-container-subpath-dynamicpv-dmzb: <nil> [1mSTEP[0m: delete the pod May 13 11:59:25.200: INFO: Waiting for pod pod-subpath-test-dynamicpv-dmzb to disappear May 13 11:59:25.308: INFO: Pod pod-subpath-test-dynamicpv-dmzb no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-dmzb May 13 11:59:25.309: INFO: Deleting pod "pod-subpath-test-dynamicpv-dmzb" in namespace "provisioning-1532" ... skipping 66 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should support readOnly file specified in the volumeMount [LinuxOnly] [90mtest/e2e/storage/testsuites/subpath.go:382[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":33,"completed":3,"skipped":265,"failed":0} [36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (ext4)] volumes[0m [1mshould allow exec of files on the volume[0m [37mtest/e2e/storage/testsuites/volumes.go:198[0m ... skipping 17 lines ... May 13 12:00:35.262: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.com28czt] to have phase Bound May 13 12:00:35.370: INFO: PersistentVolumeClaim test.csi.azure.com28czt found but phase is Pending instead of Bound. May 13 12:00:37.479: INFO: PersistentVolumeClaim test.csi.azure.com28czt found but phase is Pending instead of Bound. May 13 12:00:39.588: INFO: PersistentVolumeClaim test.csi.azure.com28czt found and phase=Bound (4.325967613s) [1mSTEP[0m: Creating pod exec-volume-test-dynamicpv-c6w2 [1mSTEP[0m: Creating a pod to test exec-volume-test May 13 12:00:39.914: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-c6w2" in namespace "volume-7836" to be "Succeeded or Failed" May 13 12:00:40.022: INFO: Pod "exec-volume-test-dynamicpv-c6w2": Phase="Pending", Reason="", readiness=false. Elapsed: 108.227177ms May 13 12:00:42.132: INFO: Pod "exec-volume-test-dynamicpv-c6w2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218630461s May 13 12:00:44.242: INFO: Pod "exec-volume-test-dynamicpv-c6w2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328229968s May 13 12:00:46.352: INFO: Pod "exec-volume-test-dynamicpv-c6w2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438675611s May 13 12:00:48.465: INFO: Pod "exec-volume-test-dynamicpv-c6w2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.550706024s May 13 12:00:50.574: INFO: Pod "exec-volume-test-dynamicpv-c6w2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.660432627s ... skipping 16 lines ... May 13 12:01:26.437: INFO: Pod "exec-volume-test-dynamicpv-c6w2": Phase="Pending", Reason="", readiness=false. Elapsed: 46.52314248s May 13 12:01:28.546: INFO: Pod "exec-volume-test-dynamicpv-c6w2": Phase="Pending", Reason="", readiness=false. Elapsed: 48.63230336s May 13 12:01:30.655: INFO: Pod "exec-volume-test-dynamicpv-c6w2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 50.741359079s May 13 12:01:32.764: INFO: Pod "exec-volume-test-dynamicpv-c6w2": Phase="Pending", Reason="", readiness=false. Elapsed: 52.850247038s May 13 12:01:34.874: INFO: Pod "exec-volume-test-dynamicpv-c6w2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 54.960282724s [1mSTEP[0m: Saw pod success May 13 12:01:34.874: INFO: Pod "exec-volume-test-dynamicpv-c6w2" satisfied condition "Succeeded or Failed" May 13 12:01:34.982: INFO: Trying to get logs from node k8s-agentpool1-19417709-vmss000002 pod exec-volume-test-dynamicpv-c6w2 container exec-container-dynamicpv-c6w2: <nil> [1mSTEP[0m: delete the pod May 13 12:01:35.230: INFO: Waiting for pod exec-volume-test-dynamicpv-c6w2 to disappear May 13 12:01:35.339: INFO: Pod exec-volume-test-dynamicpv-c6w2 no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-dynamicpv-c6w2 May 13 12:01:35.339: INFO: Deleting pod "exec-volume-test-dynamicpv-c6w2" in namespace "volume-7836" ... skipping 39 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext4)] volumes [90mtest/e2e/storage/framework/testsuite.go:50[0m should allow exec of files on the volume [90mtest/e2e/storage/testsuites/volumes.go:198[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume","total":45,"completed":3,"skipped":220,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (block volmode)] volumes[0m [1mshould store data[0m [37mtest/e2e/storage/testsuites/volumes.go:161[0m ... skipping 93 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] volumes [90mtest/e2e/storage/framework/testsuite.go:50[0m should store data [90mtest/e2e/storage/testsuites/volumes.go:161[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":33,"completed":4,"skipped":309,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] test/e2e/storage/framework/testsuite.go:51 May 13 12:04:18.364: INFO: Driver test.csi.azure.com doesn't specify snapshot stress test options -- skipping ... skipping 218 lines ... 
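By contrast with the filesystem cases, the "[Testpattern: Dynamic PV (block volmode)] volumes should store data" run below consumes the claim as a raw device. A sketch of a pod that does this the supported way, via volumeDevices; the claim name, image, and device path are placeholders.

```go
// block_device_pod.go - sketch of consuming a Block-mode claim via volumeDevices,
// as in the "(block volmode)] volumes should store data" case below.
// Claim name, image, and device path are placeholders.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "block-volume-writer"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "writer",
				Image:   "registry.k8s.io/e2e-test-images/busybox:1.29", // placeholder image
				Command: []string{"sh", "-c", "dd if=/dev/urandom of=/dev/xvda bs=1M count=1"},
				// For volumeMode: Block the claim appears as a device node, not a mount point.
				VolumeDevices: []corev1.VolumeDevice{{Name: "data", DevicePath: "/dev/xvda"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "data",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
						ClaimName: "test.csi.azure.com-example",
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```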
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with the same volume mode and retain data across pod recreation on different node [90mtest/e2e/storage/testsuites/multivolume.go:168[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node","total":34,"completed":5,"skipped":199,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (ext4)] multiVolume [Slow][0m [1mshould access to two volumes with the same volume mode and retain data across pod recreation on different node[0m [37mtest/e2e/storage/testsuites/multivolume.go:168[0m ... skipping 205 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with the same volume mode and retain data across pod recreation on different node [90mtest/e2e/storage/testsuites/multivolume.go:168[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node","total":36,"completed":5,"skipped":334,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 May 13 12:05:02.744: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 130 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the single read-only volume from pods on the same node [90mtest/e2e/storage/testsuites/multivolume.go:423[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node","total":30,"completed":4,"skipped":319,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (ext4)] multiVolume [Slow][0m [1mshould concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS][0m [37mtest/e2e/storage/testsuites/multivolume.go:378[0m ... skipping 91 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS] [90mtest/e2e/storage/testsuites/multivolume.go:378[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]","total":26,"completed":4,"skipped":387,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 May 13 12:05:08.165: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 80 lines ... May 13 12:03:47.895: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.com8dwqh] to have phase Bound May 13 12:03:48.003: INFO: PersistentVolumeClaim test.csi.azure.com8dwqh found but phase is Pending instead of Bound. May 13 12:03:50.112: INFO: PersistentVolumeClaim test.csi.azure.com8dwqh found but phase is Pending instead of Bound. May 13 12:03:52.221: INFO: PersistentVolumeClaim test.csi.azure.com8dwqh found and phase=Bound (4.325564394s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-pfqj [1mSTEP[0m: Creating a pod to test multi_subpath May 13 12:03:52.553: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-pfqj" in namespace "provisioning-2735" to be "Succeeded or Failed" May 13 12:03:52.663: INFO: Pod "pod-subpath-test-dynamicpv-pfqj": Phase="Pending", Reason="", readiness=false. Elapsed: 110.156959ms May 13 12:03:54.772: INFO: Pod "pod-subpath-test-dynamicpv-pfqj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219554954s May 13 12:03:56.881: INFO: Pod "pod-subpath-test-dynamicpv-pfqj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328180812s May 13 12:03:58.992: INFO: Pod "pod-subpath-test-dynamicpv-pfqj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.43864937s May 13 12:04:01.101: INFO: Pod "pod-subpath-test-dynamicpv-pfqj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548036145s May 13 12:04:03.214: INFO: Pod "pod-subpath-test-dynamicpv-pfqj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.661501931s ... skipping 21 lines ... May 13 12:04:49.655: INFO: Pod "pod-subpath-test-dynamicpv-pfqj": Phase="Pending", Reason="", readiness=false. Elapsed: 57.102503474s May 13 12:04:51.764: INFO: Pod "pod-subpath-test-dynamicpv-pfqj": Phase="Pending", Reason="", readiness=false. Elapsed: 59.211379838s May 13 12:04:53.875: INFO: Pod "pod-subpath-test-dynamicpv-pfqj": Phase="Pending", Reason="", readiness=false. Elapsed: 1m1.321673705s May 13 12:04:55.986: INFO: Pod "pod-subpath-test-dynamicpv-pfqj": Phase="Pending", Reason="", readiness=false. Elapsed: 1m3.432673087s May 13 12:04:58.095: INFO: Pod "pod-subpath-test-dynamicpv-pfqj": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 1m5.541999976s [1mSTEP[0m: Saw pod success May 13 12:04:58.095: INFO: Pod "pod-subpath-test-dynamicpv-pfqj" satisfied condition "Succeeded or Failed" May 13 12:04:58.204: INFO: Trying to get logs from node k8s-agentpool1-19417709-vmss000000 pod pod-subpath-test-dynamicpv-pfqj container test-container-subpath-dynamicpv-pfqj: <nil> [1mSTEP[0m: delete the pod May 13 12:04:58.461: INFO: Waiting for pod pod-subpath-test-dynamicpv-pfqj to disappear May 13 12:04:58.570: INFO: Pod pod-subpath-test-dynamicpv-pfqj no longer exists [1mSTEP[0m: Deleting pod May 13 12:04:58.570: INFO: Deleting pod "pod-subpath-test-dynamicpv-pfqj" in namespace "provisioning-2735" ... skipping 21 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should support creating multiple subpath from same volumes [Slow] [90mtest/e2e/storage/testsuites/subpath.go:296[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]","total":33,"completed":4,"skipped":266,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 May 13 12:05:40.332: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping ... skipping 19 lines ... test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client May 13 12:05:40.333: INFO: >>> kubeConfig: /root/tmp1431985631/kubeconfig/kubeconfig.westeurope.json [1mSTEP[0m: Building a namespace api object, basename topology [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies test/e2e/storage/testsuites/topology.go:194 May 13 12:05:41.093: INFO: Driver didn't provide topology keys -- skipping [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology test/e2e/framework/framework.go:188 May 13 12:05:41.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "topology-7756" for this suite. [36m[1mS [SKIPPING] [0.984 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (immediate binding)] topology [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [Measurement][0m [90mtest/e2e/storage/testsuites/topology.go:194[0m [36mDriver didn't provide topology keys -- skipping[0m test/e2e/storage/testsuites/topology.go:126 [90m------------------------------[0m ... skipping 88 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:269[0m [36mDriver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping[0m test/e2e/storage/external/external.go:262 [90m------------------------------[0m ... 
skipping 106 lines ... May 13 12:05:09.333: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comrglzc] to have phase Bound May 13 12:05:09.442: INFO: PersistentVolumeClaim test.csi.azure.comrglzc found but phase is Pending instead of Bound. May 13 12:05:11.552: INFO: PersistentVolumeClaim test.csi.azure.comrglzc found but phase is Pending instead of Bound. May 13 12:05:13.663: INFO: PersistentVolumeClaim test.csi.azure.comrglzc found and phase=Bound (4.33015806s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-t4d9 [1mSTEP[0m: Creating a pod to test subpath May 13 12:05:13.993: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-t4d9" in namespace "provisioning-7457" to be "Succeeded or Failed" May 13 12:05:14.102: INFO: Pod "pod-subpath-test-dynamicpv-t4d9": Phase="Pending", Reason="", readiness=false. Elapsed: 109.216628ms May 13 12:05:16.215: INFO: Pod "pod-subpath-test-dynamicpv-t4d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222662992s May 13 12:05:18.327: INFO: Pod "pod-subpath-test-dynamicpv-t4d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.334102139s May 13 12:05:20.438: INFO: Pod "pod-subpath-test-dynamicpv-t4d9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.444893073s May 13 12:05:22.548: INFO: Pod "pod-subpath-test-dynamicpv-t4d9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.555570478s May 13 12:05:24.658: INFO: Pod "pod-subpath-test-dynamicpv-t4d9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.665470409s ... skipping 3 lines ... May 13 12:05:33.100: INFO: Pod "pod-subpath-test-dynamicpv-t4d9": Phase="Pending", Reason="", readiness=false. Elapsed: 19.107467321s May 13 12:05:35.210: INFO: Pod "pod-subpath-test-dynamicpv-t4d9": Phase="Pending", Reason="", readiness=false. Elapsed: 21.217830718s May 13 12:05:37.320: INFO: Pod "pod-subpath-test-dynamicpv-t4d9": Phase="Pending", Reason="", readiness=false. Elapsed: 23.327855408s May 13 12:05:39.431: INFO: Pod "pod-subpath-test-dynamicpv-t4d9": Phase="Pending", Reason="", readiness=false. Elapsed: 25.437955288s May 13 12:05:41.541: INFO: Pod "pod-subpath-test-dynamicpv-t4d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.548454067s [1mSTEP[0m: Saw pod success May 13 12:05:41.541: INFO: Pod "pod-subpath-test-dynamicpv-t4d9" satisfied condition "Succeeded or Failed" May 13 12:05:41.651: INFO: Trying to get logs from node k8s-agentpool1-19417709-vmss000001 pod pod-subpath-test-dynamicpv-t4d9 container test-container-subpath-dynamicpv-t4d9: <nil> [1mSTEP[0m: delete the pod May 13 12:05:41.880: INFO: Waiting for pod pod-subpath-test-dynamicpv-t4d9 to disappear May 13 12:05:41.988: INFO: Pod pod-subpath-test-dynamicpv-t4d9 no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-t4d9 May 13 12:05:41.988: INFO: Deleting pod "pod-subpath-test-dynamicpv-t4d9" in namespace "provisioning-7457" [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-t4d9 [1mSTEP[0m: Creating a pod to test subpath May 13 12:05:42.210: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-t4d9" in namespace "provisioning-7457" to be "Succeeded or Failed" May 13 12:05:42.319: INFO: Pod "pod-subpath-test-dynamicpv-t4d9": Phase="Pending", Reason="", readiness=false. Elapsed: 109.202078ms May 13 12:05:44.430: INFO: Pod "pod-subpath-test-dynamicpv-t4d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219993516s May 13 12:05:46.540: INFO: Pod "pod-subpath-test-dynamicpv-t4d9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.330400469s May 13 12:05:48.651: INFO: Pod "pod-subpath-test-dynamicpv-t4d9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.440946642s May 13 12:05:50.761: INFO: Pod "pod-subpath-test-dynamicpv-t4d9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.550997297s May 13 12:05:52.871: INFO: Pod "pod-subpath-test-dynamicpv-t4d9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.661762389s May 13 12:05:54.983: INFO: Pod "pod-subpath-test-dynamicpv-t4d9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.772976094s May 13 12:05:57.092: INFO: Pod "pod-subpath-test-dynamicpv-t4d9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.882046989s May 13 12:05:59.201: INFO: Pod "pod-subpath-test-dynamicpv-t4d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.991404096s [1mSTEP[0m: Saw pod success May 13 12:05:59.201: INFO: Pod "pod-subpath-test-dynamicpv-t4d9" satisfied condition "Succeeded or Failed" May 13 12:05:59.310: INFO: Trying to get logs from node k8s-agentpool1-19417709-vmss000001 pod pod-subpath-test-dynamicpv-t4d9 container test-container-subpath-dynamicpv-t4d9: <nil> [1mSTEP[0m: delete the pod May 13 12:05:59.536: INFO: Waiting for pod pod-subpath-test-dynamicpv-t4d9 to disappear May 13 12:05:59.645: INFO: Pod pod-subpath-test-dynamicpv-t4d9 no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-t4d9 May 13 12:05:59.645: INFO: Deleting pod "pod-subpath-test-dynamicpv-t4d9" in namespace "provisioning-7457" ... skipping 22 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should support existing directories when readOnly specified in the volumeSource [90mtest/e2e/storage/testsuites/subpath.go:397[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":26,"completed":5,"skipped":627,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes test/e2e/storage/framework/testsuite.go:51 May 13 12:06:36.513: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 31 lines ... 
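The case above, "should support existing directories when readOnly specified in the volumeSource", differs from the earlier read-only subPath case in where the flag lives: on the volume source itself rather than on the volumeMount. A hedged sketch of that variant; all names and the image are placeholders.

```go
// readonly_volumesource.go - sketch of readOnly set on the volume source
// (PersistentVolumeClaimVolumeSource.ReadOnly) rather than on the volumeMount,
// matching the "readOnly specified in the volumeSource" case above. Names are placeholders.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-readonly-source"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "reader",
				Image:   "registry.k8s.io/e2e-test-images/busybox:1.29", // placeholder image
				Command: []string{"sh", "-c", "ls /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
					SubPath:   "existing-dir", // placeholder directory prepared beforehand
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
						ClaimName: "test.csi.azure.com-example",
						ReadOnly:  true, // read-only at the source, not at the mount
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```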
[It] should check snapshot fields, check restore correctly works, check deletion (ephemeral) test/e2e/storage/testsuites/snapshottable.go:177 May 13 12:04:19.186: INFO: Creating resource for dynamic PV May 13 12:04:19.186: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(test.csi.azure.com) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass snapshotting-1076-e2e-sc8hlx7 [1mSTEP[0m: [init] starting a pod to use the claim May 13 12:04:19.409: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-b87z8" in namespace "snapshotting-1076" to be "Succeeded or Failed" May 13 12:04:19.519: INFO: Pod "pvc-snapshottable-tester-b87z8": Phase="Pending", Reason="", readiness=false. Elapsed: 110.099913ms May 13 12:04:21.630: INFO: Pod "pvc-snapshottable-tester-b87z8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221430165s May 13 12:04:23.740: INFO: Pod "pvc-snapshottable-tester-b87z8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33168976s May 13 12:04:25.852: INFO: Pod "pvc-snapshottable-tester-b87z8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.443317551s May 13 12:04:27.962: INFO: Pod "pvc-snapshottable-tester-b87z8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.553403792s May 13 12:04:30.074: INFO: Pod "pvc-snapshottable-tester-b87z8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.664951609s ... skipping 22 lines ... May 13 12:05:18.626: INFO: Pod "pvc-snapshottable-tester-b87z8": Phase="Pending", Reason="", readiness=false. Elapsed: 59.217847275s May 13 12:05:20.738: INFO: Pod "pvc-snapshottable-tester-b87z8": Phase="Pending", Reason="", readiness=false. Elapsed: 1m1.329120874s May 13 12:05:22.848: INFO: Pod "pvc-snapshottable-tester-b87z8": Phase="Pending", Reason="", readiness=false. Elapsed: 1m3.439598877s May 13 12:05:24.958: INFO: Pod "pvc-snapshottable-tester-b87z8": Phase="Pending", Reason="", readiness=false. Elapsed: 1m5.549665647s May 13 12:05:27.069: INFO: Pod "pvc-snapshottable-tester-b87z8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m7.659929692s [1mSTEP[0m: Saw pod success May 13 12:05:27.069: INFO: Pod "pvc-snapshottable-tester-b87z8" satisfied condition "Succeeded or Failed" [1mSTEP[0m: [init] checking the claim [1mSTEP[0m: creating a SnapshotClass [1mSTEP[0m: creating a dynamic VolumeSnapshot May 13 12:05:27.511: INFO: Waiting up to 5m0s for VolumeSnapshot snapshot-fjpkh to become ready May 13 12:05:27.621: INFO: VolumeSnapshot snapshot-fjpkh found but is not ready. May 13 12:05:29.732: INFO: VolumeSnapshot snapshot-fjpkh found but is not ready. ... skipping 49 lines ... 
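The "VolumeSnapshot snapshot-fjpkh found but is not ready" entries above poll the snapshot's status.readyToUse field. A sketch of that check using the client-go dynamic client, with illustrative names (using snapshot.storage.k8s.io/v1 as the assumed group/version of the VolumeSnapshot CRD):

package snapshotwait

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/dynamic"
)

// volumeSnapshotGVR identifies the VolumeSnapshot custom resource.
var volumeSnapshotGVR = schema.GroupVersionResource{
	Group:    "snapshot.storage.k8s.io",
	Version:  "v1",
	Resource: "volumesnapshots",
}

// waitForSnapshotReady polls status.readyToUse until the snapshot is usable,
// mirroring the "found but is not ready" lines in the log above.
func waitForSnapshotReady(dc dynamic.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		vs, err := dc.Resource(volumeSnapshotGVR).Namespace(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		ready, found, err := unstructured.NestedBool(vs.Object, "status", "readyToUse")
		if err != nil || !found {
			return false, nil // status not populated yet
		}
		return ready, nil
	})
}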
[90mtest/e2e/storage/testsuites/snapshottable.go:113[0m [90mtest/e2e/storage/testsuites/snapshottable.go:176[0m should check snapshot fields, check restore correctly works, check deletion (ephemeral) [90mtest/e2e/storage/testsuites/snapshottable.go:177[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)","total":33,"completed":5,"skipped":400,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] subPath[0m [1mshould fail if subpath with backstepping is outside the volume [Slow][LinuxOnly][0m [37mtest/e2e/storage/testsuites/subpath.go:280[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client May 13 12:05:02.799: INFO: >>> kubeConfig: /root/tmp1431985631/kubeconfig/kubeconfig.westeurope.json [1mSTEP[0m: Building a namespace api object, basename provisioning [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] test/e2e/storage/testsuites/subpath.go:280 May 13 12:05:03.558: INFO: Creating resource for dynamic PV May 13 12:05:03.558: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(test.csi.azure.com) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass provisioning-5827-e2e-scz76gq [1mSTEP[0m: creating a claim May 13 12:05:03.666: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil May 13 12:05:03.777: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comddsxx] to have phase Bound May 13 12:05:03.885: INFO: PersistentVolumeClaim test.csi.azure.comddsxx found but phase is Pending instead of Bound. May 13 12:05:05.994: INFO: PersistentVolumeClaim test.csi.azure.comddsxx found but phase is Pending instead of Bound. May 13 12:05:08.103: INFO: PersistentVolumeClaim test.csi.azure.comddsxx found and phase=Bound (4.325945357s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-qmjh [1mSTEP[0m: Checking for subpath error in container status May 13 12:05:38.658: INFO: Deleting pod "pod-subpath-test-dynamicpv-qmjh" in namespace "provisioning-5827" May 13 12:05:38.768: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-qmjh" to be fully deleted [1mSTEP[0m: Deleting pod May 13 12:05:40.984: INFO: Deleting pod "pod-subpath-test-dynamicpv-qmjh" in namespace "provisioning-5827" [1mSTEP[0m: Deleting pvc May 13 12:05:41.093: INFO: Deleting PersistentVolumeClaim "test.csi.azure.comddsxx" ... skipping 28 lines ... 
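The pod-subpath-test-* pods above mount a directory of the provisioned volume into the container through volumeMounts[].subPath; the "backstepping" case points the subPath outside the volume (for example with "..") and only checks that kubelet records a subpath error in the container status instead of starting the container. A sketch of the mount shape, with illustrative names and paths (not the framework's generated values):

package subpathdemo

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// subPathPod mounts only a sub-directory of the claim into the container.
// A relative subPath such as "../outside" is what the backstepping test uses;
// kubelet rejects it, which is why that test expects a failure.
func subPathPod(claimName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-subpath-test-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ClaimName: claimName},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-2",
				Command: []string{"sh", "-c", "ls /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
					SubPath:   "provisioning/inside-the-volume", // only this sub-directory is visible
				}},
			}},
		},
	}
}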
[32m• [SLOW TEST:141.250 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [90mtest/e2e/storage/testsuites/subpath.go:280[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]","total":36,"completed":6,"skipped":388,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral[0m [1mshould create read-only inline ephemeral volume[0m [37mtest/e2e/storage/testsuites/ephemeral.go:175[0m ... skipping 63 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral [90mtest/e2e/storage/framework/testsuite.go:50[0m should create read-only inline ephemeral volume [90mtest/e2e/storage/testsuites/ephemeral.go:175[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume","total":34,"completed":6,"skipped":203,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand test/e2e/storage/framework/testsuite.go:51 May 13 12:08:06.191: INFO: Driver "test.csi.azure.com" does not support volume expansion - skipping ... skipping 38 lines ... May 13 12:03:50.046: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.com2mmsm] to have phase Bound May 13 12:03:50.154: INFO: PersistentVolumeClaim test.csi.azure.com2mmsm found but phase is Pending instead of Bound. May 13 12:03:52.263: INFO: PersistentVolumeClaim test.csi.azure.com2mmsm found but phase is Pending instead of Bound. May 13 12:03:54.372: INFO: PersistentVolumeClaim test.csi.azure.com2mmsm found and phase=Bound (4.325218895s) [1mSTEP[0m: [init] starting a pod to use the claim [1mSTEP[0m: [init] check pod success May 13 12:03:54.815: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-pgj9r" in namespace "snapshotting-7885" to be "Succeeded or Failed" May 13 12:03:54.924: INFO: Pod "pvc-snapshottable-tester-pgj9r": Phase="Pending", Reason="", readiness=false. 
Elapsed: 108.704959ms May 13 12:03:57.032: INFO: Pod "pvc-snapshottable-tester-pgj9r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216925467s May 13 12:03:59.140: INFO: Pod "pvc-snapshottable-tester-pgj9r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.325043117s May 13 12:04:01.250: INFO: Pod "pvc-snapshottable-tester-pgj9r": Phase="Pending", Reason="", readiness=false. Elapsed: 6.434902059s May 13 12:04:03.360: INFO: Pod "pvc-snapshottable-tester-pgj9r": Phase="Pending", Reason="", readiness=false. Elapsed: 8.544849819s May 13 12:04:05.469: INFO: Pod "pvc-snapshottable-tester-pgj9r": Phase="Pending", Reason="", readiness=false. Elapsed: 10.653922291s May 13 12:04:07.578: INFO: Pod "pvc-snapshottable-tester-pgj9r": Phase="Pending", Reason="", readiness=false. Elapsed: 12.762492987s May 13 12:04:09.687: INFO: Pod "pvc-snapshottable-tester-pgj9r": Phase="Pending", Reason="", readiness=false. Elapsed: 14.872359038s May 13 12:04:11.797: INFO: Pod "pvc-snapshottable-tester-pgj9r": Phase="Pending", Reason="", readiness=false. Elapsed: 16.981937478s May 13 12:04:13.906: INFO: Pod "pvc-snapshottable-tester-pgj9r": Phase="Pending", Reason="", readiness=false. Elapsed: 19.090689764s May 13 12:04:16.015: INFO: Pod "pvc-snapshottable-tester-pgj9r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.200245132s [1mSTEP[0m: Saw pod success May 13 12:04:16.015: INFO: Pod "pvc-snapshottable-tester-pgj9r" satisfied condition "Succeeded or Failed" [1mSTEP[0m: [init] checking the claim May 13 12:04:16.123: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.com2mmsm] to have phase Bound May 13 12:04:16.231: INFO: PersistentVolumeClaim test.csi.azure.com2mmsm found and phase=Bound (107.821241ms) [1mSTEP[0m: [init] checking the PV [1mSTEP[0m: [init] deleting the pod May 13 12:04:16.612: INFO: Pod pvc-snapshottable-tester-pgj9r has the following logs: ... skipping 33 lines ... May 13 12:04:25.034: INFO: WaitUntil finished successfully after 107.970739ms [1mSTEP[0m: getting the snapshot and snapshot content [1mSTEP[0m: checking the snapshot [1mSTEP[0m: checking the SnapshotContent [1mSTEP[0m: Modifying source data test [1mSTEP[0m: modifying the data in the source PVC May 13 12:04:25.580: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-98lzk" in namespace "snapshotting-7885" to be "Succeeded or Failed" May 13 12:04:25.688: INFO: Pod "pvc-snapshottable-data-tester-98lzk": Phase="Pending", Reason="", readiness=false. Elapsed: 107.706458ms May 13 12:04:27.798: INFO: Pod "pvc-snapshottable-data-tester-98lzk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218290901s May 13 12:04:29.908: INFO: Pod "pvc-snapshottable-data-tester-98lzk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328487598s May 13 12:04:32.017: INFO: Pod "pvc-snapshottable-data-tester-98lzk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.437478216s May 13 12:04:34.126: INFO: Pod "pvc-snapshottable-data-tester-98lzk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.546022639s May 13 12:04:36.235: INFO: Pod "pvc-snapshottable-data-tester-98lzk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.654868148s ... skipping 38 lines ... May 13 12:05:58.502: INFO: Pod "pvc-snapshottable-data-tester-98lzk": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.921673699s May 13 12:06:00.611: INFO: Pod "pvc-snapshottable-data-tester-98lzk": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m35.030629221s May 13 12:06:02.719: INFO: Pod "pvc-snapshottable-data-tester-98lzk": Phase="Pending", Reason="", readiness=false. Elapsed: 1m37.139476601s May 13 12:06:04.828: INFO: Pod "pvc-snapshottable-data-tester-98lzk": Phase="Pending", Reason="", readiness=false. Elapsed: 1m39.248288979s May 13 12:06:06.937: INFO: Pod "pvc-snapshottable-data-tester-98lzk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m41.357437588s [1mSTEP[0m: Saw pod success May 13 12:06:06.937: INFO: Pod "pvc-snapshottable-data-tester-98lzk" satisfied condition "Succeeded or Failed" May 13 12:06:07.208: INFO: Pod pvc-snapshottable-data-tester-98lzk has the following logs: May 13 12:06:07.208: INFO: Deleting pod "pvc-snapshottable-data-tester-98lzk" in namespace "snapshotting-7885" May 13 12:06:07.323: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-98lzk" to be fully deleted [1mSTEP[0m: creating a pvc from the snapshot [1mSTEP[0m: starting a pod to use the snapshot May 13 12:07:29.870: INFO: Running '/usr/local/bin/kubectl --server=https://kubetest-s2gs5bqg.westeurope.cloudapp.azure.com --kubeconfig=/root/tmp1431985631/kubeconfig/kubeconfig.westeurope.json --namespace=snapshotting-7885 exec restored-pvc-tester-tdh8d --namespace=snapshotting-7885 -- cat /mnt/test/data' ... skipping 47 lines ... [90mtest/e2e/storage/testsuites/snapshottable.go:113[0m [90mtest/e2e/storage/testsuites/snapshottable.go:176[0m should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent) [90mtest/e2e/storage/testsuites/snapshottable.go:278[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)","total":45,"completed":4,"skipped":225,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral[0m [1mshould support two pods which have the same volume definition[0m [37mtest/e2e/storage/testsuites/ephemeral.go:216[0m ... skipping 63 lines ... 
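"Creating a pvc from the snapshot" above means the restored claim's spec.dataSource points at the VolumeSnapshot; the restored-pvc-tester pod then reads back /mnt/test/data to prove the data survived. A sketch of such a claim, assuming the core/v1 types of this Kubernetes generation and illustrative names:

package snapshotrestore

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// restoredPVC provisions a new volume pre-populated from an existing VolumeSnapshot.
func restoredPVC(ns, snapshotName, storageClass string) *corev1.PersistentVolumeClaim {
	apiGroup := "snapshot.storage.k8s.io"
	return &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "restored-pvc-", Namespace: ns},
		Spec: corev1.PersistentVolumeClaimSpec{
			StorageClassName: &storageClass,
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("5Gi")},
			},
			DataSource: &corev1.TypedLocalObjectReference{
				APIGroup: &apiGroup,
				Kind:     "VolumeSnapshot",
				Name:     snapshotName,
			},
		},
	}
}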
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral [90mtest/e2e/storage/framework/testsuite.go:50[0m should support two pods which have the same volume definition [90mtest/e2e/storage/testsuites/ephemeral.go:216[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition","total":30,"completed":5,"skipped":363,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] subPath[0m [1mshould support existing directory[0m [37mtest/e2e/storage/testsuites/subpath.go:207[0m ... skipping 17 lines ... May 13 12:06:37.535: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comqs2wz] to have phase Bound May 13 12:06:37.644: INFO: PersistentVolumeClaim test.csi.azure.comqs2wz found but phase is Pending instead of Bound. May 13 12:06:39.754: INFO: PersistentVolumeClaim test.csi.azure.comqs2wz found but phase is Pending instead of Bound. May 13 12:06:41.864: INFO: PersistentVolumeClaim test.csi.azure.comqs2wz found and phase=Bound (4.328903986s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-xrsz [1mSTEP[0m: Creating a pod to test subpath May 13 12:06:42.198: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-xrsz" in namespace "provisioning-8174" to be "Succeeded or Failed" May 13 12:06:42.307: INFO: Pod "pod-subpath-test-dynamicpv-xrsz": Phase="Pending", Reason="", readiness=false. Elapsed: 109.033416ms May 13 12:06:44.418: INFO: Pod "pod-subpath-test-dynamicpv-xrsz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220191933s May 13 12:06:46.530: INFO: Pod "pod-subpath-test-dynamicpv-xrsz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331651308s May 13 12:06:48.642: INFO: Pod "pod-subpath-test-dynamicpv-xrsz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.444066622s May 13 12:06:50.752: INFO: Pod "pod-subpath-test-dynamicpv-xrsz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.554182486s May 13 12:06:52.863: INFO: Pod "pod-subpath-test-dynamicpv-xrsz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.664538951s ... skipping 15 lines ... May 13 12:07:26.633: INFO: Pod "pod-subpath-test-dynamicpv-xrsz": Phase="Pending", Reason="", readiness=false. Elapsed: 44.43474278s May 13 12:07:28.743: INFO: Pod "pod-subpath-test-dynamicpv-xrsz": Phase="Pending", Reason="", readiness=false. Elapsed: 46.545028725s May 13 12:07:30.853: INFO: Pod "pod-subpath-test-dynamicpv-xrsz": Phase="Pending", Reason="", readiness=false. Elapsed: 48.655171414s May 13 12:07:32.963: INFO: Pod "pod-subpath-test-dynamicpv-xrsz": Phase="Pending", Reason="", readiness=false. Elapsed: 50.765009673s May 13 12:07:35.073: INFO: Pod "pod-subpath-test-dynamicpv-xrsz": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 52.875460721s [1mSTEP[0m: Saw pod success May 13 12:07:35.074: INFO: Pod "pod-subpath-test-dynamicpv-xrsz" satisfied condition "Succeeded or Failed" May 13 12:07:35.183: INFO: Trying to get logs from node k8s-agentpool1-19417709-vmss000000 pod pod-subpath-test-dynamicpv-xrsz container test-container-volume-dynamicpv-xrsz: <nil> [1mSTEP[0m: delete the pod May 13 12:07:35.435: INFO: Waiting for pod pod-subpath-test-dynamicpv-xrsz to disappear May 13 12:07:35.545: INFO: Pod pod-subpath-test-dynamicpv-xrsz no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-xrsz May 13 12:07:35.545: INFO: Deleting pod "pod-subpath-test-dynamicpv-xrsz" in namespace "provisioning-8174" ... skipping 41 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should support existing directory [90mtest/e2e/storage/testsuites/subpath.go:207[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":26,"completed":6,"skipped":775,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral[0m [1mshould support multiple inline ephemeral volumes[0m [37mtest/e2e/storage/testsuites/ephemeral.go:254[0m ... skipping 257 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with the same volume mode and retain data across pod recreation on the same node [90mtest/e2e/storage/testsuites/multivolume.go:138[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node","total":36,"completed":7,"skipped":393,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] ... skipping 91 lines ... May 13 12:08:23.761: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comspjw7] to have phase Bound May 13 12:08:23.870: INFO: PersistentVolumeClaim test.csi.azure.comspjw7 found but phase is Pending instead of Bound. May 13 12:08:25.980: INFO: PersistentVolumeClaim test.csi.azure.comspjw7 found but phase is Pending instead of Bound. 
May 13 12:08:28.091: INFO: PersistentVolumeClaim test.csi.azure.comspjw7 found and phase=Bound (4.329150166s) [1mSTEP[0m: Creating pod exec-volume-test-dynamicpv-lmrb [1mSTEP[0m: Creating a pod to test exec-volume-test May 13 12:08:28.420: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-lmrb" in namespace "volume-4120" to be "Succeeded or Failed" May 13 12:08:28.529: INFO: Pod "exec-volume-test-dynamicpv-lmrb": Phase="Pending", Reason="", readiness=false. Elapsed: 109.064785ms May 13 12:08:30.641: INFO: Pod "exec-volume-test-dynamicpv-lmrb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220306623s May 13 12:08:32.752: INFO: Pod "exec-volume-test-dynamicpv-lmrb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331333356s May 13 12:08:34.862: INFO: Pod "exec-volume-test-dynamicpv-lmrb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.442064688s May 13 12:08:36.973: INFO: Pod "exec-volume-test-dynamicpv-lmrb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.552490735s May 13 12:08:39.083: INFO: Pod "exec-volume-test-dynamicpv-lmrb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.662553059s ... skipping 9 lines ... May 13 12:09:00.189: INFO: Pod "exec-volume-test-dynamicpv-lmrb": Phase="Pending", Reason="", readiness=false. Elapsed: 31.769125482s May 13 12:09:02.299: INFO: Pod "exec-volume-test-dynamicpv-lmrb": Phase="Pending", Reason="", readiness=false. Elapsed: 33.879127759s May 13 12:09:04.410: INFO: Pod "exec-volume-test-dynamicpv-lmrb": Phase="Pending", Reason="", readiness=false. Elapsed: 35.990059808s May 13 12:09:06.520: INFO: Pod "exec-volume-test-dynamicpv-lmrb": Phase="Pending", Reason="", readiness=false. Elapsed: 38.099515646s May 13 12:09:08.629: INFO: Pod "exec-volume-test-dynamicpv-lmrb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.209184194s [1mSTEP[0m: Saw pod success May 13 12:09:08.630: INFO: Pod "exec-volume-test-dynamicpv-lmrb" satisfied condition "Succeeded or Failed" May 13 12:09:08.739: INFO: Trying to get logs from node k8s-agentpool1-19417709-vmss000002 pod exec-volume-test-dynamicpv-lmrb container exec-container-dynamicpv-lmrb: <nil> [1mSTEP[0m: delete the pod May 13 12:09:08.990: INFO: Waiting for pod exec-volume-test-dynamicpv-lmrb to disappear May 13 12:09:09.099: INFO: Pod exec-volume-test-dynamicpv-lmrb no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-dynamicpv-lmrb May 13 12:09:09.099: INFO: Deleting pod "exec-volume-test-dynamicpv-lmrb" in namespace "volume-4120" ... skipping 39 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext3)] volumes [90mtest/e2e/storage/framework/testsuite.go:50[0m should allow exec of files on the volume [90mtest/e2e/storage/testsuites/volumes.go:198[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume","total":30,"completed":6,"skipped":401,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology ... skipping 366 lines ... 
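The exec-volume-test-* pods above verify "should allow exec of files on the volume": the container drops a small script onto the mounted disk and runs it from there. A sketch of that kind of pod, with an illustrative script and names (not the framework's generated spec):

package execvolume

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// execVolumePod mounts the dynamically provisioned claim and executes a file from it.
func execVolumePod(claimName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "exec-volume-test-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "vol",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ClaimName: claimName},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "exec-container",
				Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-2",
				Command: []string{"sh", "-c",
					"echo 'echo ok' > /vol/exec.sh && chmod +x /vol/exec.sh && /vol/exec.sh"},
				VolumeMounts: []corev1.VolumeMount{{Name: "vol", MountPath: "/vol"}},
			}},
		},
	}
}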
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with the same volume mode and retain data across pod recreation on different node [90mtest/e2e/storage/testsuites/multivolume.go:168[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node","total":33,"completed":5,"skipped":530,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] volumes[0m [1mshould allow exec of files on the volume[0m [37mtest/e2e/storage/testsuites/volumes.go:198[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":45,"completed":5,"skipped":238,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client May 13 12:10:48.102: INFO: >>> kubeConfig: /root/tmp1431985631/kubeconfig/kubeconfig.westeurope.json ... skipping 10 lines ... May 13 12:10:49.081: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comnl29z] to have phase Bound May 13 12:10:49.189: INFO: PersistentVolumeClaim test.csi.azure.comnl29z found but phase is Pending instead of Bound. May 13 12:10:51.299: INFO: PersistentVolumeClaim test.csi.azure.comnl29z found but phase is Pending instead of Bound. May 13 12:10:53.408: INFO: PersistentVolumeClaim test.csi.azure.comnl29z found and phase=Bound (4.326180963s) [1mSTEP[0m: Creating pod exec-volume-test-dynamicpv-6rpx [1mSTEP[0m: Creating a pod to test exec-volume-test May 13 12:10:53.735: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-6rpx" in namespace "volume-7826" to be "Succeeded or Failed" May 13 12:10:53.842: INFO: Pod "exec-volume-test-dynamicpv-6rpx": Phase="Pending", Reason="", readiness=false. Elapsed: 107.903934ms May 13 12:10:55.953: INFO: Pod "exec-volume-test-dynamicpv-6rpx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218103941s May 13 12:10:58.062: INFO: Pod "exec-volume-test-dynamicpv-6rpx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32745195s May 13 12:11:00.172: INFO: Pod "exec-volume-test-dynamicpv-6rpx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.437032732s May 13 12:11:02.281: INFO: Pod "exec-volume-test-dynamicpv-6rpx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.545919921s May 13 12:11:04.390: INFO: Pod "exec-volume-test-dynamicpv-6rpx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.655174742s May 13 12:11:06.503: INFO: Pod "exec-volume-test-dynamicpv-6rpx": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.768414014s May 13 12:11:08.613: INFO: Pod "exec-volume-test-dynamicpv-6rpx": Phase="Pending", Reason="", readiness=false. Elapsed: 14.878414001s May 13 12:11:10.723: INFO: Pod "exec-volume-test-dynamicpv-6rpx": Phase="Pending", Reason="", readiness=false. Elapsed: 16.988712371s May 13 12:11:12.837: INFO: Pod "exec-volume-test-dynamicpv-6rpx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.101929026s [1mSTEP[0m: Saw pod success May 13 12:11:12.837: INFO: Pod "exec-volume-test-dynamicpv-6rpx" satisfied condition "Succeeded or Failed" May 13 12:11:12.944: INFO: Trying to get logs from node k8s-agentpool1-19417709-vmss000001 pod exec-volume-test-dynamicpv-6rpx container exec-container-dynamicpv-6rpx: <nil> [1mSTEP[0m: delete the pod May 13 12:11:13.194: INFO: Waiting for pod exec-volume-test-dynamicpv-6rpx to disappear May 13 12:11:13.303: INFO: Pod exec-volume-test-dynamicpv-6rpx no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-dynamicpv-6rpx May 13 12:11:13.303: INFO: Deleting pod "exec-volume-test-dynamicpv-6rpx" in namespace "volume-7826" ... skipping 166 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext4)] volumes [90mtest/e2e/storage/framework/testsuite.go:50[0m should store data [90mtest/e2e/storage/testsuites/volumes.go:161[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] volumes should store data","total":34,"completed":7,"skipped":314,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand test/e2e/storage/framework/testsuite.go:51 May 13 12:13:26.583: INFO: Distro debian doesn't support ntfs -- skipping ... skipping 183 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS] [90mtest/e2e/storage/testsuites/multivolume.go:323[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]","total":33,"completed":6,"skipped":403,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (xfs)][Slow] volumes[0m [1mshould allow exec of files on the volume[0m [37mtest/e2e/storage/testsuites/volumes.go:198[0m ... skipping 17 lines ... May 13 12:11:17.625: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comdjc8k] to have phase Bound May 13 12:11:17.733: INFO: PersistentVolumeClaim test.csi.azure.comdjc8k found but phase is Pending instead of Bound. May 13 12:11:19.843: INFO: PersistentVolumeClaim test.csi.azure.comdjc8k found but phase is Pending instead of Bound. May 13 12:11:21.952: INFO: PersistentVolumeClaim test.csi.azure.comdjc8k found and phase=Bound (4.326747395s) [1mSTEP[0m: Creating pod exec-volume-test-dynamicpv-9q5q [1mSTEP[0m: Creating a pod to test exec-volume-test May 13 12:11:22.278: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-9q5q" in namespace "volume-2740" to be "Succeeded or Failed" May 13 12:11:22.387: INFO: Pod "exec-volume-test-dynamicpv-9q5q": Phase="Pending", Reason="", readiness=false. Elapsed: 108.522587ms May 13 12:11:24.496: INFO: Pod "exec-volume-test-dynamicpv-9q5q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218065879s May 13 12:11:26.606: INFO: Pod "exec-volume-test-dynamicpv-9q5q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327799228s May 13 12:11:28.714: INFO: Pod "exec-volume-test-dynamicpv-9q5q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.436381043s May 13 12:11:30.825: INFO: Pod "exec-volume-test-dynamicpv-9q5q": Phase="Pending", Reason="", readiness=false. Elapsed: 8.546824287s May 13 12:11:32.934: INFO: Pod "exec-volume-test-dynamicpv-9q5q": Phase="Pending", Reason="", readiness=false. Elapsed: 10.656305852s ... skipping 20 lines ... May 13 12:12:17.238: INFO: Pod "exec-volume-test-dynamicpv-9q5q": Phase="Pending", Reason="", readiness=false. Elapsed: 54.959448101s May 13 12:12:19.347: INFO: Pod "exec-volume-test-dynamicpv-9q5q": Phase="Pending", Reason="", readiness=false. 
Elapsed: 57.068539219s May 13 12:12:21.456: INFO: Pod "exec-volume-test-dynamicpv-9q5q": Phase="Pending", Reason="", readiness=false. Elapsed: 59.177437729s May 13 12:12:23.567: INFO: Pod "exec-volume-test-dynamicpv-9q5q": Phase="Pending", Reason="", readiness=false. Elapsed: 1m1.289009603s May 13 12:12:25.676: INFO: Pod "exec-volume-test-dynamicpv-9q5q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m3.397906197s [1mSTEP[0m: Saw pod success May 13 12:12:25.676: INFO: Pod "exec-volume-test-dynamicpv-9q5q" satisfied condition "Succeeded or Failed" May 13 12:12:25.784: INFO: Trying to get logs from node k8s-agentpool1-19417709-vmss000000 pod exec-volume-test-dynamicpv-9q5q container exec-container-dynamicpv-9q5q: <nil> [1mSTEP[0m: delete the pod May 13 12:12:26.038: INFO: Waiting for pod exec-volume-test-dynamicpv-9q5q to disappear May 13 12:12:26.146: INFO: Pod exec-volume-test-dynamicpv-9q5q no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-dynamicpv-9q5q May 13 12:12:26.146: INFO: Deleting pod "exec-volume-test-dynamicpv-9q5q" in namespace "volume-2740" ... skipping 39 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (xfs)][Slow] volumes [90mtest/e2e/storage/framework/testsuite.go:50[0m should allow exec of files on the volume [90mtest/e2e/storage/testsuites/volumes.go:198[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume","total":36,"completed":8,"skipped":462,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes test/e2e/storage/framework/testsuite.go:51 May 13 12:14:39.934: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping ... skipping 118 lines ... 
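The "(block volmode)" patterns that appear above provision the claim with volumeMode: Block, so the pod consumes the Azure disk as a raw device through volumeDevices instead of a filesystem mount. A sketch of that pairing, with illustrative names, size, device path and command:

package blockvolume

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// blockPVCAndPod returns a Block-mode claim and a pod that attaches it as a raw device.
func blockPVCAndPod(storageClass string) (*corev1.PersistentVolumeClaim, *corev1.Pod) {
	blockMode := corev1.PersistentVolumeBlock
	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "block-pvc"},
		Spec: corev1.PersistentVolumeClaimSpec{
			StorageClassName: &storageClass,
			VolumeMode:       &blockMode,
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("5Gi")},
			},
		},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "block-consumer"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "blockvol",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ClaimName: pvc.Name},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "consumer",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-2",
				Command: []string{"sh", "-c", "dd if=/dev/xvda of=/dev/null bs=1M count=1"},
				// volumeDevices, not volumeMounts, is what exposes a Block-mode claim.
				VolumeDevices: []corev1.VolumeDevice{{Name: "blockvol", DevicePath: "/dev/xvda"}},
			}},
		},
	}
	return pvc, pod
}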
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy [90mtest/e2e/storage/framework/testsuite.go:50[0m (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents [90mtest/e2e/storage/testsuites/fsgroupchangepolicy.go:216[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents","total":26,"completed":7,"skipped":825,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (ext4)] multiVolume [Slow][0m [1mshould access to two volumes with different volume mode and retain data across pod recreation on the same node[0m [37mtest/e2e/storage/testsuites/multivolume.go:209[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":45,"completed":6,"skipped":238,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client May 13 12:12:25.726: INFO: >>> kubeConfig: /root/tmp1431985631/kubeconfig/kubeconfig.westeurope.json ... skipping 188 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with different volume mode and retain data across pod recreation on the same node [90mtest/e2e/storage/testsuites/multivolume.go:209[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node","total":45,"completed":7,"skipped":238,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning test/e2e/storage/framework/testsuite.go:51 May 13 12:16:32.990: INFO: Distro debian doesn't support ntfs -- skipping ... skipping 339 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy [90mtest/e2e/storage/framework/testsuite.go:50[0m (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents [90mtest/e2e/storage/testsuites/fsgroupchangepolicy.go:216[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents","total":30,"completed":7,"skipped":783,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 May 13 12:17:27.758: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 249 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with different volume mode and retain data across pod recreation on the same node [90mtest/e2e/storage/testsuites/multivolume.go:209[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node","total":33,"completed":6,"skipped":570,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow][0m [1mshould concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS][0m [37mtest/e2e/storage/testsuites/multivolume.go:378[0m ... skipping 86 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS] [90mtest/e2e/storage/testsuites/multivolume.go:378[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]","total":33,"completed":7,"skipped":486,"failed":0} [36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow][0m [1mshould concurrently access the single read-only volume from pods on the same node[0m [37mtest/e2e/storage/testsuites/multivolume.go:423[0m ... skipping 81 lines ... 
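The fsgroupchangepolicy cases above differ only in the pod-level securityContext: with Always the kubelet recursively changes ownership of the volume to the pod's fsGroup on every mount, while OnRootMismatch skips that walk when the volume root already has the expected ownership, which is what the "new pod with same fsgroup skips ownership changes" result demonstrates. A sketch of that securityContext, with illustrative values:

package fsgroupdemo

import corev1 "k8s.io/api/core/v1"

// onRootMismatchSecurityContext applies fsGroup 1000 but only changes ownership
// when the volume root does not already match (corev1.FSGroupChangeAlways is the
// other policy exercised above).
func onRootMismatchSecurityContext() *corev1.PodSecurityContext {
	fsGroup := int64(1000)
	policy := corev1.FSGroupChangeOnRootMismatch
	return &corev1.PodSecurityContext{
		FSGroup:             &fsGroup,
		FSGroupChangePolicy: &policy,
	}
}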
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the single read-only volume from pods on the same node [90mtest/e2e/storage/testsuites/multivolume.go:423[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node","total":36,"completed":9,"skipped":535,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 May 13 12:18:34.711: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 131 lines ... test/e2e/storage/external/external.go:262 [90m------------------------------[0m [36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] subPath[0m [1mshould fail if subpath directory is outside the volume [Slow][LinuxOnly][0m [37mtest/e2e/storage/testsuites/subpath.go:242[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client May 13 12:17:27.811: INFO: >>> kubeConfig: /root/tmp1431985631/kubeconfig/kubeconfig.westeurope.json [1mSTEP[0m: Building a namespace api object, basename provisioning [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail if subpath directory is outside the volume [Slow][LinuxOnly] test/e2e/storage/testsuites/subpath.go:242 May 13 12:17:28.573: INFO: Creating resource for dynamic PV May 13 12:17:28.573: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(test.csi.azure.com) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass provisioning-6473-e2e-scd4hpd [1mSTEP[0m: creating a claim May 13 12:17:28.682: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil May 13 12:17:28.792: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comrx4ch] to have phase Bound May 13 12:17:28.900: INFO: PersistentVolumeClaim test.csi.azure.comrx4ch found but phase is Pending instead of Bound. May 13 12:17:31.011: INFO: PersistentVolumeClaim test.csi.azure.comrx4ch found but phase is Pending instead of Bound. 
May 13 12:17:33.120: INFO: PersistentVolumeClaim test.csi.azure.comrx4ch found and phase=Bound (4.32798759s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-kljj [1mSTEP[0m: Checking for subpath error in container status May 13 12:18:35.668: INFO: Deleting pod "pod-subpath-test-dynamicpv-kljj" in namespace "provisioning-6473" May 13 12:18:35.778: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-kljj" to be fully deleted [1mSTEP[0m: Deleting pod May 13 12:18:37.998: INFO: Deleting pod "pod-subpath-test-dynamicpv-kljj" in namespace "provisioning-6473" [1mSTEP[0m: Deleting pvc May 13 12:18:38.107: INFO: Deleting PersistentVolumeClaim "test.csi.azure.comrx4ch" ... skipping 13 lines ... [32m• [SLOW TEST:96.618 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should fail if subpath directory is outside the volume [Slow][LinuxOnly] [90mtest/e2e/storage/testsuites/subpath.go:242[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]","total":30,"completed":8,"skipped":848,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (block volmode)] provisioning[0m [1mshould provision storage with pvc data source in parallel [Slow][0m [37mtest/e2e/storage/testsuites/provisioning.go:459[0m ... skipping 340 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] provisioning [90mtest/e2e/storage/framework/testsuite.go:50[0m should provision storage with pvc data source in parallel [Slow] [90mtest/e2e/storage/testsuites/provisioning.go:459[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]","total":34,"completed":8,"skipped":479,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath test/e2e/storage/framework/testsuite.go:51 May 13 12:19:22.349: INFO: Distro debian doesn't support ntfs -- skipping ... skipping 3 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:242[0m [36mDistro debian doesn't support ntfs -- skipping[0m test/e2e/storage/framework/testsuite.go:127 [90m------------------------------[0m ... skipping 223 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy [90mtest/e2e/storage/framework/testsuite.go:50[0m (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents [90mtest/e2e/storage/testsuites/fsgroupchangepolicy.go:216[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents","total":36,"completed":10,"skipped":705,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] ... skipping 151 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] provisioning [90mtest/e2e/storage/framework/testsuite.go:50[0m should provision storage with pvc data source [90mtest/e2e/storage/testsuites/provisioning.go:421[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source","total":33,"completed":8,"skipped":487,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 May 13 12:22:27.820: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping ... skipping 24 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:269[0m [36mDistro debian doesn't support ntfs -- skipping[0m test/e2e/storage/framework/testsuite.go:127 [90m------------------------------[0m ... skipping 27 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Inline-volume (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:258[0m [36mDriver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping[0m test/e2e/storage/external/external.go:262 [90m------------------------------[0m ... skipping 103 lines ... 
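"Should provision storage with pvc data source (in parallel)" above clones an existing claim: each new PVC sets spec.dataSource to the source PersistentVolumeClaim (no apiGroup, since PVCs are core objects) and the CSI driver provisions the copy. A sketch of such a clone, with illustrative names:

package pvcclone

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// clonedPVC requests a volume pre-populated from another claim in the same namespace.
func clonedPVC(ns, sourceClaim, storageClass string) *corev1.PersistentVolumeClaim {
	return &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{GenerateName: sourceClaim + "-cloned-", Namespace: ns},
		Spec: corev1.PersistentVolumeClaimSpec{
			StorageClassName: &storageClass,
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("5Gi")},
			},
			DataSource: &corev1.TypedLocalObjectReference{
				Kind: "PersistentVolumeClaim",
				Name: sourceClaim,
			},
		},
	}
}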
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral [90mtest/e2e/storage/framework/testsuite.go:50[0m should create read/write inline ephemeral volume [90mtest/e2e/storage/testsuites/ephemeral.go:196[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume","total":34,"completed":9,"skipped":717,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 May 13 12:23:42.215: INFO: Distro debian doesn't support ntfs -- skipping ... skipping 321 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with different volume mode and retain data across pod recreation on the same node [90mtest/e2e/storage/testsuites/multivolume.go:209[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node","total":30,"completed":9,"skipped":855,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes test/e2e/storage/framework/testsuite.go:51 May 13 12:23:57.725: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 159 lines ... May 13 12:23:51.543: INFO: PersistentVolume pvc-4b7a1f96-6ef5-4a1f-a43f-31628ef1f596 found and phase=Released (4m36.08977374s) May 13 12:23:56.653: INFO: PersistentVolume pvc-4b7a1f96-6ef5-4a1f-a43f-31628ef1f596 found and phase=Released (4m41.200193081s) May 13 12:24:01.765: INFO: PersistentVolume pvc-4b7a1f96-6ef5-4a1f-a43f-31628ef1f596 found and phase=Released (4m46.311921066s) May 13 12:24:06.877: INFO: PersistentVolume pvc-4b7a1f96-6ef5-4a1f-a43f-31628ef1f596 found and phase=Released (4m51.423625404s) May 13 12:24:11.987: INFO: PersistentVolume pvc-4b7a1f96-6ef5-4a1f-a43f-31628ef1f596 found and phase=Released (4m56.534206656s) [1mSTEP[0m: Deleting sc May 13 12:24:17.101: FAIL: while cleanup resource Unexpected error: <errors.aggregate | len:1, cap:1>: [ [ { msg: "persistent Volume pvc-4b7a1f96-6ef5-4a1f-a43f-31628ef1f596 not deleted by dynamic provisioner: PersistentVolume pvc-4b7a1f96-6ef5-4a1f-a43f-31628ef1f596 still exists within 5m0s", err: { s: "PersistentVolume pvc-4b7a1f96-6ef5-4a1f-a43f-31628ef1f596 still exists within 5m0s", ... skipping 28 lines ... 
May 13 12:24:17.321: INFO: At 2022-05-13 12:16:57 +0000 UTC - event for external-injector: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-4b7a1f96-6ef5-4a1f-a43f-31628ef1f596" May 13 12:24:17.321: INFO: At 2022-05-13 12:16:58 +0000 UTC - event for external-injector: {kubelet k8s-agentpool1-19417709-vmss000000} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" May 13 12:24:17.321: INFO: At 2022-05-13 12:16:59 +0000 UTC - event for external-injector: {kubelet k8s-agentpool1-19417709-vmss000000} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" in 306.349803ms May 13 12:24:17.321: INFO: At 2022-05-13 12:16:59 +0000 UTC - event for external-injector: {kubelet k8s-agentpool1-19417709-vmss000000} Started: Started container external-injector May 13 12:24:17.321: INFO: At 2022-05-13 12:16:59 +0000 UTC - event for external-injector: {kubelet k8s-agentpool1-19417709-vmss000000} Created: Created container external-injector May 13 12:24:17.321: INFO: At 2022-05-13 12:17:04 +0000 UTC - event for external-injector: {kubelet k8s-agentpool1-19417709-vmss000000} Killing: Stopping container external-injector May 13 12:24:17.321: INFO: At 2022-05-13 12:17:07 +0000 UTC - event for pod-d1766b3e-0561-4e44-b2ab-2b252545415a: {attachdetach-controller } FailedAttachVolume: Multi-Attach error for volume "pvc-4b7a1f96-6ef5-4a1f-a43f-31628ef1f596" Volume is already exclusively attached to one node and can't be attached to another May 13 12:24:17.321: INFO: At 2022-05-13 12:17:07 +0000 UTC - event for pod-d1766b3e-0561-4e44-b2ab-2b252545415a: {default-scheduler } Scheduled: Successfully assigned multivolume-3798/pod-d1766b3e-0561-4e44-b2ab-2b252545415a to k8s-agentpool1-19417709-vmss000002 May 13 12:24:17.321: INFO: At 2022-05-13 12:17:07 +0000 UTC - event for test.csi.azure.combc6qf-cloned: {test.csi.azure.com_k8s-agentpool1-19417709-vmss000002_261b8626-1c2f-42aa-9250-c0e318caeb06 } Provisioning: External provisioner is provisioning volume for claim "multivolume-3798/test.csi.azure.combc6qf-cloned" May 13 12:24:17.321: INFO: At 2022-05-13 12:17:07 +0000 UTC - event for test.csi.azure.combc6qf-cloned: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "test.csi.azure.com" or manually created by system administrator May 13 12:24:17.321: INFO: At 2022-05-13 12:17:09 +0000 UTC - event for test.csi.azure.combc6qf-cloned: {test.csi.azure.com_k8s-agentpool1-19417709-vmss000002_261b8626-1c2f-42aa-9250-c0e318caeb06 } ProvisioningSucceeded: Successfully provisioned volume pvc-555cd6fc-998a-432b-b8ee-ba5bedbff897 May 13 12:24:17.321: INFO: At 2022-05-13 12:18:28 +0000 UTC - event for pod-d1766b3e-0561-4e44-b2ab-2b252545415a: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-4b7a1f96-6ef5-4a1f-a43f-31628ef1f596" May 13 12:24:17.321: INFO: At 2022-05-13 12:18:31 +0000 UTC - event for pod-d1766b3e-0561-4e44-b2ab-2b252545415a: {kubelet k8s-agentpool1-19417709-vmss000002} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5" ... skipping 129 lines ... 
[Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m [91m[1mshould concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS] [Measurement][0m [90mtest/e2e/storage/testsuites/multivolume.go:378[0m [91mMay 13 12:24:17.102: while cleanup resource Unexpected error: <errors.aggregate | len:1, cap:1>: [ [ { msg: "persistent Volume pvc-4b7a1f96-6ef5-4a1f-a43f-31628ef1f596 not deleted by dynamic provisioner: PersistentVolume pvc-4b7a1f96-6ef5-4a1f-a43f-31628ef1f596 still exists within 5m0s", err: { s: "PersistentVolume pvc-4b7a1f96-6ef5-4a1f-a43f-31628ef1f596 still exists within 5m0s", ... skipping 3 lines ... ] persistent Volume pvc-4b7a1f96-6ef5-4a1f-a43f-31628ef1f596 not deleted by dynamic provisioner: PersistentVolume pvc-4b7a1f96-6ef5-4a1f-a43f-31628ef1f596 still exists within 5m0s occurred[0m test/e2e/storage/testsuites/multivolume.go:129 [90m------------------------------[0m {"msg":"FAILED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]","total":26,"completed":7,"skipped":835,"failed":1,"failures":["External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]"]} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 May 13 12:24:20.826: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping ... skipping 146 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS] [90mtest/e2e/storage/testsuites/multivolume.go:323[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]","total":33,"completed":7,"skipped":574,"failed":0} [36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (block volmode)] multiVolume [Slow][0m [1mshould access to two volumes with different volume mode and retain data across pod recreation on different node[0m [37mtest/e2e/storage/testsuites/multivolume.go:248[0m ... skipping 198 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with different volume mode and retain data across pod recreation on different node [90mtest/e2e/storage/testsuites/multivolume.go:248[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node","total":36,"completed":11,"skipped":799,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] ... skipping 104 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral [90mtest/e2e/storage/framework/testsuite.go:50[0m should create read-only inline ephemeral volume [90mtest/e2e/storage/testsuites/ephemeral.go:175[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume","total":26,"completed":8,"skipped":899,"failed":1,"failures":["External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]"]} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (ext4)] multiVolume [Slow][0m [1mshould concurrently access the single volume from pods on the same node[0m [37mtest/e2e/storage/testsuites/multivolume.go:298[0m ... skipping 148 lines ... 
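The read/write and read-only inline ephemeral specs above all exercise the generic ephemeral volume source: a PVC template embedded directly in the pod spec. A rough sketch of that volume shape follows, assuming client-go API types; the 5Gi request mirrors the claim size reported in the log, and the function name is illustrative.

```go
// Package sketch: build a pod volume backed by a generic ephemeral PVC template,
// the shape exercised by the "inline ephemeral volume" specs above.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// inlineEphemeralVolume returns a generic ephemeral volume bound to the given StorageClass.
func inlineEphemeralVolume(name, storageClass string) corev1.Volume {
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			Ephemeral: &corev1.EphemeralVolumeSource{
				VolumeClaimTemplate: &corev1.PersistentVolumeClaimTemplate{
					Spec: corev1.PersistentVolumeClaimSpec{
						AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
						StorageClassName: &storageClass,
						Resources: corev1.ResourceRequirements{
							Requests: corev1.ResourceList{
								// Matches the 5Gi claim size used by the external driver tests.
								corev1.ResourceStorage: resource.MustParse("5Gi"),
							},
						},
					},
				},
			},
		},
	}
}
```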
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the single volume from pods on the same node [90mtest/e2e/storage/testsuites/multivolume.go:298[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node","total":33,"completed":8,"skipped":575,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (filesystem volmode)] volumeLimits[0m [1mshould support volume limits [Serial][0m [37mtest/e2e/storage/testsuites/volumelimits.go:127[0m ... skipping 125 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits [90mtest/e2e/storage/framework/testsuite.go:50[0m should support volume limits [Serial] [90mtest/e2e/storage/testsuites/volumelimits.go:127[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]","total":45,"completed":8,"skipped":417,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource][0m [0mvolume snapshot controller[0m [90m[0m [1mshould check snapshot fields, check restore correctly works, check deletion (ephemeral)[0m [37mtest/e2e/storage/testsuites/snapshottable.go:177[0m ... skipping 10 lines ... [It] should check snapshot fields, check restore correctly works, check deletion (ephemeral) test/e2e/storage/testsuites/snapshottable.go:177 May 13 12:23:44.032: INFO: Creating resource for dynamic PV May 13 12:23:44.032: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(test.csi.azure.com) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass snapshotting-3992-e2e-sc46lsj [1mSTEP[0m: [init] starting a pod to use the claim May 13 12:23:44.252: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-tcs7z" in namespace "snapshotting-3992" to be "Succeeded or Failed" May 13 12:23:44.361: INFO: Pod "pvc-snapshottable-tester-tcs7z": Phase="Pending", Reason="", readiness=false. Elapsed: 108.979369ms May 13 12:23:46.469: INFO: Pod "pvc-snapshottable-tester-tcs7z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217657901s May 13 12:23:48.579: INFO: Pod "pvc-snapshottable-tester-tcs7z": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.327090008s May 13 12:23:50.688: INFO: Pod "pvc-snapshottable-tester-tcs7z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.4368272s May 13 12:23:52.798: INFO: Pod "pvc-snapshottable-tester-tcs7z": Phase="Pending", Reason="", readiness=false. Elapsed: 8.545931327s May 13 12:23:54.906: INFO: Pod "pvc-snapshottable-tester-tcs7z": Phase="Pending", Reason="", readiness=false. Elapsed: 10.654832094s ... skipping 6 lines ... May 13 12:24:09.678: INFO: Pod "pvc-snapshottable-tester-tcs7z": Phase="Pending", Reason="", readiness=false. Elapsed: 25.42627009s May 13 12:24:11.787: INFO: Pod "pvc-snapshottable-tester-tcs7z": Phase="Pending", Reason="", readiness=false. Elapsed: 27.535637126s May 13 12:24:13.896: INFO: Pod "pvc-snapshottable-tester-tcs7z": Phase="Pending", Reason="", readiness=false. Elapsed: 29.644668877s May 13 12:24:16.006: INFO: Pod "pvc-snapshottable-tester-tcs7z": Phase="Pending", Reason="", readiness=false. Elapsed: 31.75483659s May 13 12:24:18.115: INFO: Pod "pvc-snapshottable-tester-tcs7z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.863325488s [1mSTEP[0m: Saw pod success May 13 12:24:18.115: INFO: Pod "pvc-snapshottable-tester-tcs7z" satisfied condition "Succeeded or Failed" [1mSTEP[0m: [init] checking the claim [1mSTEP[0m: creating a SnapshotClass [1mSTEP[0m: creating a dynamic VolumeSnapshot May 13 12:24:18.555: INFO: Waiting up to 5m0s for VolumeSnapshot snapshot-dlggk to become ready May 13 12:24:18.663: INFO: VolumeSnapshot snapshot-dlggk found but is not ready. May 13 12:24:20.772: INFO: VolumeSnapshot snapshot-dlggk found but is not ready. ... skipping 40 lines ... May 13 12:25:52.728: INFO: volumesnapshotcontents snapcontent-d389d613-79e4-45fa-9f21-2fe112c79ea8 has been found and is not deleted May 13 12:25:53.837: INFO: volumesnapshotcontents snapcontent-d389d613-79e4-45fa-9f21-2fe112c79ea8 has been found and is not deleted May 13 12:25:54.946: INFO: volumesnapshotcontents snapcontent-d389d613-79e4-45fa-9f21-2fe112c79ea8 has been found and is not deleted May 13 12:25:56.055: INFO: volumesnapshotcontents snapcontent-d389d613-79e4-45fa-9f21-2fe112c79ea8 has been found and is not deleted May 13 12:25:57.164: INFO: volumesnapshotcontents snapcontent-d389d613-79e4-45fa-9f21-2fe112c79ea8 has been found and is not deleted May 13 12:25:58.272: INFO: volumesnapshotcontents snapcontent-d389d613-79e4-45fa-9f21-2fe112c79ea8 has been found and is not deleted May 13 12:25:59.273: INFO: WaitUntil failed after reaching the timeout 30s [AfterEach] volume snapshot controller test/e2e/storage/testsuites/snapshottable.go:172 May 13 12:25:59.412: INFO: Pod restored-pvc-tester-ckmtc has the following logs: May 13 12:25:59.412: INFO: Deleting pod "restored-pvc-tester-ckmtc" in namespace "snapshotting-3992" May 13 12:25:59.522: INFO: Wait up to 5m0s for pod "restored-pvc-tester-ckmtc" to be fully deleted May 13 12:26:31.740: INFO: deleting snapshot "snapshotting-3992"/"snapshot-dlggk" ... skipping 26 lines ... 
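The "Waiting up to 5m0s for VolumeSnapshot ... to become ready" loop above boils down to polling status.readyToUse on the snapshot object. Here is a sketch using the dynamic client so no typed snapshot clientset is needed; the GVR assumes the snapshot.storage.k8s.io/v1 API, and the helper name is made up.

```go
// Package sketch: poll a VolumeSnapshot until status.readyToUse is true.
package sketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/dynamic"
)

var volumeSnapshotGVR = schema.GroupVersionResource{
	Group:    "snapshot.storage.k8s.io",
	Version:  "v1",
	Resource: "volumesnapshots",
}

// waitForSnapshotReady blocks until the snapshot reports readyToUse or the timeout expires.
func waitForSnapshotReady(dyn dynamic.Interface, ns, name string, poll, timeout time.Duration) error {
	return wait.PollImmediate(poll, timeout, func() (bool, error) {
		snap, err := dyn.Resource(volumeSnapshotGVR).Namespace(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		ready, found, err := unstructured.NestedBool(snap.Object, "status", "readyToUse")
		if err != nil || !found {
			// Corresponds to the "found but is not ready" lines in the log.
			return false, nil
		}
		return ready, nil
	})
}
```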
[90mtest/e2e/storage/testsuites/snapshottable.go:113[0m [90mtest/e2e/storage/testsuites/snapshottable.go:176[0m should check snapshot fields, check restore correctly works, check deletion (ephemeral) [90mtest/e2e/storage/testsuites/snapshottable.go:177[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)","total":34,"completed":10,"skipped":842,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral ... skipping 94 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral [90mtest/e2e/storage/framework/testsuite.go:50[0m should support two pods which have the same volume definition [90mtest/e2e/storage/testsuites/ephemeral.go:216[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition","total":33,"completed":9,"skipped":661,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 May 13 12:26:40.812: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping ... skipping 24 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:258[0m [36mDriver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping[0m test/e2e/storage/external/external.go:262 [90m------------------------------[0m ... skipping 169 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] volumeMode [90mtest/e2e/storage/framework/testsuite.go:50[0m should not mount / map unused volumes in a pod [LinuxOnly] [90mtest/e2e/storage/testsuites/volumemode.go:354[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":36,"completed":12,"skipped":891,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral test/e2e/storage/framework/testsuite.go:51 May 13 12:27:10.208: INFO: Driver "test.csi.azure.com" does not support volume type "CSIInlineVolume" - skipping ... skipping 88 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Inline-volume (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:269[0m [36mDriver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping[0m test/e2e/storage/external/external.go:262 [90m------------------------------[0m ... skipping 32 lines ... May 13 12:26:25.979: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comcl985] to have phase Bound May 13 12:26:26.087: INFO: PersistentVolumeClaim test.csi.azure.comcl985 found but phase is Pending instead of Bound. May 13 12:26:28.197: INFO: PersistentVolumeClaim test.csi.azure.comcl985 found but phase is Pending instead of Bound. May 13 12:26:30.308: INFO: PersistentVolumeClaim test.csi.azure.comcl985 found and phase=Bound (4.328749013s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-8299 [1mSTEP[0m: Creating a pod to test subpath May 13 12:26:30.633: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-8299" in namespace "provisioning-3876" to be "Succeeded or Failed" May 13 12:26:30.743: INFO: Pod "pod-subpath-test-dynamicpv-8299": Phase="Pending", Reason="", readiness=false. Elapsed: 110.233512ms May 13 12:26:32.852: INFO: Pod "pod-subpath-test-dynamicpv-8299": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219481525s May 13 12:26:34.961: INFO: Pod "pod-subpath-test-dynamicpv-8299": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328043894s May 13 12:26:37.071: INFO: Pod "pod-subpath-test-dynamicpv-8299": Phase="Pending", Reason="", readiness=false. Elapsed: 6.437611109s May 13 12:26:39.179: INFO: Pod "pod-subpath-test-dynamicpv-8299": Phase="Pending", Reason="", readiness=false. Elapsed: 8.545963754s May 13 12:26:41.287: INFO: Pod "pod-subpath-test-dynamicpv-8299": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.654459495s May 13 12:26:43.396: INFO: Pod "pod-subpath-test-dynamicpv-8299": Phase="Pending", Reason="", readiness=false. Elapsed: 12.763605068s May 13 12:26:45.505: INFO: Pod "pod-subpath-test-dynamicpv-8299": Phase="Pending", Reason="", readiness=false. Elapsed: 14.872604529s May 13 12:26:47.615: INFO: Pod "pod-subpath-test-dynamicpv-8299": Phase="Pending", Reason="", readiness=false. Elapsed: 16.9817101s May 13 12:26:49.726: INFO: Pod "pod-subpath-test-dynamicpv-8299": Phase="Pending", Reason="", readiness=false. Elapsed: 19.093366958s May 13 12:26:51.835: INFO: Pod "pod-subpath-test-dynamicpv-8299": Phase="Pending", Reason="", readiness=false. Elapsed: 21.202338251s May 13 12:26:53.946: INFO: Pod "pod-subpath-test-dynamicpv-8299": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.31278499s [1mSTEP[0m: Saw pod success May 13 12:26:53.946: INFO: Pod "pod-subpath-test-dynamicpv-8299" satisfied condition "Succeeded or Failed" May 13 12:26:54.054: INFO: Trying to get logs from node k8s-agentpool1-19417709-vmss000001 pod pod-subpath-test-dynamicpv-8299 container test-container-volume-dynamicpv-8299: <nil> [1mSTEP[0m: delete the pod May 13 12:26:54.282: INFO: Waiting for pod pod-subpath-test-dynamicpv-8299 to disappear May 13 12:26:54.390: INFO: Pod pod-subpath-test-dynamicpv-8299 no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-8299 May 13 12:26:54.390: INFO: Deleting pod "pod-subpath-test-dynamicpv-8299" in namespace "provisioning-3876" ... skipping 29 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should support non-existent path [90mtest/e2e/storage/testsuites/subpath.go:196[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":45,"completed":9,"skipped":442,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes test/e2e/storage/framework/testsuite.go:51 May 13 12:28:06.952: INFO: Distro debian doesn't support ntfs -- skipping ... skipping 13 lines ... 
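The repeated "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" lines followed by "Trying to get logs from node ..." correspond to a poll-then-fetch pattern. A hedged client-go sketch of it is below; this is not the e2e framework's own helper, and the 2-second poll interval is an assumption.

```go
// Package sketch: wait for a test pod to finish, then fetch its container logs.
package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodCompletion blocks until the pod reaches Succeeded or Failed,
// then returns the logs of the named container.
func waitForPodCompletion(cs kubernetes.Interface, ns, pod, container string, timeout time.Duration) (string, error) {
	err := wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		p, err := cs.CoreV1().Pods(ns).Get(context.TODO(), pod, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		// Mirrors the periodic Phase="Pending"/"Running"/"Succeeded" lines above.
		fmt.Printf("Pod %q: Phase=%q\n", pod, p.Status.Phase)
		return p.Status.Phase == corev1.PodSucceeded || p.Status.Phase == corev1.PodFailed, nil
	})
	if err != nil {
		return "", err
	}
	raw, err := cs.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{Container: container}).Do(context.TODO()).Raw()
	return string(raw), err
}
```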
test/e2e/storage/framework/testsuite.go:127 [90m------------------------------[0m [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] subPath[0m [1mshould fail if subpath file is outside the volume [Slow][LinuxOnly][0m [37mtest/e2e/storage/testsuites/subpath.go:258[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client May 13 12:26:41.728: INFO: >>> kubeConfig: /root/tmp1431985631/kubeconfig/kubeconfig.westeurope.json [1mSTEP[0m: Building a namespace api object, basename provisioning [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail if subpath file is outside the volume [Slow][LinuxOnly] test/e2e/storage/testsuites/subpath.go:258 May 13 12:26:42.481: INFO: Creating resource for dynamic PV May 13 12:26:42.481: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(test.csi.azure.com) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass provisioning-3127-e2e-scqrclk [1mSTEP[0m: creating a claim May 13 12:26:42.589: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil May 13 12:26:42.700: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comsrzr6] to have phase Bound May 13 12:26:42.808: INFO: PersistentVolumeClaim test.csi.azure.comsrzr6 found but phase is Pending instead of Bound. May 13 12:26:44.917: INFO: PersistentVolumeClaim test.csi.azure.comsrzr6 found but phase is Pending instead of Bound. May 13 12:26:47.026: INFO: PersistentVolumeClaim test.csi.azure.comsrzr6 found and phase=Bound (4.325904672s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-b7x8 [1mSTEP[0m: Checking for subpath error in container status May 13 12:27:05.570: INFO: Deleting pod "pod-subpath-test-dynamicpv-b7x8" in namespace "provisioning-3127" May 13 12:27:05.681: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-b7x8" to be fully deleted [1mSTEP[0m: Deleting pod May 13 12:27:07.899: INFO: Deleting pod "pod-subpath-test-dynamicpv-b7x8" in namespace "provisioning-3127" [1mSTEP[0m: Deleting pvc May 13 12:27:08.007: INFO: Deleting PersistentVolumeClaim "test.csi.azure.comsrzr6" ... skipping 22 lines ... [32m• [SLOW TEST:98.593 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should fail if subpath file is outside the volume [Slow][LinuxOnly] [90mtest/e2e/storage/testsuites/subpath.go:258[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]","total":34,"completed":11,"skipped":913,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 May 13 12:28:20.325: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping ... skipping 3 lines ... 
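The "Checking for subpath error in container status" step above waits for the kubelet to surface the bad subPath in a container's waiting or terminated message. A simplified sketch of that check, assuming client-go types; the plain substring match is a stand-in for the framework's real matcher.

```go
// Package sketch: detect a subPath failure reported in container statuses.
package sketch

import (
	"strings"

	corev1 "k8s.io/api/core/v1"
)

// hasSubpathError reports whether any container in the pod is stuck with a
// status message that mentions the offending subPath.
func hasSubpathError(pod *corev1.Pod, subPath string) bool {
	for _, st := range pod.Status.ContainerStatuses {
		if st.State.Waiting != nil && strings.Contains(st.State.Waiting.Message, subPath) {
			return true
		}
		if st.State.Terminated != nil && strings.Contains(st.State.Terminated.Message, subPath) {
			return true
		}
	}
	return false
}
```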
[36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Inline-volume (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:280[0m [36mDriver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping[0m test/e2e/storage/external/external.go:262 [90m------------------------------[0m ... skipping 221 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with different volume mode and retain data across pod recreation on different node [90mtest/e2e/storage/testsuites/multivolume.go:248[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node","total":33,"completed":9,"skipped":642,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral[0m [1mshould create read/write inline ephemeral volume[0m [37mtest/e2e/storage/testsuites/ephemeral.go:196[0m ... skipping 46 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral [90mtest/e2e/storage/framework/testsuite.go:50[0m should create read/write inline ephemeral volume [90mtest/e2e/storage/testsuites/ephemeral.go:196[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume","total":34,"completed":12,"skipped":985,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] provisioning[0m [1mshould provision storage with pvc data source in parallel [Slow][0m [37mtest/e2e/storage/testsuites/provisioning.go:459[0m ... skipping 334 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] provisioning [90mtest/e2e/storage/framework/testsuite.go:50[0m should provision storage with pvc data source in parallel [Slow] [90mtest/e2e/storage/testsuites/provisioning.go:459[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]","total":26,"completed":9,"skipped":902,"failed":1,"failures":["External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]"]} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity test/e2e/storage/framework/testsuite.go:51 May 13 12:30:31.770: INFO: Driver test.csi.azure.com doesn't publish storage capacity -- skipping ... skipping 217 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with the same volume mode and retain data across pod recreation on the same node [90mtest/e2e/storage/testsuites/multivolume.go:138[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node","total":45,"completed":10,"skipped":486,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits[0m [1mshould verify that all csinodes have volume limits[0m [37mtest/e2e/storage/testsuites/volumelimits.go:249[0m ... skipping 16 lines ... test/e2e/framework/framework.go:188 May 13 12:31:03.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "volumelimits-5672" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits","total":45,"completed":11,"skipped":492,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes test/e2e/storage/framework/testsuite.go:51 May 13 12:31:03.987: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 38 lines ... May 13 12:29:58.199: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comtznb4] to have phase Bound May 13 12:29:58.308: INFO: PersistentVolumeClaim test.csi.azure.comtznb4 found but phase is Pending instead of Bound. May 13 12:30:00.418: INFO: PersistentVolumeClaim test.csi.azure.comtznb4 found but phase is Pending instead of Bound. 
May 13 12:30:02.527: INFO: PersistentVolumeClaim test.csi.azure.comtznb4 found and phase=Bound (4.327768776s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-48bm [1mSTEP[0m: Creating a pod to test atomic-volume-subpath May 13 12:30:02.854: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-48bm" in namespace "provisioning-6825" to be "Succeeded or Failed" May 13 12:30:02.963: INFO: Pod "pod-subpath-test-dynamicpv-48bm": Phase="Pending", Reason="", readiness=false. Elapsed: 108.714811ms May 13 12:30:05.073: INFO: Pod "pod-subpath-test-dynamicpv-48bm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218762404s May 13 12:30:07.183: INFO: Pod "pod-subpath-test-dynamicpv-48bm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328555147s May 13 12:30:09.294: INFO: Pod "pod-subpath-test-dynamicpv-48bm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.43968111s May 13 12:30:11.404: INFO: Pod "pod-subpath-test-dynamicpv-48bm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.549171381s May 13 12:30:13.514: INFO: Pod "pod-subpath-test-dynamicpv-48bm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.659612591s ... skipping 11 lines ... May 13 12:30:38.836: INFO: Pod "pod-subpath-test-dynamicpv-48bm": Phase="Running", Reason="", readiness=true. Elapsed: 35.982025548s May 13 12:30:40.946: INFO: Pod "pod-subpath-test-dynamicpv-48bm": Phase="Running", Reason="", readiness=true. Elapsed: 38.091637304s May 13 12:30:43.057: INFO: Pod "pod-subpath-test-dynamicpv-48bm": Phase="Running", Reason="", readiness=true. Elapsed: 40.202148436s May 13 12:30:45.166: INFO: Pod "pod-subpath-test-dynamicpv-48bm": Phase="Running", Reason="", readiness=false. Elapsed: 42.311815667s May 13 12:30:47.276: INFO: Pod "pod-subpath-test-dynamicpv-48bm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 44.421542971s [1mSTEP[0m: Saw pod success May 13 12:30:47.276: INFO: Pod "pod-subpath-test-dynamicpv-48bm" satisfied condition "Succeeded or Failed" May 13 12:30:47.385: INFO: Trying to get logs from node k8s-agentpool1-19417709-vmss000001 pod pod-subpath-test-dynamicpv-48bm container test-container-subpath-dynamicpv-48bm: <nil> [1mSTEP[0m: delete the pod May 13 12:30:47.638: INFO: Waiting for pod pod-subpath-test-dynamicpv-48bm to disappear May 13 12:30:47.746: INFO: Pod pod-subpath-test-dynamicpv-48bm no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-48bm May 13 12:30:47.746: INFO: Deleting pod "pod-subpath-test-dynamicpv-48bm" in namespace "provisioning-6825" ... skipping 23 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should support file as subpath [LinuxOnly] [90mtest/e2e/storage/testsuites/subpath.go:232[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":33,"completed":10,"skipped":656,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral[0m [1mshould support two pods which have the same volume definition[0m [37mtest/e2e/storage/testsuites/ephemeral.go:216[0m ... skipping 68 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral [90mtest/e2e/storage/framework/testsuite.go:50[0m should support two pods which have the same volume definition [90mtest/e2e/storage/testsuites/ephemeral.go:216[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition","total":33,"completed":10,"skipped":761,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO test/e2e/storage/framework/testsuite.go:51 May 13 12:31:36.756: INFO: Distro debian doesn't support ntfs -- skipping ... skipping 98 lines ... 
test/e2e/storage/external/external.go:262 [90m------------------------------[0m [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (filesystem volmode)] volumeMode[0m [1mshould fail to use a volume in a pod with mismatched mode [Slow][0m [37mtest/e2e/storage/testsuites/volumemode.go:299[0m [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client May 13 12:31:38.851: INFO: >>> kubeConfig: /root/tmp1431985631/kubeconfig/kubeconfig.westeurope.json [1mSTEP[0m: Building a namespace api object, basename volumemode [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail to use a volume in a pod with mismatched mode [Slow] test/e2e/storage/testsuites/volumemode.go:299 May 13 12:31:39.615: INFO: Creating resource for dynamic PV May 13 12:31:39.615: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(test.csi.azure.com) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass volumemode-1665-e2e-sck6hl2 [1mSTEP[0m: creating a claim May 13 12:31:39.838: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comkjqrq] to have phase Bound May 13 12:31:39.947: INFO: PersistentVolumeClaim test.csi.azure.comkjqrq found but phase is Pending instead of Bound. May 13 12:31:42.058: INFO: PersistentVolumeClaim test.csi.azure.comkjqrq found but phase is Pending instead of Bound. May 13 12:31:44.167: INFO: PersistentVolumeClaim test.csi.azure.comkjqrq found and phase=Bound (4.329634314s) [1mSTEP[0m: Creating pod [1mSTEP[0m: Waiting for the pod to fail May 13 12:31:46.826: INFO: Deleting pod "pod-7510ffbc-f3f0-46fe-acf9-cea61015460d" in namespace "volumemode-1665" May 13 12:31:46.936: INFO: Wait up to 5m0s for pod "pod-7510ffbc-f3f0-46fe-acf9-cea61015460d" to be fully deleted [1mSTEP[0m: Deleting pvc May 13 12:31:49.155: INFO: Deleting PersistentVolumeClaim "test.csi.azure.comkjqrq" May 13 12:31:49.265: INFO: Waiting up to 5m0s for PersistentVolume pvc-2dabd955-679d-4426-9fdc-96674124f90e to get deleted May 13 12:31:49.374: INFO: PersistentVolume pvc-2dabd955-679d-4426-9fdc-96674124f90e found and phase=Released (109.424976ms) ... skipping 14 lines ... 
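The "Waiting up to timeout=5m0s for PersistentVolumeClaims [...] to have phase Bound" lines above are another poll loop, this time on the claim phase. A minimal client-go sketch with illustrative names:

```go
// Package sketch: poll a PVC until it reaches the Bound phase.
package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPVCBound returns nil once the claim's phase is Bound.
func waitForPVCBound(cs kubernetes.Interface, ns, claim string, poll, timeout time.Duration) error {
	return wait.PollImmediate(poll, timeout, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), claim, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pvc.Status.Phase != corev1.ClaimBound {
			// Mirrors the "found but phase is Pending instead of Bound." lines above.
			fmt.Printf("PersistentVolumeClaim %s found but phase is %s instead of Bound.\n", claim, pvc.Status.Phase)
			return false, nil
		}
		return true, nil
	})
}
```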
[32m• [SLOW TEST:51.957 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (filesystem volmode)] volumeMode [90mtest/e2e/storage/framework/testsuite.go:50[0m should fail to use a volume in a pod with mismatched mode [Slow] [90mtest/e2e/storage/testsuites/volumemode.go:299[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]","total":33,"completed":11,"skipped":913,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (block volmode)] multiVolume [Slow][0m [1mshould access to two volumes with the same volume mode and retain data across pod recreation on the same node[0m [37mtest/e2e/storage/testsuites/multivolume.go:138[0m ... skipping 189 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with the same volume mode and retain data across pod recreation on the same node [90mtest/e2e/storage/testsuites/multivolume.go:138[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node","total":26,"completed":10,"skipped":995,"failed":1,"failures":["External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]"]} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (ext3)] volumes[0m [1mshould store data[0m [37mtest/e2e/storage/testsuites/volumes.go:161[0m ... skipping 126 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext3)] volumes [90mtest/e2e/storage/framework/testsuite.go:50[0m should store data [90mtest/e2e/storage/testsuites/volumes.go:161[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext3)] volumes should store data","total":34,"completed":13,"skipped":1017,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m May 13 12:33:52.826: INFO: Running AfterSuite actions on all nodes May 13 12:33:52.826: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 May 13 12:33:52.826: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 ... skipping 81 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should support restarting containers using directory as subpath [Slow] [90mtest/e2e/storage/testsuites/subpath.go:322[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]","total":33,"completed":11,"skipped":684,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy[0m [1m(OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents[0m [37mtest/e2e/storage/testsuites/fsgroupchangepolicy.go:216[0m ... skipping 137 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy [90mtest/e2e/storage/framework/testsuite.go:50[0m (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents [90mtest/e2e/storage/testsuites/fsgroupchangepolicy.go:216[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents","total":26,"completed":11,"skipped":999,"failed":1,"failures":["External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]"]} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m May 13 12:36:09.416: INFO: Running AfterSuite actions on all nodes May 13 12:36:09.416: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 May 13 12:36:09.416: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 ... skipping 109 lines ... 
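The (OnRootMismatch) fsgroupchangepolicy case above hinges on the pod-level security context: volume contents are only re-chowned when the volume's root does not already match the requested fsGroup. A short sketch of that security context, assuming client-go types (the helper name is made up):

```go
// Package sketch: build the pod security context exercised by the
// fsgroupchangepolicy (OnRootMismatch) specs above.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// onRootMismatchSecurityContext requests fsGroup ownership changes only when
// the top-level directory of the volume does not already match fsGroup.
func onRootMismatchSecurityContext(fsGroup int64) *corev1.PodSecurityContext {
	policy := corev1.FSGroupChangeOnRootMismatch
	return &corev1.PodSecurityContext{
		FSGroup:             &fsGroup,
		FSGroupChangePolicy: &policy,
	}
}
```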
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS] [90mtest/e2e/storage/testsuites/multivolume.go:323[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]","total":33,"completed":12,"skipped":696,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath test/e2e/storage/framework/testsuite.go:51 May 13 12:37:24.874: INFO: Distro debian doesn't support ntfs -- skipping ... skipping 117 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits [90mtest/e2e/storage/framework/testsuite.go:50[0m should support volume limits [Serial] [90mtest/e2e/storage/testsuites/volumelimits.go:127[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]","total":45,"completed":12,"skipped":517,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath test/e2e/storage/framework/testsuite.go:51 May 13 12:37:47.750: INFO: Distro debian doesn't support ntfs -- skipping ... skipping 172 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] provisioning [90mtest/e2e/storage/framework/testsuite.go:50[0m should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource] [90mtest/e2e/storage/testsuites/provisioning.go:208[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]","total":33,"completed":12,"skipped":950,"failed":0} [36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] subPath[0m [1mshould be able to unmount after the subpath directory is deleted [LinuxOnly][0m [37mtest/e2e/storage/testsuites/subpath.go:447[0m ... skipping 51 lines ... 
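The volumeLimits specs above rely on the per-node attachable-volume limit that each CSI driver publishes on its CSINode object. A sketch that prints those limits, assuming client-go and an illustrative helper name:

```go
// Package sketch: read the per-driver volume limits from CSINode objects,
// the values the volumeLimits specs above verify.
package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printCSINodeLimits prints the allocatable volume count reported by one driver on each node.
func printCSINodeLimits(cs kubernetes.Interface, driverName string) error {
	csiNodes, err := cs.StorageV1().CSINodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range csiNodes.Items {
		for _, d := range n.Spec.Drivers {
			if d.Name != driverName || d.Allocatable == nil || d.Allocatable.Count == nil {
				continue
			}
			fmt.Printf("node %s: driver %s allows %d volumes\n", n.Name, d.Name, *d.Allocatable.Count)
		}
	}
	return nil
}
```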
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should be able to unmount after the subpath directory is deleted [LinuxOnly] [90mtest/e2e/storage/testsuites/subpath.go:447[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":45,"completed":13,"skipped":536,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] ... skipping 104 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Inline-volume (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:242[0m [36mDriver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping[0m test/e2e/storage/external/external.go:262 [90m------------------------------[0m ... skipping 161 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] provisioning [90mtest/e2e/storage/framework/testsuite.go:50[0m should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource] [90mtest/e2e/storage/testsuites/provisioning.go:208[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]","total":33,"completed":13,"skipped":739,"failed":0} 
[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-stress test/e2e/storage/framework/testsuite.go:51 May 13 12:39:28.435: INFO: Driver test.csi.azure.com doesn't specify stress test options -- skipping ... skipping 189 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the single volume from pods on the same node [90mtest/e2e/storage/testsuites/multivolume.go:298[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node","total":33,"completed":13,"skipped":951,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy[0m [1m(Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents[0m [37mtest/e2e/storage/testsuites/fsgroupchangepolicy.go:216[0m ... skipping 113 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy [90mtest/e2e/storage/framework/testsuite.go:50[0m (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents [90mtest/e2e/storage/testsuites/fsgroupchangepolicy.go:216[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents","total":45,"completed":14,"skipped":752,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] ... skipping 56 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:280[0m [36mDriver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping[0m test/e2e/storage/external/external.go:262 [90m------------------------------[0m ... skipping 101 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] volumes [90mtest/e2e/storage/framework/testsuite.go:50[0m should store data [90mtest/e2e/storage/testsuites/volumes.go:161[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":33,"completed":14,"skipped":988,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow][0m [1mshould concurrently access the single volume from pods on the same node[0m [37mtest/e2e/storage/testsuites/multivolume.go:298[0m ... skipping 153 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the single volume from pods on the same node [90mtest/e2e/storage/testsuites/multivolume.go:298[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node","total":33,"completed":14,"skipped":956,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m May 13 12:42:01.691: INFO: Running AfterSuite actions on all nodes May 13 12:42:01.691: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 May 13 12:42:01.691: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 ... skipping 73 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (filesystem volmode)] volumeMode [90mtest/e2e/storage/framework/testsuite.go:50[0m should not mount / map unused volumes in a pod [LinuxOnly] [90mtest/e2e/storage/testsuites/volumemode.go:354[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":45,"completed":15,"skipped":939,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 May 13 12:42:13.595: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 170 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the single volume from pods on the same node [90mtest/e2e/storage/testsuites/multivolume.go:298[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node","total":45,"completed":16,"skipped":981,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath test/e2e/storage/framework/testsuite.go:51 May 13 12:43:45.580: INFO: Distro debian doesn't support ntfs -- skipping ... skipping 3 lines ... 
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
External Storage [Driver: test.csi.azure.com]
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  test/e2e/storage/framework/testsuite.go:50
    should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach]
    test/e2e/storage/testsuites/subpath.go:280
    Distro debian doesn't support ntfs -- skipping
    test/e2e/storage/framework/testsuite.go:127
------------------------------
... skipping 122 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:50
    should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
    test/e2e/storage/testsuites/multivolume.go:378
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]","total":33,"completed":15,"skipped":1017,"failed":0}
S
------------------------------
May 13 12:44:10.296: INFO: Running AfterSuite actions on all nodes
May 13 12:44:10.296: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2
May 13 12:44:10.296: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2
... skipping 15 lines ...
May 13 12:44:10.342: INFO: Running AfterSuite actions on node 1

Summarizing 1 Failure:

[Fail] External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] [Measurement] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
test/e2e/storage/testsuites/multivolume.go:129

Ran 91 of 7227 Specs in 3182.443 seconds
FAIL! -- 90 Passed | 1 Failed | 0 Pending | 7136 Skipped
Ginkgo ran 1 suite in 53m5.968516361s
Test Suite Failed
+ print_logs
+ sed -i s/disk.csi.azure.com/test.csi.azure.com/g deploy/example/storageclass-azuredisk-csi.yaml
+ bash ./hack/verify-examples.sh linux azurepubliccloud ephemeral test
begin to create deployment examples ...
storageclass.storage.k8s.io/managed-csi created
Applying config "deploy/example/deployment.yaml"
... skipping 80 lines ...
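[Editor's note] Each of the {"msg":"PASSED ..."} records interleaved with the output above is one JSON object emitted per finished spec, and the final summary is built from the same counters. A small, self-contained sketch of decoding such a record (struct and field names mirror the JSON keys in this log; the truncated msg string is just a placeholder):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // specResult mirrors the per-spec JSON records in the log above, e.g.
    // {"msg":"PASSED ...","total":33,"completed":15,"skipped":1017,"failed":0}.
    type specResult struct {
        Msg       string `json:"msg"`
        Total     int    `json:"total"`
        Completed int    `json:"completed"`
        Skipped   int    `json:"skipped"`
        Failed    int    `json:"failed"`
    }

    func main() {
        line := `{"msg":"PASSED External Storage [Driver: test.csi.azure.com] ...","total":33,"completed":15,"skipped":1017,"failed":0}`
        var r specResult
        if err := json.Unmarshal([]byte(line), &r); err != nil {
            panic(err)
        }
        fmt.Printf("completed=%d skipped=%d failed=%d\n", r.Completed, r.Skipped, r.Failed)
    }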
Platform: linux/amd64 Topology Key: topology.test.csi.azure.com/zone Streaming logs below: I0513 11:50:59.667310 1 azuredisk.go:171] driver userAgent: test.csi.azure.com/v1.19.0-9480cc27b0ee3e0de9a15e6967f197e793523987 gc/go1.18.1 (amd64-linux) e2e-test I0513 11:50:59.667651 1 azure_disk_utils.go:159] reading cloud config from secret kube-system/azure-cloud-provider W0513 11:50:59.685766 1 azure_disk_utils.go:166] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0513 11:50:59.685797 1 azure_disk_utils.go:171] could not read cloud config from secret kube-system/azure-cloud-provider I0513 11:50:59.685808 1 azure_disk_utils.go:181] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0513 11:50:59.685852 1 azure_disk_utils.go:189] read cloud config from file: /etc/kubernetes/azure.json successfully I0513 11:50:59.687567 1 azure_auth.go:245] Using AzurePublicCloud environment I0513 11:50:59.687599 1 azure_auth.go:96] azure: using managed identity extension to retrieve access token I0513 11:50:59.687605 1 azure_auth.go:102] azure: using User Assigned MSI ID to retrieve access token I0513 11:50:59.687669 1 azure_auth.go:113] azure: User Assigned MSI ID is client ID. Resource ID parsing error: %+vparsing failed for acb72a6f-de77-4cc8-9b84-00401d3cb401. Invalid resource Id format I0513 11:50:59.687715 1 azure.go:763] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 I0513 11:50:59.687773 1 azure_interfaceclient.go:70] Azure InterfacesClient (read ops) using rate limit config: QPS=6, bucket=20 I0513 11:50:59.687887 1 azure_interfaceclient.go:73] Azure InterfacesClient (write ops) using rate limit config: QPS=100, bucket=1000 I0513 11:50:59.687922 1 azure_vmsizeclient.go:68] Azure VirtualMachineSizesClient (read ops) using rate limit config: QPS=6, bucket=20 I0513 11:50:59.687928 1 azure_vmsizeclient.go:71] Azure VirtualMachineSizesClient (write ops) using rate limit config: QPS=100, bucket=1000 I0513 11:50:59.687944 1 azure_storageaccountclient.go:69] Azure StorageAccountClient (read ops) using rate limit config: QPS=6, bucket=20 ... skipping 156 lines ... 
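[Editor's note] The startup lines above show the controller first trying the kube-system/azure-cloud-provider secret and then falling back to the file named by AZURE_CREDENTIAL_FILE (defaulting to /etc/kubernetes/azure.json). A minimal client-go sketch of that fallback order; the helper name and the secret data key are assumptions for illustration, not the driver's actual code:

    package cloudconfig

    import (
        "context"
        "fmt"
        "os"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // loadCloudConfig prefers the in-cluster secret and falls back to a config file,
    // mirroring the "could not read cloud config from secret ... use default
    // AZURE_CREDENTIAL_FILE env var" sequence in the log above.
    func loadCloudConfig(ctx context.Context, cs kubernetes.Interface) ([]byte, error) {
        secret, err := cs.CoreV1().Secrets("kube-system").Get(ctx, "azure-cloud-provider", metav1.GetOptions{})
        if err == nil {
            if cfg, ok := secret.Data["cloud-config"]; ok { // data key is an assumption
                return cfg, nil
            }
        }
        path := os.Getenv("AZURE_CREDENTIAL_FILE")
        if path == "" {
            path = "/etc/kubernetes/azure.json" // default seen in the log
        }
        fmt.Printf("falling back to cloud config file %s\n", path)
        return os.ReadFile(path)
    }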
I0513 11:51:16.112081 1 controllerserver.go:174] begin to create azure disk(pvc-9a440515-9872-450e-88f4-b2e7fab5c603) account type(StandardSSD_LRS) rg(kubetest-s2gs5bqg) location(westeurope) size(5) diskZone() maxShares(0) I0513 11:51:16.112108 1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-9a440515-9872-450e-88f4-b2e7fab5c603 StorageAccountType:StandardSSD_LRS Size:5 I0513 11:51:16.347428 1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-e02d0e8f-b5d4-4f7d-a278-022cad5a8378 StorageAccountType:StandardSSD_LRS Size:5 I0513 11:51:16.347473 1 controllerserver.go:258] create azure disk(pvc-e02d0e8f-b5d4-4f7d-a278-022cad5a8378) account type(StandardSSD_LRS) rg(kubetest-s2gs5bqg) location(westeurope) size(5) tags(map[kubernetes.io-created-for-pv-name:pvc-e02d0e8f-b5d4-4f7d-a278-022cad5a8378 kubernetes.io-created-for-pvc-name:pvc-b2282 kubernetes.io-created-for-pvc-namespace:provisioning-2043]) successfully I0513 11:51:16.347512 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=2.415231072 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-e02d0e8f-b5d4-4f7d-a278-022cad5a8378" result_code="succeeded" I0513 11:51:16.347549 1 utils.go:84] GRPC response: {"volume":{"accessible_topology":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"capacity_bytes":5368709120,"content_source":{"Type":null},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-e02d0e8f-b5d4-4f7d-a278-022cad5a8378","csi.storage.k8s.io/pvc/name":"pvc-b2282","csi.storage.k8s.io/pvc/namespace":"provisioning-2043","requestedsizegib":"5"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-e02d0e8f-b5d4-4f7d-a278-022cad5a8378"}} I0513 11:51:16.588614 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-0c7e9614-ed76-46c4-b296-18a79b1b7276:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0c7e9614-ed76-46c4-b296-18a79b1b7276 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 11:51:16.601801 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-29acf24c-213b-42e0-a55a-4794454e0531:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-29acf24c-213b-42e0-a55a-4794454e0531 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 11:51:16.973821 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 11:51:16.973852 1 utils.go:78] GRPC request: 
{"node_id":"k8s-agentpool1-19417709-vmss000000","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-9d808fb7-0de2-482f-8e84-5c3ae2cd4dae","csi.storage.k8s.io/pvc/name":"test.csi.azure.com6j8rf","csi.storage.k8s.io/pvc/namespace":"provisioning-8335","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-9d808fb7-0de2-482f-8e84-5c3ae2cd4dae"} I0513 11:51:17.010780 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-9d808fb7-0de2-482f-8e84-5c3ae2cd4dae to node k8s-agentpool1-19417709-vmss000000. I0513 11:51:17.010847 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-9d808fb7-0de2-482f-8e84-5c3ae2cd4dae to node k8s-agentpool1-19417709-vmss000000 I0513 11:51:17.010877 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-9d808fb7-0de2-482f-8e84-5c3ae2cd4dae lun 0 to node k8s-agentpool1-19417709-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-9d808fb7-0de2-482f-8e84-5c3ae2cd4dae:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-9d808fb7-0de2-482f-8e84-5c3ae2cd4dae false 0})] I0513 11:51:17.010902 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-9d808fb7-0de2-482f-8e84-5c3ae2cd4dae:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-9d808fb7-0de2-482f-8e84-5c3ae2cd4dae false 0})]) I0513 11:51:17.202377 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-9d808fb7-0de2-482f-8e84-5c3ae2cd4dae:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-9d808fb7-0de2-482f-8e84-5c3ae2cd4dae false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 11:51:17.676609 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 11:51:17.676639 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000000","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-e02d0e8f-b5d4-4f7d-a278-022cad5a8378","csi.storage.k8s.io/pvc/name":"pvc-b2282","csi.storage.k8s.io/pvc/namespace":"provisioning-2043","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-e02d0e8f-b5d4-4f7d-a278-022cad5a8378"} I0513 11:51:17.715101 1 controllerserver.go:355] GetDiskLun returned: <nil>. 
Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-e02d0e8f-b5d4-4f7d-a278-022cad5a8378 to node k8s-agentpool1-19417709-vmss000000. I0513 11:51:17.715162 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-19417709-vmss000000, refreshing the cache(vmss: k8s-agentpool1-19417709-vmss, rg: kubetest-s2gs5bqg) I0513 11:51:17.766334 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-e02d0e8f-b5d4-4f7d-a278-022cad5a8378 to node k8s-agentpool1-19417709-vmss000000 I0513 11:51:18.637372 1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-9a440515-9872-450e-88f4-b2e7fab5c603 StorageAccountType:StandardSSD_LRS Size:5 ... skipping 24 lines ... I0513 11:51:26.713294 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000001","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-29acf24c-213b-42e0-a55a-4794454e0531","csi.storage.k8s.io/pvc/name":"test.csi.azure.comlbcc9","csi.storage.k8s.io/pvc/namespace":"snapshotting-7618","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-29acf24c-213b-42e0-a55a-4794454e0531"} I0513 11:51:26.758982 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-29acf24c-213b-42e0-a55a-4794454e0531 to node k8s-agentpool1-19417709-vmss000001. I0513 11:51:26.759038 1 azure_controller_common.go:453] azureDisk - find disk: lun 0 name pvc-29acf24c-213b-42e0-a55a-4794454e0531 uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-29acf24c-213b-42e0-a55a-4794454e0531 I0513 11:51:26.759048 1 controllerserver.go:375] Attach operation is successful. volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-29acf24c-213b-42e0-a55a-4794454e0531 is already attached to node k8s-agentpool1-19417709-vmss000001 at lun 0. 
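[Editor's note] Note the idempotent handling above: GetDiskLun finds the disk already attached at lun 0, so the controller skips the attach and simply reports the existing LUN back in publish_context (the {"publish_context":{"LUN":"0"}} response that follows). A rough sketch of that shape against the CSI Go bindings; the lookup/attach callbacks are hypothetical stand-ins, not the driver's real helpers:

    package controller

    import (
        "fmt"
        "strconv"

        "github.com/container-storage-interface/spec/lib/go/csi"
    )

    // publishVolume sketches the idempotent attach path seen in the log: if the disk
    // already has a LUN on the target node, return it; otherwise attach and return the new LUN.
    func publishVolume(
        diskURI, nodeID string,
        findLun func(diskURI, nodeID string) (int32, bool),
        attach func(diskURI, nodeID string) (int32, error),
    ) (*csi.ControllerPublishVolumeResponse, error) {
        if lun, ok := findLun(diskURI, nodeID); ok {
            // "Attach operation is successful. volume ... is already attached to node ... at lun 0."
            return &csi.ControllerPublishVolumeResponse{
                PublishContext: map[string]string{"LUN": strconv.Itoa(int(lun))},
            }, nil
        }
        lun, err := attach(diskURI, nodeID)
        if err != nil {
            return nil, fmt.Errorf("attach %s to %s: %w", diskURI, nodeID, err)
        }
        return &csi.ControllerPublishVolumeResponse{
            PublishContext: map[string]string{"LUN": strconv.Itoa(int(lun))},
        }, nil
    }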
I0513 11:51:26.759080 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=8.8001e-05 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-29acf24c-213b-42e0-a55a-4794454e0531" node="k8s-agentpool1-19417709-vmss000001" result_code="succeeded" I0513 11:51:26.759112 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0513 11:51:26.894469 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-845575d0-1ed1-4a07-a174-225e0812bc56:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-845575d0-1ed1-4a07-a174-225e0812bc56 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 11:51:26.934568 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-782de8a6-6fbf-4754-bc54-cdb69b37005a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-782de8a6-6fbf-4754-bc54-cdb69b37005a false 2}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-9a440515-9872-450e-88f4-b2e7fab5c603:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-9a440515-9872-450e-88f4-b2e7fab5c603 false 3}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-b93cb395-cf1c-4973-8ef3-0b95be604aa8:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-b93cb395-cf1c-4973-8ef3-0b95be604aa8 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 11:51:27.313111 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-9d808fb7-0de2-482f-8e84-5c3ae2cd4dae attached to node k8s-agentpool1-19417709-vmss000000. 
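[Editor's note] The "Observed Request Latency" entries are structured key/value log lines emitted when a controller RPC finishes: capture a start time, compute time.Since on exit, and attach the request name, resource IDs, and result code. A hedged sketch of that pattern with klog; any histogram/metrics sink the driver also updates is omitted here:

    package metrics

    import (
        "time"

        "k8s.io/klog/v2"
    )

    // observeRequestLatency logs a completed controller operation in the same
    // key/value shape as the "Observed Request Latency" lines above.
    func observeRequestLatency(start time.Time, request, resourceGroup, subscriptionID, source, volumeID, resultCode string) {
        klog.InfoS("Observed Request Latency",
            "latency_seconds", time.Since(start).Seconds(),
            "request", request,
            "resource_group", resourceGroup,
            "subscription_id", subscriptionID,
            "source", source,
            "volumeid", volumeID,
            "result_code", resultCode,
        )
    }

Deferring observeRequestLatency(time.Now(), ...) at the top of a handler evaluates time.Now() at entry, so the deferred call reports the full handler duration, which matches the per-call latencies seen in these lines.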
I0513 11:51:27.313153 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-9d808fb7-0de2-482f-8e84-5c3ae2cd4dae to node k8s-agentpool1-19417709-vmss000000 successfully I0513 11:51:27.313183 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=10.302393547 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-9d808fb7-0de2-482f-8e84-5c3ae2cd4dae" node="k8s-agentpool1-19417709-vmss000000" result_code="succeeded" I0513 11:51:27.313195 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0513 11:51:27.313243 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-e02d0e8f-b5d4-4f7d-a278-022cad5a8378 lun 1 to node k8s-agentpool1-19417709-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-e02d0e8f-b5d4-4f7d-a278-022cad5a8378:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e02d0e8f-b5d4-4f7d-a278-022cad5a8378 false 1})] I0513 11:51:27.313294 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-e02d0e8f-b5d4-4f7d-a278-022cad5a8378:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e02d0e8f-b5d4-4f7d-a278-022cad5a8378 false 1})]) I0513 11:51:27.476926 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-e02d0e8f-b5d4-4f7d-a278-022cad5a8378:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e02d0e8f-b5d4-4f7d-a278-022cad5a8378 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 11:51:34.417676 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 11:51:34.417703 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000001","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-29acf24c-213b-42e0-a55a-4794454e0531"} I0513 11:51:34.417874 1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-29acf24c-213b-42e0-a55a-4794454e0531 from node k8s-agentpool1-19417709-vmss000001 I0513 11:51:34.417904 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-19417709-vmss000001, refreshing the cache(vmss: k8s-agentpool1-19417709-vmss, rg: kubetest-s2gs5bqg) I0513 11:51:36.333082 1 utils.go:77] GRPC call: /csi.v1.Identity/GetPluginInfo I0513 11:51:36.333107 1 utils.go:78] GRPC request: {} ... skipping 159 lines ... 
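[Editor's note] The %!s(*provider.AttachDiskOptions=...) and %!v(MISSING) fragments in these attach/detach lines are not transcription damage; they are Go fmt diagnostics: applying %s to a value with no String method prints %!s(TYPE=value), and a verb with no matching argument prints %!v(MISSING). A minimal reproduction (the type name is a stand-in); go vet's printf check flags exactly this kind of format/argument mismatch:

    package main

    import "fmt"

    type attachDiskOptions struct { // stand-in for *provider.AttachDiskOptions
        diskName string
        lun      int32
    }

    func main() {
        opts := map[string]*attachDiskOptions{
            ".../disks/pvc-123": {diskName: "pvc-123", lun: 0},
        }
        // %s on a pointer without a String() method prints %!s(*main.attachDiskOptions=&{pvc-123 0}),
        // and the trailing %v has no argument, so it prints %!v(MISSING) -- the same shape as the
        // "attach disk list(...) returned with %!v(MISSING)" lines above.
        fmt.Printf("attach disk list(%s) returned with %v\n", opts)
    }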
I0513 11:52:53.015911 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-845575d0-1ed1-4a07-a174-225e0812bc56 from node k8s-agentpool1-19417709-vmss000001 successfully I0513 11:52:53.015940 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=68.443530772 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-845575d0-1ed1-4a07-a174-225e0812bc56" node="k8s-agentpool1-19417709-vmss000001" result_code="succeeded" I0513 11:52:53.015953 1 utils.go:84] GRPC response: {} I0513 11:52:53.016057 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-19417709-vmss000001, refreshing the cache(vmss: k8s-agentpool1-19417709-vmss, rg: kubetest-s2gs5bqg) I0513 11:52:53.101955 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-0c7e9614-ed76-46c4-b296-18a79b1b7276 lun 0 to node k8s-agentpool1-19417709-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-0c7e9614-ed76-46c4-b296-18a79b1b7276:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0c7e9614-ed76-46c4-b296-18a79b1b7276 false 0})] I0513 11:52:53.102017 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-0c7e9614-ed76-46c4-b296-18a79b1b7276:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0c7e9614-ed76-46c4-b296-18a79b1b7276 false 0})]) I0513 11:52:53.311844 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-0c7e9614-ed76-46c4-b296-18a79b1b7276:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0c7e9614-ed76-46c4-b296-18a79b1b7276 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 11:52:54.550529 1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-af64ce0d-af94-469c-a5ea-77665b3d7f8b StorageAccountType:StandardSSD_LRS Size:5 I0513 11:52:54.550583 1 controllerserver.go:258] create azure disk(pvc-af64ce0d-af94-469c-a5ea-77665b3d7f8b) account type(StandardSSD_LRS) rg(kubetest-s2gs5bqg) location(westeurope) size(5) tags(map[kubernetes.io-created-for-pv-name:pvc-af64ce0d-af94-469c-a5ea-77665b3d7f8b kubernetes.io-created-for-pvc-name:test.csi.azure.comq2qtz kubernetes.io-created-for-pvc-namespace:multivolume-1380]) successfully I0513 11:52:54.550629 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=2.421836663 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-af64ce0d-af94-469c-a5ea-77665b3d7f8b" result_code="succeeded" I0513 11:52:54.550644 1 
utils.go:84] GRPC response: {"volume":{"accessible_topology":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"capacity_bytes":5368709120,"content_source":{"Type":null},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-af64ce0d-af94-469c-a5ea-77665b3d7f8b","csi.storage.k8s.io/pvc/name":"test.csi.azure.comq2qtz","csi.storage.k8s.io/pvc/namespace":"multivolume-1380","requestedsizegib":"5"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-af64ce0d-af94-469c-a5ea-77665b3d7f8b"}} I0513 11:52:55.310681 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 11:52:55.310705 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000002","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-845575d0-1ed1-4a07-a174-225e0812bc56","csi.storage.k8s.io/pvc/name":"test.csi.azure.comqqqmf","csi.storage.k8s.io/pvc/namespace":"snapshotting-6374","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-845575d0-1ed1-4a07-a174-225e0812bc56"} ... skipping 4 lines ... I0513 11:52:55.970251 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-9d808fb7-0de2-482f-8e84-5c3ae2cd4dae from node k8s-agentpool1-19417709-vmss000000 successfully I0513 11:52:55.970292 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=40.817577233 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-9d808fb7-0de2-482f-8e84-5c3ae2cd4dae" node="k8s-agentpool1-19417709-vmss000000" result_code="succeeded" I0513 11:52:55.970314 1 utils.go:84] GRPC response: {} I0513 11:52:55.970406 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-19417709-vmss000000, refreshing the cache(vmss: k8s-agentpool1-19417709-vmss, rg: kubetest-s2gs5bqg) I0513 11:52:56.044068 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-29acf24c-213b-42e0-a55a-4794454e0531 lun 0 to node k8s-agentpool1-19417709-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-29acf24c-213b-42e0-a55a-4794454e0531:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-29acf24c-213b-42e0-a55a-4794454e0531 false 0})] I0513 11:52:56.044129 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-29acf24c-213b-42e0-a55a-4794454e0531:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-29acf24c-213b-42e0-a55a-4794454e0531 false 0})]) I0513 11:52:56.278362 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000000) - attach disk 
list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-29acf24c-213b-42e0-a55a-4794454e0531:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-29acf24c-213b-42e0-a55a-4794454e0531 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 11:52:56.823639 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 11:52:56.823669 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000002","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-af64ce0d-af94-469c-a5ea-77665b3d7f8b","csi.storage.k8s.io/pvc/name":"test.csi.azure.comq2qtz","csi.storage.k8s.io/pvc/namespace":"multivolume-1380","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-af64ce0d-af94-469c-a5ea-77665b3d7f8b"} I0513 11:52:56.852049 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-af64ce0d-af94-469c-a5ea-77665b3d7f8b to node k8s-agentpool1-19417709-vmss000002. I0513 11:52:56.852116 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-af64ce0d-af94-469c-a5ea-77665b3d7f8b to node k8s-agentpool1-19417709-vmss000002 I0513 11:52:58.201881 1 azure_controller_vmss.go:210] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-0c7e9614-ed76-46c4-b296-18a79b1b7276:pvc-0c7e9614-ed76-46c4-b296-18a79b1b7276]) returned with <nil> I0513 11:52:58.201936 1 azure_controller_common.go:365] azureDisk - detach disk(pvc-782de8a6-6fbf-4754-bc54-cdb69b37005a, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-782de8a6-6fbf-4754-bc54-cdb69b37005a) succeeded ... skipping 23 lines ... I0513 11:53:21.714178 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-29acf24c-213b-42e0-a55a-4794454e0531 attached to node k8s-agentpool1-19417709-vmss000000. 
I0513 11:53:21.714214 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-29acf24c-213b-42e0-a55a-4794454e0531 to node k8s-agentpool1-19417709-vmss000000 successfully I0513 11:53:21.714254 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=62.400200905 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-29acf24c-213b-42e0-a55a-4794454e0531" node="k8s-agentpool1-19417709-vmss000000" result_code="succeeded" I0513 11:53:21.714269 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0513 11:53:21.714327 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b12ca56a-cae7-4ed9-a919-e9a94fdca8fd lun 1 to node k8s-agentpool1-19417709-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-b12ca56a-cae7-4ed9-a919-e9a94fdca8fd:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-b12ca56a-cae7-4ed9-a919-e9a94fdca8fd false 1})] I0513 11:53:21.714390 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-b12ca56a-cae7-4ed9-a919-e9a94fdca8fd:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-b12ca56a-cae7-4ed9-a919-e9a94fdca8fd false 1})]) I0513 11:53:21.885444 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-b12ca56a-cae7-4ed9-a919-e9a94fdca8fd:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-b12ca56a-cae7-4ed9-a919-e9a94fdca8fd false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 11:53:23.560168 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-0c7e9614-ed76-46c4-b296-18a79b1b7276 attached to node k8s-agentpool1-19417709-vmss000001. 
I0513 11:53:23.560212 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-0c7e9614-ed76-46c4-b296-18a79b1b7276 to node k8s-agentpool1-19417709-vmss000001 successfully I0513 11:53:23.560251 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=64.350946758 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-0c7e9614-ed76-46c4-b296-18a79b1b7276" node="k8s-agentpool1-19417709-vmss000001" result_code="succeeded" I0513 11:53:23.560268 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0513 11:53:23.560380 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-782de8a6-6fbf-4754-bc54-cdb69b37005a lun 1 to node k8s-agentpool1-19417709-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-782de8a6-6fbf-4754-bc54-cdb69b37005a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-782de8a6-6fbf-4754-bc54-cdb69b37005a false 1})] I0513 11:53:23.560438 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-782de8a6-6fbf-4754-bc54-cdb69b37005a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-782de8a6-6fbf-4754-bc54-cdb69b37005a false 1})]) ... skipping 18 lines ... 
I0513 11:53:23.793314 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-0c7e9614-ed76-46c4-b296-18a79b1b7276 from node k8s-agentpool1-19417709-vmss000002 successfully I0513 11:53:23.793343 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=66.575381029 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-0c7e9614-ed76-46c4-b296-18a79b1b7276" node="k8s-agentpool1-19417709-vmss000002" result_code="succeeded" I0513 11:53:23.793355 1 utils.go:84] GRPC response: {} I0513 11:53:23.793401 1 azure_controller_common.go:341] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-9a440515-9872-450e-88f4-b2e7fab5c603 from node k8s-agentpool1-19417709-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-9a440515-9872-450e-88f4-b2e7fab5c603:pvc-9a440515-9872-450e-88f4-b2e7fab5c603] E0513 11:53:23.793478 1 azure_controller_vmss.go:171] detach azure disk on node(k8s-agentpool1-19417709-vmss000002): disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-9a440515-9872-450e-88f4-b2e7fab5c603:pvc-9a440515-9872-450e-88f4-b2e7fab5c603]) not found I0513 11:53:23.793499 1 azure_controller_vmss.go:197] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-9a440515-9872-450e-88f4-b2e7fab5c603:pvc-9a440515-9872-450e-88f4-b2e7fab5c603]) I0513 11:53:23.794261 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-782de8a6-6fbf-4754-bc54-cdb69b37005a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-782de8a6-6fbf-4754-bc54-cdb69b37005a false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 11:53:26.528883 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 11:53:26.528910 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000001","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-9a440515-9872-450e-88f4-b2e7fab5c603","csi.storage.k8s.io/pvc/name":"test.csi.azure.comtwcbs","csi.storage.k8s.io/pvc/namespace":"multivolume-6238","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-9a440515-9872-450e-88f4-b2e7fab5c603"} I0513 11:53:26.558390 1 controllerserver.go:355] GetDiskLun returned: <nil>. 
Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-9a440515-9872-450e-88f4-b2e7fab5c603 to node k8s-agentpool1-19417709-vmss000001. I0513 11:53:26.558441 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-19417709-vmss000001, refreshing the cache(vmss: k8s-agentpool1-19417709-vmss, rg: kubetest-s2gs5bqg) I0513 11:53:26.614124 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-9a440515-9872-450e-88f4-b2e7fab5c603 to node k8s-agentpool1-19417709-vmss000001 I0513 11:53:29.482357 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume ... skipping 37 lines ... I0513 11:53:39.174397 1 azure_controller_common.go:365] azureDisk - detach disk(pvc-b93cb395-cf1c-4973-8ef3-0b95be604aa8, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b93cb395-cf1c-4973-8ef3-0b95be604aa8) succeeded I0513 11:53:39.174452 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b93cb395-cf1c-4973-8ef3-0b95be604aa8 from node k8s-agentpool1-19417709-vmss000002 successfully I0513 11:53:39.174481 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=46.343225166 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b93cb395-cf1c-4973-8ef3-0b95be604aa8" node="k8s-agentpool1-19417709-vmss000002" result_code="succeeded" I0513 11:53:39.174493 1 utils.go:84] GRPC response: {} I0513 11:53:39.174591 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-845575d0-1ed1-4a07-a174-225e0812bc56 lun 0 to node k8s-agentpool1-19417709-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-845575d0-1ed1-4a07-a174-225e0812bc56:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-845575d0-1ed1-4a07-a174-225e0812bc56 false 0}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-af64ce0d-af94-469c-a5ea-77665b3d7f8b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-af64ce0d-af94-469c-a5ea-77665b3d7f8b false 1})] I0513 11:53:39.174655 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-845575d0-1ed1-4a07-a174-225e0812bc56:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-845575d0-1ed1-4a07-a174-225e0812bc56 false 0}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-af64ce0d-af94-469c-a5ea-77665b3d7f8b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-af64ce0d-af94-469c-a5ea-77665b3d7f8b false 1})]) I0513 11:53:39.228539 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): 
vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-1c4608aa-e382-4385-817f-0d76f7b18385:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-1c4608aa-e382-4385-817f-0d76f7b18385 false 3}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-9a440515-9872-450e-88f4-b2e7fab5c603:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-9a440515-9872-450e-88f4-b2e7fab5c603 false 2})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 11:53:39.448059 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-845575d0-1ed1-4a07-a174-225e0812bc56:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-845575d0-1ed1-4a07-a174-225e0812bc56 false 0}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-af64ce0d-af94-469c-a5ea-77665b3d7f8b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-af64ce0d-af94-469c-a5ea-77665b3d7f8b false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 11:53:52.885225 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 11:53:52.885251 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000000","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b12ca56a-cae7-4ed9-a919-e9a94fdca8fd"} I0513 11:53:52.885355 1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b12ca56a-cae7-4ed9-a919-e9a94fdca8fd from node k8s-agentpool1-19417709-vmss000000 I0513 11:53:53.544998 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 11:53:53.545021 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b93cb395-cf1c-4973-8ef3-0b95be604aa8"} I0513 11:53:53.545095 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b93cb395-cf1c-4973-8ef3-0b95be604aa8) ... skipping 62 lines ... I0513 11:54:22.849400 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 11:54:22.849426 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000002","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-732c7316-5ac3-4bfb-9793-31d7957c306f","csi.storage.k8s.io/pvc/name":"pvc-5c54q","csi.storage.k8s.io/pvc/namespace":"snapshotting-6374","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-732c7316-5ac3-4bfb-9793-31d7957c306f"} I0513 11:54:22.876843 1 controllerserver.go:355] GetDiskLun returned: <nil>. 
Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-732c7316-5ac3-4bfb-9793-31d7957c306f to node k8s-agentpool1-19417709-vmss000002. I0513 11:54:22.876896 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-732c7316-5ac3-4bfb-9793-31d7957c306f to node k8s-agentpool1-19417709-vmss000002 I0513 11:54:22.876921 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-732c7316-5ac3-4bfb-9793-31d7957c306f lun 2 to node k8s-agentpool1-19417709-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-732c7316-5ac3-4bfb-9793-31d7957c306f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-732c7316-5ac3-4bfb-9793-31d7957c306f false 2})] I0513 11:54:22.876952 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-732c7316-5ac3-4bfb-9793-31d7957c306f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-732c7316-5ac3-4bfb-9793-31d7957c306f false 2})]) I0513 11:54:23.118805 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-732c7316-5ac3-4bfb-9793-31d7957c306f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-732c7316-5ac3-4bfb-9793-31d7957c306f false 2})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 11:54:24.572994 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 11:54:24.573023 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000002","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-845575d0-1ed1-4a07-a174-225e0812bc56"} I0513 11:54:24.573154 1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-845575d0-1ed1-4a07-a174-225e0812bc56 from node k8s-agentpool1-19417709-vmss000002 I0513 11:54:24.573191 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-19417709-vmss000002, refreshing the cache(vmss: k8s-agentpool1-19417709-vmss, rg: kubetest-s2gs5bqg) I0513 11:54:32.951340 1 azure_controller_vmss.go:210] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000000) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-b12ca56a-cae7-4ed9-a919-e9a94fdca8fd:pvc-b12ca56a-cae7-4ed9-a919-e9a94fdca8fd]) returned with <nil> I0513 11:54:32.951394 1 azure_controller_common.go:365] azureDisk - detach disk(pvc-b12ca56a-cae7-4ed9-a919-e9a94fdca8fd, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b12ca56a-cae7-4ed9-a919-e9a94fdca8fd) succeeded ... skipping 10 lines ... 
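[Editor's note] The recurring "Couldn't find VMSS VM with nodeName ..., refreshing the cache(vmss: ..., rg: ...)" lines reflect a read-through cache in front of the VMSS API: on a miss or an expired entry, the controller re-lists the scale set and repopulates the cache before continuing with the attach or detach. A generic refresh-on-miss sketch under that assumption, not the driver's actual cache implementation:

    package vmcache

    import (
        "fmt"
        "sync"
        "time"
    )

    type cacheEntry struct {
        value   string
        expires time.Time
    }

    // vmssVMCache is an illustrative TTL cache keyed by node name.
    type vmssVMCache struct {
        mu      sync.Mutex
        ttl     time.Duration
        entries map[string]cacheEntry
        list    func() (map[string]string, error) // e.g. a VMSS VM list call, one instance ID per node
    }

    // get returns the cached entry, refreshing the whole cache on a miss or expiry,
    // which is the behaviour behind "Couldn't find VMSS VM ... refreshing the cache".
    func (c *vmssVMCache) get(nodeName string) (string, error) {
        c.mu.Lock()
        defer c.mu.Unlock()
        if e, ok := c.entries[nodeName]; ok && time.Now().Before(e.expires) {
            return e.value, nil
        }
        fresh, err := c.list()
        if err != nil {
            return "", err
        }
        exp := time.Now().Add(c.ttl)
        c.entries = make(map[string]cacheEntry, len(fresh))
        for k, v := range fresh {
            c.entries[k] = cacheEntry{value: v, expires: exp}
        }
        e, ok := c.entries[nodeName]
        if !ok {
            return "", fmt.Errorf("vmss vm %q not found after refresh", nodeName)
        }
        return e.value, nil
    }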
I0513 11:54:33.205177 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-732c7316-5ac3-4bfb-9793-31d7957c306f to node k8s-agentpool1-19417709-vmss000002 successfully I0513 11:54:33.205209 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=10.328357879 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-732c7316-5ac3-4bfb-9793-31d7957c306f" node="k8s-agentpool1-19417709-vmss000002" result_code="succeeded" I0513 11:54:33.205216 1 azure_controller_common.go:341] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-845575d0-1ed1-4a07-a174-225e0812bc56 from node k8s-agentpool1-19417709-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-845575d0-1ed1-4a07-a174-225e0812bc56:pvc-845575d0-1ed1-4a07-a174-225e0812bc56] I0513 11:54:33.205223 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"2"}} I0513 11:54:33.205271 1 azure_controller_vmss.go:162] azureDisk - detach disk: name pvc-845575d0-1ed1-4a07-a174-225e0812bc56 uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-845575d0-1ed1-4a07-a174-225e0812bc56 I0513 11:54:33.205282 1 azure_controller_vmss.go:197] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-845575d0-1ed1-4a07-a174-225e0812bc56:pvc-845575d0-1ed1-4a07-a174-225e0812bc56]) I0513 11:54:33.210661 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-82dd5b64-3a91-4a0b-8680-f7deb33e5443:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-82dd5b64-3a91-4a0b-8680-f7deb33e5443 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 11:54:35.054185 1 azure_controller_vmss.go:210] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-0c7e9614-ed76-46c4-b296-18a79b1b7276:pvc-0c7e9614-ed76-46c4-b296-18a79b1b7276]) returned with <nil> I0513 11:54:35.054247 1 azure_controller_common.go:365] azureDisk - detach disk(pvc-0c7e9614-ed76-46c4-b296-18a79b1b7276, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-0c7e9614-ed76-46c4-b296-18a79b1b7276) succeeded I0513 11:54:35.054261 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-0c7e9614-ed76-46c4-b296-18a79b1b7276 from node k8s-agentpool1-19417709-vmss000001 successfully I0513 11:54:35.054295 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=59.12240704 
request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-0c7e9614-ed76-46c4-b296-18a79b1b7276" node="k8s-agentpool1-19417709-vmss000001" result_code="succeeded" I0513 11:54:35.054308 1 utils.go:84] GRPC response: {} I0513 11:54:35.106155 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume ... skipping 9 lines ... I0513 11:54:37.503964 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 11:54:37.503983 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000001","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-782de8a6-6fbf-4754-bc54-cdb69b37005a"} I0513 11:54:37.504063 1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-782de8a6-6fbf-4754-bc54-cdb69b37005a from node k8s-agentpool1-19417709-vmss000001 I0513 11:54:38.746217 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 11:54:38.746242 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-0c7e9614-ed76-46c4-b296-18a79b1b7276"} I0513 11:54:38.746322 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-0c7e9614-ed76-46c4-b296-18a79b1b7276) I0513 11:54:38.746334 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-0c7e9614-ed76-46c4-b296-18a79b1b7276) returned with failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-0c7e9614-ed76-46c4-b296-18a79b1b7276) since it's in attaching or detaching state I0513 11:54:38.746390 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=2.5001e-05 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-0c7e9614-ed76-46c4-b296-18a79b1b7276" result_code="failed" E0513 11:54:38.746403 1 utils.go:82] GRPC error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-0c7e9614-ed76-46c4-b296-18a79b1b7276) since it's in attaching or detaching state I0513 11:54:39.849999 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteSnapshot I0513 11:54:39.850032 1 utils.go:78] GRPC request: {"snapshot_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/snapshots/snapshot-150e95bf-2836-4e18-8ab9-4b84dfc737ab"} I0513 11:54:39.850128 1 controllerserver.go:899] begin to delete snapshot(snapshot-150e95bf-2836-4e18-8ab9-4b84dfc737ab) under rg(kubetest-s2gs5bqg) I0513 11:54:45.200813 1 controllerserver.go:904] delete 
snapshot(snapshot-150e95bf-2836-4e18-8ab9-4b84dfc737ab) under rg(kubetest-s2gs5bqg) successfully I0513 11:54:45.200868 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=5.350715218 request="azuredisk_csi_driver_controller_delete_snapshot" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" snapshot_id="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/snapshots/snapshot-150e95bf-2836-4e18-8ab9-4b84dfc737ab" result_code="succeeded" I0513 11:54:45.200889 1 utils.go:84] GRPC response: {} ... skipping 147 lines ... I0513 11:55:16.117214 1 azure_controller_common.go:365] azureDisk - detach disk(pvc-1c4608aa-e382-4385-817f-0d76f7b18385, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-1c4608aa-e382-4385-817f-0d76f7b18385) succeeded I0513 11:55:16.117221 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-1c4608aa-e382-4385-817f-0d76f7b18385 from node k8s-agentpool1-19417709-vmss000001 successfully I0513 11:55:16.117239 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=28.235440013 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-1c4608aa-e382-4385-817f-0d76f7b18385" node="k8s-agentpool1-19417709-vmss000001" result_code="succeeded" I0513 11:55:16.117247 1 utils.go:84] GRPC response: {} I0513 11:55:16.117273 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-1cecea8a-770b-444c-906a-3bc65ba6431f lun 0 to node k8s-agentpool1-19417709-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-1cecea8a-770b-444c-906a-3bc65ba6431f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-1cecea8a-770b-444c-906a-3bc65ba6431f false 0})] I0513 11:55:16.117292 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-1cecea8a-770b-444c-906a-3bc65ba6431f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-1cecea8a-770b-444c-906a-3bc65ba6431f false 0})]) I0513 11:55:16.310392 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-1cecea8a-770b-444c-906a-3bc65ba6431f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-1cecea8a-770b-444c-906a-3bc65ba6431f false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 11:55:17.264150 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 11:55:17.264180 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-af64ce0d-af94-469c-a5ea-77665b3d7f8b"} I0513 
11:55:17.264257 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-af64ce0d-af94-469c-a5ea-77665b3d7f8b) I0513 11:55:17.434120 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 11:55:17.434145 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-1c4608aa-e382-4385-817f-0d76f7b18385"} I0513 11:55:17.434228 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-1c4608aa-e382-4385-817f-0d76f7b18385) ... skipping 78 lines ... I0513 11:55:40.128568 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-82dd5b64-3a91-4a0b-8680-f7deb33e5443 from node k8s-agentpool1-19417709-vmss000000 successfully I0513 11:55:40.128606 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=25.440923764 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-82dd5b64-3a91-4a0b-8680-f7deb33e5443" node="k8s-agentpool1-19417709-vmss000000" result_code="succeeded" I0513 11:55:40.128615 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-19417709-vmss000000, refreshing the cache(vmss: k8s-agentpool1-19417709-vmss, rg: kubetest-s2gs5bqg) I0513 11:55:40.128623 1 utils.go:84] GRPC response: {} I0513 11:55:40.201042 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-60ace21c-9402-48c8-930a-79980d2d72ea lun 0 to node k8s-agentpool1-19417709-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-60ace21c-9402-48c8-930a-79980d2d72ea:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-60ace21c-9402-48c8-930a-79980d2d72ea false 0}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-af0d4fbf-c1d1-4c5c-8ea4-f31f4dd5161f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-af0d4fbf-c1d1-4c5c-8ea4-f31f4dd5161f false 1})] I0513 11:55:40.201103 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-60ace21c-9402-48c8-930a-79980d2d72ea:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-60ace21c-9402-48c8-930a-79980d2d72ea false 0}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-af0d4fbf-c1d1-4c5c-8ea4-f31f4dd5161f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-af0d4fbf-c1d1-4c5c-8ea4-f31f4dd5161f false 1})]) I0513 11:55:40.259365 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk 
list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-435ab9c5-1ee1-4ea8-9ded-691503086ca1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-435ab9c5-1ee1-4ea8-9ded-691503086ca1 false 0}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-8d476ce2-2562-4984-843d-5c560619f911:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8d476ce2-2562-4984-843d-5c560619f911 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 11:55:40.432358 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-60ace21c-9402-48c8-930a-79980d2d72ea:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-60ace21c-9402-48c8-930a-79980d2d72ea false 0}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-af0d4fbf-c1d1-4c5c-8ea4-f31f4dd5161f:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-af0d4fbf-c1d1-4c5c-8ea4-f31f4dd5161f false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 11:55:40.858611 1 azure_managedDiskController.go:303] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-782de8a6-6fbf-4754-bc54-cdb69b37005a I0513 11:55:40.858645 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-782de8a6-6fbf-4754-bc54-cdb69b37005a) returned with <nil> I0513 11:55:40.858674 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=6.831455252 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-782de8a6-6fbf-4754-bc54-cdb69b37005a" result_code="succeeded" I0513 11:55:40.858690 1 utils.go:84] GRPC response: {} I0513 11:55:41.491797 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 11:55:41.491826 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000002","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-82dd5b64-3a91-4a0b-8680-f7deb33e5443","csi.storage.k8s.io/pvc/name":"test.csi.azure.com8n4cr","csi.storage.k8s.io/pvc/namespace":"fsgroupchangepolicy-702","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-82dd5b64-3a91-4a0b-8680-f7deb33e5443"} ... skipping 3 lines ... 
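The `%!s(*provider.AttachDiskOptions=...)` and `%!v(MISSING)` tokens in the attach/detach lines above are not Azure errors; they are Go fmt error annotations coming from the driver's own log statements: a `map[string]*AttachDiskOptions` value (and a `*retry.Error`) reaches a `%s` verb that has no string form for it, and the format string has one more verb than it has arguments. A minimal, self-contained sketch that reproduces annotations of the same shape (the struct and format strings are stand-ins, not the driver's actual code):

```go
package main

import "fmt"

// attachDiskOptions is an illustrative stand-in for provider.AttachDiskOptions.
type attachDiskOptions struct {
	cachingMode string
	diskName    string
	writeAccel  bool
	lun         int32
}

func main() {
	diskMap := map[string]*attachDiskOptions{
		"/subscriptions/<sub>/disks/pvc-123": {"ReadOnly", "pvc-123", false, 0},
	}

	// %s has no string form for *attachDiskOptions, so fmt renders the value as
	// "%!s(*main.attachDiskOptions=&{ReadOnly pvc-123 false 0})".
	fmt.Printf("attach disk list(%s)\n", diskMap)

	// One operand short for two verbs: the trailing %v prints as "%!v(MISSING)".
	fmt.Printf("attach disk list(%s) returned with %v\n", diskMap)
}
```

go vet's printf check would flag both calls, which is how this kind of logging slip is usually caught; the annotations are cosmetic and do not affect the attach/detach result.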
I0513 11:55:41.561817 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-1cecea8a-770b-444c-906a-3bc65ba6431f to node k8s-agentpool1-19417709-vmss000001 successfully I0513 11:55:41.561854 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=30.474492086 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-1cecea8a-770b-444c-906a-3bc65ba6431f" node="k8s-agentpool1-19417709-vmss000001" result_code="succeeded" I0513 11:55:41.561875 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0513 11:55:41.600451 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-82dd5b64-3a91-4a0b-8680-f7deb33e5443 to node k8s-agentpool1-19417709-vmss000002 I0513 11:55:41.600501 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-88d8d2de-0632-436d-a1a1-ab049189109c lun 1 to node k8s-agentpool1-19417709-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-88d8d2de-0632-436d-a1a1-ab049189109c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-88d8d2de-0632-436d-a1a1-ab049189109c false 1})] I0513 11:55:41.600535 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-88d8d2de-0632-436d-a1a1-ab049189109c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-88d8d2de-0632-436d-a1a1-ab049189109c false 1})]) I0513 11:55:41.793421 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-88d8d2de-0632-436d-a1a1-ab049189109c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-88d8d2de-0632-436d-a1a1-ab049189109c false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 11:55:42.345301 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 11:55:42.345330 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-732c7316-5ac3-4bfb-9793-31d7957c306f"} I0513 11:55:42.345411 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-732c7316-5ac3-4bfb-9793-31d7957c306f) I0513 11:55:42.747864 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 11:55:42.747893 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-0c7e9614-ed76-46c4-b296-18a79b1b7276"} I0513 11:55:42.747979 1 controllerserver.go:299] deleting azure 
disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-0c7e9614-ed76-46c4-b296-18a79b1b7276) ... skipping 31 lines ... I0513 11:55:50.587518 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000000","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-60ace21c-9402-48c8-930a-79980d2d72ea","csi.storage.k8s.io/pvc/name":"test.csi.azure.combmn82","csi.storage.k8s.io/pvc/namespace":"multivolume-3848","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-60ace21c-9402-48c8-930a-79980d2d72ea"} I0513 11:55:50.625253 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-60ace21c-9402-48c8-930a-79980d2d72ea to node k8s-agentpool1-19417709-vmss000000. I0513 11:55:50.625316 1 azure_controller_common.go:453] azureDisk - find disk: lun 0 name pvc-60ace21c-9402-48c8-930a-79980d2d72ea uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-60ace21c-9402-48c8-930a-79980d2d72ea I0513 11:55:50.625327 1 controllerserver.go:375] Attach operation is successful. volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-60ace21c-9402-48c8-930a-79980d2d72ea is already attached to node k8s-agentpool1-19417709-vmss000000 at lun 0. 
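The DeleteVolume for pvc-0c7e9614 that was rejected at 11:54:38 with "failed to delete disk(...) since it's in attaching or detaching state" is the same volume being deleted again here at 11:55:42, after its detach finished; the provisioner sidecar treats that error as transient and retries with backoff. A rough sketch of that retry behaviour against the controller endpoint, with illustrative intervals and error matching (not the sidecar's actual implementation):

```go
package main

import (
	"context"
	"fmt"
	"strings"
	"time"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
)

// deleteVolumeWithRetry keeps calling DeleteVolume until the driver stops
// reporting the disk as attaching/detaching or the context expires.
func deleteVolumeWithRetry(ctx context.Context, conn *grpc.ClientConn, volumeID string) error {
	client := csi.NewControllerClient(conn)
	for {
		_, err := client.DeleteVolume(ctx, &csi.DeleteVolumeRequest{VolumeId: volumeID})
		if err == nil {
			return nil
		}
		if !strings.Contains(err.Error(), "attaching or detaching state") {
			return fmt.Errorf("DeleteVolume(%s): %w", volumeID, err)
		}
		// Transient: the detach has not completed yet, so wait and try again.
		select {
		case <-time.After(10 * time.Second):
		case <-ctx.Done():
			return ctx.Err()
		}
	}
}

func main() {} // connection wiring to the driver's socket is omitted
```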
I0513 11:55:50.625378 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=0.000103801 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-60ace21c-9402-48c8-930a-79980d2d72ea" node="k8s-agentpool1-19417709-vmss000000" result_code="succeeded" I0513 11:55:50.625401 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0513 11:55:50.651485 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-82dd5b64-3a91-4a0b-8680-f7deb33e5443:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-82dd5b64-3a91-4a0b-8680-f7deb33e5443 false 2})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 11:55:50.907459 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume I0513 11:55:50.907488 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":5368709120},"name":"pvc-8bf2cd9b-c92e-4a8d-ab81-a94e7c9d4817","parameters":{"csi.storage.k8s.io/pv/name":"pvc-8bf2cd9b-c92e-4a8d-ab81-a94e7c9d4817","csi.storage.k8s.io/pvc/name":"inline-volume-tester-5md7z-my-volume-0","csi.storage.k8s.io/pvc/namespace":"ephemeral-9446"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]} I0513 11:55:50.907627 1 controllerserver.go:174] begin to create azure disk(pvc-8bf2cd9b-c92e-4a8d-ab81-a94e7c9d4817) account type(StandardSSD_LRS) rg(kubetest-s2gs5bqg) location(westeurope) size(5) diskZone() maxShares(0) I0513 11:55:50.907643 1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-8bf2cd9b-c92e-4a8d-ab81-a94e7c9d4817 StorageAccountType:StandardSSD_LRS Size:5 I0513 11:55:51.923045 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-88d8d2de-0632-436d-a1a1-ab049189109c attached to node k8s-agentpool1-19417709-vmss000001. I0513 11:55:51.923092 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-88d8d2de-0632-436d-a1a1-ab049189109c to node k8s-agentpool1-19417709-vmss000001 successfully ... skipping 50 lines ... 
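The publish call just above completes in about 0.1 ms because GetDiskLun finds pvc-60ace21c already at LUN 0 on the target node, so the driver answers without touching the VM; ControllerPublishVolume must be idempotent, and the same fast path explains every "is already attached ... at lun N" line in this log. A condensed sketch of that shape, with getDiskLun and attachDisk as hypothetical stand-ins for the Azure calls:

```go
package main

import (
	"context"
	"strconv"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// controller is a stand-in for the driver; the two function fields hide the
// actual Azure API interactions.
type controller struct {
	getDiskLun func(ctx context.Context, diskURI, node string) (int32, error)
	attachDisk func(ctx context.Context, diskURI, node string) (int32, error)
}

func (c *controller) ControllerPublishVolume(ctx context.Context, req *csi.ControllerPublishVolumeRequest) (*csi.ControllerPublishVolumeResponse, error) {
	diskURI, node := req.GetVolumeId(), req.GetNodeId()

	// Fast path: the disk already has a LUN on this VM, so return it directly
	// instead of issuing another scale-set update.
	if lun, err := c.getDiskLun(ctx, diskURI, node); err == nil {
		return &csi.ControllerPublishVolumeResponse{
			PublishContext: map[string]string{"LUN": strconv.Itoa(int(lun))},
		}, nil
	}

	lun, err := c.attachDisk(ctx, diskURI, node) // issues the VM update and waits
	if err != nil {
		return nil, status.Errorf(codes.Internal, "attach %s to node %s: %v", diskURI, node, err)
	}
	return &csi.ControllerPublishVolumeResponse{
		PublishContext: map[string]string{"LUN": strconv.Itoa(int(lun))},
	}, nil
}

func main() {}
```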
I0513 11:56:15.023554 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-88d8d2de-0632-436d-a1a1-ab049189109c from node k8s-agentpool1-19417709-vmss000001 successfully I0513 11:56:15.023587 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.443917286 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-88d8d2de-0632-436d-a1a1-ab049189109c" node="k8s-agentpool1-19417709-vmss000001" result_code="succeeded" I0513 11:56:15.023600 1 utils.go:84] GRPC response: {} I0513 11:56:15.023666 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-19417709-vmss000001, refreshing the cache(vmss: k8s-agentpool1-19417709-vmss, rg: kubetest-s2gs5bqg) I0513 11:56:15.114153 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-ad5da237-734b-4aec-83df-52406e513ce7 lun 1 to node k8s-agentpool1-19417709-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-ad5da237-734b-4aec-83df-52406e513ce7:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ad5da237-734b-4aec-83df-52406e513ce7 false 1})] I0513 11:56:15.114298 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-ad5da237-734b-4aec-83df-52406e513ce7:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ad5da237-734b-4aec-83df-52406e513ce7 false 1})]) I0513 11:56:15.292674 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-ad5da237-734b-4aec-83df-52406e513ce7:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-ad5da237-734b-4aec-83df-52406e513ce7 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 11:56:15.600158 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 11:56:15.600193 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000000","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-88d8d2de-0632-436d-a1a1-ab049189109c","csi.storage.k8s.io/pvc/name":"test.csi.azure.com2qjhd","csi.storage.k8s.io/pvc/namespace":"snapshotting-6847","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-88d8d2de-0632-436d-a1a1-ab049189109c"} I0513 11:56:15.627679 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-88d8d2de-0632-436d-a1a1-ab049189109c to node k8s-agentpool1-19417709-vmss000000. 
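The recurring "Couldn't find VMSS VM with nodeName ..., refreshing the cache" lines are informational: the controller keeps a TTL'd cache of scale-set instances and, on a lookup miss or a stale entry, re-lists the VMSS and retries the lookup rather than failing the operation. A generic refresh-on-miss sketch of that pattern (types and the list call are placeholders, not the cloud provider's cache implementation):

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

type vmssVM struct{ instanceID string } // placeholder for the provider's VM view

type vmssCache struct {
	mu      sync.Mutex
	ttl     time.Duration
	expires time.Time
	vms     map[string]vmssVM                                    // keyed by node name
	list    func(ctx context.Context) (map[string]vmssVM, error) // re-lists the scale set
}

// get returns the cached VM for nodeName, refreshing the whole cache on a miss
// or once the TTL has passed, which is the behaviour behind the
// "refreshing the cache" log lines.
func (c *vmssCache) get(ctx context.Context, nodeName string) (vmssVM, error) {
	c.mu.Lock()
	defer c.mu.Unlock()

	if vm, ok := c.vms[nodeName]; ok && time.Now().Before(c.expires) {
		return vm, nil
	}
	fresh, err := c.list(ctx)
	if err != nil {
		return vmssVM{}, err
	}
	c.vms, c.expires = fresh, time.Now().Add(c.ttl)
	if vm, ok := c.vms[nodeName]; ok {
		return vm, nil
	}
	return vmssVM{}, fmt.Errorf("vmss vm %q not found after refresh", nodeName)
}

func main() {}
```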
I0513 11:56:15.627740 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-88d8d2de-0632-436d-a1a1-ab049189109c to node k8s-agentpool1-19417709-vmss000000 I0513 11:56:15.627765 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-88d8d2de-0632-436d-a1a1-ab049189109c lun 2 to node k8s-agentpool1-19417709-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-88d8d2de-0632-436d-a1a1-ab049189109c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-88d8d2de-0632-436d-a1a1-ab049189109c false 2})] I0513 11:56:15.627797 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-88d8d2de-0632-436d-a1a1-ab049189109c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-88d8d2de-0632-436d-a1a1-ab049189109c false 2})]) I0513 11:56:15.842826 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-88d8d2de-0632-436d-a1a1-ab049189109c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-88d8d2de-0632-436d-a1a1-ab049189109c false 2})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 11:56:15.879608 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-435ab9c5-1ee1-4ea8-9ded-691503086ca1 attached to node k8s-agentpool1-19417709-vmss000002. 
I0513 11:56:15.879647 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-435ab9c5-1ee1-4ea8-9ded-691503086ca1 to node k8s-agentpool1-19417709-vmss000002 successfully I0513 11:56:15.879686 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=60.119730381 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-435ab9c5-1ee1-4ea8-9ded-691503086ca1" node="k8s-agentpool1-19417709-vmss000002" result_code="succeeded" I0513 11:56:15.879684 1 azure_controller_common.go:341] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-732c7316-5ac3-4bfb-9793-31d7957c306f from node k8s-agentpool1-19417709-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-732c7316-5ac3-4bfb-9793-31d7957c306f:pvc-732c7316-5ac3-4bfb-9793-31d7957c306f] I0513 11:56:15.879699 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} E0513 11:56:15.879738 1 azure_controller_vmss.go:171] detach azure disk on node(k8s-agentpool1-19417709-vmss000002): disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-732c7316-5ac3-4bfb-9793-31d7957c306f:pvc-732c7316-5ac3-4bfb-9793-31d7957c306f]) not found ... skipping 18 lines ... I0513 11:56:26.178753 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-732c7316-5ac3-4bfb-9793-31d7957c306f from node k8s-agentpool1-19417709-vmss000002 successfully I0513 11:56:26.178788 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=46.138289806 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-732c7316-5ac3-4bfb-9793-31d7957c306f" node="k8s-agentpool1-19417709-vmss000002" result_code="succeeded" I0513 11:56:26.178805 1 utils.go:84] GRPC response: {} I0513 11:56:26.178875 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-19417709-vmss000002, refreshing the cache(vmss: k8s-agentpool1-19417709-vmss, rg: kubetest-s2gs5bqg) I0513 11:56:26.254800 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-82dd5b64-3a91-4a0b-8680-f7deb33e5443 lun 2 to node k8s-agentpool1-19417709-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-8bf2cd9b-c92e-4a8d-ab81-a94e7c9d4817:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8bf2cd9b-c92e-4a8d-ab81-a94e7c9d4817 false 3})] I0513 11:56:26.254865 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk 
list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-8bf2cd9b-c92e-4a8d-ab81-a94e7c9d4817:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8bf2cd9b-c92e-4a8d-ab81-a94e7c9d4817 false 3})]) I0513 11:56:26.476879 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-8bf2cd9b-c92e-4a8d-ab81-a94e7c9d4817:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8bf2cd9b-c92e-4a8d-ab81-a94e7c9d4817 false 3})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 11:56:32.448030 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume I0513 11:56:32.448058 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":5368709120},"name":"pvc-f7db8298-5a9e-4d9b-a3aa-1d8afb8b715c","parameters":{"csi.storage.k8s.io/pv/name":"pvc-f7db8298-5a9e-4d9b-a3aa-1d8afb8b715c","csi.storage.k8s.io/pvc/name":"pvc-w9rpm","csi.storage.k8s.io/pvc/namespace":"snapshotting-6847"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}],"volume_content_source":{"Type":{"Snapshot":{"snapshot_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/snapshots/snapshot-f677fef1-ed66-4938-a53f-68b367933e5c"}}}} I0513 11:56:32.448227 1 controllerserver.go:174] begin to create azure disk(pvc-f7db8298-5a9e-4d9b-a3aa-1d8afb8b715c) account type(StandardSSD_LRS) rg(kubetest-s2gs5bqg) location(westeurope) size(5) diskZone() maxShares(0) I0513 11:56:32.448245 1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-f7db8298-5a9e-4d9b-a3aa-1d8afb8b715c StorageAccountType:StandardSSD_LRS Size:5 I0513 11:56:34.798922 1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-f7db8298-5a9e-4d9b-a3aa-1d8afb8b715c StorageAccountType:StandardSSD_LRS Size:5 I0513 11:56:34.798978 1 controllerserver.go:258] create azure disk(pvc-f7db8298-5a9e-4d9b-a3aa-1d8afb8b715c) account type(StandardSSD_LRS) rg(kubetest-s2gs5bqg) location(westeurope) size(5) tags(map[kubernetes.io-created-for-pv-name:pvc-f7db8298-5a9e-4d9b-a3aa-1d8afb8b715c kubernetes.io-created-for-pvc-name:pvc-w9rpm kubernetes.io-created-for-pvc-namespace:snapshotting-6847]) successfully ... skipping 11 lines ... 
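The CreateVolume request at 11:56:32 carries a volume_content_source pointing at snapshot-f677fef1, so the resulting managed disk pvc-f7db8298 is restored from that snapshot rather than created empty. A sketch of how such a request looks when built with the CSI Go bindings (resource IDs shortened, access mode picked for illustration; in the test it is the external-provisioner that sends this):

```go
package main

import (
	"context"
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
)

func createFromSnapshot(ctx context.Context, conn *grpc.ClientConn) error {
	req := &csi.CreateVolumeRequest{
		Name:          "pvc-f7db8298-5a9e-4d9b-a3aa-1d8afb8b715c",
		CapacityRange: &csi.CapacityRange{RequiredBytes: 5 << 30}, // 5 GiB, as in the log
		VolumeCapabilities: []*csi.VolumeCapability{{
			AccessType: &csi.VolumeCapability_Mount{Mount: &csi.VolumeCapability_MountVolume{}},
			AccessMode: &csi.VolumeCapability_AccessMode{Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_WRITER},
		}},
		// Restore from the snapshot instead of provisioning an empty disk.
		VolumeContentSource: &csi.VolumeContentSource{
			Type: &csi.VolumeContentSource_Snapshot{
				Snapshot: &csi.VolumeContentSource_SnapshotSource{
					SnapshotId: "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/snapshots/snapshot-f677fef1-ed66-4938-a53f-68b367933e5c",
				},
			},
		},
	}
	resp, err := csi.NewControllerClient(conn).CreateVolume(ctx, req)
	if err != nil {
		return err
	}
	fmt.Println("created:", resp.GetVolume().GetVolumeId())
	return nil
}

func main() {}
```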
I0513 11:56:36.126007 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 11:56:36.126032 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000000","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-af0d4fbf-c1d1-4c5c-8ea4-f31f4dd5161f"} I0513 11:56:36.126146 1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-af0d4fbf-c1d1-4c5c-8ea4-f31f4dd5161f from node k8s-agentpool1-19417709-vmss000000 I0513 11:56:36.130744 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 11:56:36.130763 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000000","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-88d8d2de-0632-436d-a1a1-ab049189109c"} I0513 11:56:36.130853 1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-88d8d2de-0632-436d-a1a1-ab049189109c from node k8s-agentpool1-19417709-vmss000000 I0513 11:56:36.135906 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-f7db8298-5a9e-4d9b-a3aa-1d8afb8b715c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f7db8298-5a9e-4d9b-a3aa-1d8afb8b715c false 3})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 11:56:37.453327 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 11:56:37.453356 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000002","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-8d476ce2-2562-4984-843d-5c560619f911"} I0513 11:56:37.453473 1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-8d476ce2-2562-4984-843d-5c560619f911 from node k8s-agentpool1-19417709-vmss000002 I0513 11:56:37.453505 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-19417709-vmss000002, refreshing the cache(vmss: k8s-agentpool1-19417709-vmss, rg: kubetest-s2gs5bqg) I0513 11:56:37.454255 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 11:56:37.454276 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000002","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-435ab9c5-1ee1-4ea8-9ded-691503086ca1"} ... skipping 93 lines ... 
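The burst of ControllerUnpublishVolume calls above, several different disks on vmss000000 and vmss000002 within a couple of seconds, is the external-attacher reconciling multiple VolumeAttachments at once as the multi-volume test pods are torn down; each RPC is independent and idempotent, and the multi-entry diskMap lines suggest the driver coalesces the underlying VM updates per node. A small sketch of firing such detaches concurrently against a controller endpoint (wiring and IDs are placeholders):

```go
package main

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"golang.org/x/sync/errgroup"
	"google.golang.org/grpc"
)

// detachAll issues one ControllerUnpublishVolume per volume in parallel,
// roughly the way the attacher sidecar ends up doing for a multi-volume pod.
func detachAll(ctx context.Context, conn *grpc.ClientConn, node string, volumeIDs []string) error {
	client := csi.NewControllerClient(conn)
	g, ctx := errgroup.WithContext(ctx)
	for _, id := range volumeIDs {
		id := id // capture per iteration
		g.Go(func() error {
			_, err := client.ControllerUnpublishVolume(ctx, &csi.ControllerUnpublishVolumeRequest{
				VolumeId: id,
				NodeId:   node,
			})
			return err
		})
	}
	return g.Wait()
}

func main() {}
```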
I0513 11:57:08.427904 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 11:57:08.427936 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000001","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-8d476ce2-2562-4984-843d-5c560619f911","csi.storage.k8s.io/pvc/name":"test.csi.azure.comgbqkf","csi.storage.k8s.io/pvc/namespace":"multivolume-3140","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-8d476ce2-2562-4984-843d-5c560619f911"} I0513 11:57:08.464457 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-8d476ce2-2562-4984-843d-5c560619f911 to node k8s-agentpool1-19417709-vmss000001. I0513 11:57:08.464515 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-8d476ce2-2562-4984-843d-5c560619f911 to node k8s-agentpool1-19417709-vmss000001 I0513 11:57:08.464538 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-8d476ce2-2562-4984-843d-5c560619f911 lun 0 to node k8s-agentpool1-19417709-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-8d476ce2-2562-4984-843d-5c560619f911:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8d476ce2-2562-4984-843d-5c560619f911 false 0})] I0513 11:57:08.464560 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-8d476ce2-2562-4984-843d-5c560619f911:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8d476ce2-2562-4984-843d-5c560619f911 false 0})]) I0513 11:57:08.666558 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-8d476ce2-2562-4984-843d-5c560619f911:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8d476ce2-2562-4984-843d-5c560619f911 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 11:57:11.215892 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 11:57:11.215922 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-ad5da237-734b-4aec-83df-52406e513ce7"} I0513 11:57:11.216006 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-ad5da237-734b-4aec-83df-52406e513ce7) I0513 11:57:12.120919 1 azure_controller_vmss.go:210] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000000) - detach 
disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-af0d4fbf-c1d1-4c5c-8ea4-f31f4dd5161f:pvc-af0d4fbf-c1d1-4c5c-8ea4-f31f4dd5161f]) returned with <nil> I0513 11:57:12.120976 1 azure_controller_common.go:365] azureDisk - detach disk(pvc-af0d4fbf-c1d1-4c5c-8ea4-f31f4dd5161f, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-af0d4fbf-c1d1-4c5c-8ea4-f31f4dd5161f) succeeded I0513 11:57:12.120995 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-af0d4fbf-c1d1-4c5c-8ea4-f31f4dd5161f from node k8s-agentpool1-19417709-vmss000000 successfully ... skipping 31 lines ... I0513 11:57:23.840851 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-8d476ce2-2562-4984-843d-5c560619f911 attached to node k8s-agentpool1-19417709-vmss000001. I0513 11:57:23.841499 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-8d476ce2-2562-4984-843d-5c560619f911 to node k8s-agentpool1-19417709-vmss000001 successfully I0513 11:57:23.841540 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.37707239 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-8d476ce2-2562-4984-843d-5c560619f911" node="k8s-agentpool1-19417709-vmss000001" result_code="succeeded" I0513 11:57:23.841010 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-0c610794-ef36-4bb0-8b7f-a70df0efcbf2 lun 1 to node k8s-agentpool1-19417709-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-0c610794-ef36-4bb0-8b7f-a70df0efcbf2:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0c610794-ef36-4bb0-8b7f-a70df0efcbf2 false 1}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-b3179044-fb86-4592-9938-d6824d0f56d4:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-b3179044-fb86-4592-9938-d6824d0f56d4 false 2})] I0513 11:57:23.841558 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0513 11:57:23.841622 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-0c610794-ef36-4bb0-8b7f-a70df0efcbf2:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0c610794-ef36-4bb0-8b7f-a70df0efcbf2 false 1}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-b3179044-fb86-4592-9938-d6824d0f56d4:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-b3179044-fb86-4592-9938-d6824d0f56d4 false 2})]) I0513 11:57:24.062074 1 azure_controller_vmss.go:121] azureDisk - 
update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-0c610794-ef36-4bb0-8b7f-a70df0efcbf2:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0c610794-ef36-4bb0-8b7f-a70df0efcbf2 false 1}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-b3179044-fb86-4592-9938-d6824d0f56d4:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-b3179044-fb86-4592-9938-d6824d0f56d4 false 2})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 11:57:24.315168 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteSnapshot I0513 11:57:24.315194 1 utils.go:78] GRPC request: {"snapshot_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/snapshots/snapshot-f677fef1-ed66-4938-a53f-68b367933e5c"} I0513 11:57:24.315294 1 controllerserver.go:899] begin to delete snapshot(snapshot-f677fef1-ed66-4938-a53f-68b367933e5c) under rg(kubetest-s2gs5bqg) I0513 11:57:26.956164 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 11:57:26.956191 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000000","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-f7db8298-5a9e-4d9b-a3aa-1d8afb8b715c"} I0513 11:57:26.956291 1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-f7db8298-5a9e-4d9b-a3aa-1d8afb8b715c from node k8s-agentpool1-19417709-vmss000000 ... skipping 10 lines ... 
I0513 11:57:27.542627 1 azure_controller_common.go:365] azureDisk - detach disk(pvc-82dd5b64-3a91-4a0b-8680-f7deb33e5443, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-82dd5b64-3a91-4a0b-8680-f7deb33e5443) succeeded I0513 11:57:27.542669 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-82dd5b64-3a91-4a0b-8680-f7deb33e5443 from node k8s-agentpool1-19417709-vmss000002 successfully I0513 11:57:27.542702 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=29.583373785 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-82dd5b64-3a91-4a0b-8680-f7deb33e5443" node="k8s-agentpool1-19417709-vmss000002" result_code="succeeded" I0513 11:57:27.542714 1 utils.go:84] GRPC response: {} I0513 11:57:27.542786 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-d0390812-3313-4b9b-9363-b983cb00cc15 lun 0 to node k8s-agentpool1-19417709-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-d0390812-3313-4b9b-9363-b983cb00cc15:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-d0390812-3313-4b9b-9363-b983cb00cc15 false 0})] I0513 11:57:27.542834 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-d0390812-3313-4b9b-9363-b983cb00cc15:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-d0390812-3313-4b9b-9363-b983cb00cc15 false 0})]) I0513 11:57:27.745048 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-d0390812-3313-4b9b-9363-b983cb00cc15:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-d0390812-3313-4b9b-9363-b983cb00cc15 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 11:57:28.274006 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 11:57:28.274033 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000001","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-435ab9c5-1ee1-4ea8-9ded-691503086ca1","csi.storage.k8s.io/pvc/name":"test.csi.azure.comb9kbg","csi.storage.k8s.io/pvc/namespace":"multivolume-3140","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-435ab9c5-1ee1-4ea8-9ded-691503086ca1"} I0513 11:57:28.300620 1 controllerserver.go:355] GetDiskLun returned: <nil>. 
Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-435ab9c5-1ee1-4ea8-9ded-691503086ca1 to node k8s-agentpool1-19417709-vmss000001. I0513 11:57:28.300670 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-435ab9c5-1ee1-4ea8-9ded-691503086ca1 to node k8s-agentpool1-19417709-vmss000001 I0513 11:57:28.682216 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 11:57:28.682243 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000002","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-8bf2cd9b-c92e-4a8d-ab81-a94e7c9d4817"} ... skipping 47 lines ... I0513 11:57:57.419522 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-19417709-vmss000000, refreshing the cache(vmss: k8s-agentpool1-19417709-vmss, rg: kubetest-s2gs5bqg) I0513 11:57:57.429320 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 11:57:57.429344 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000000","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-f7db8298-5a9e-4d9b-a3aa-1d8afb8b715c"} I0513 11:57:57.429458 1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-f7db8298-5a9e-4d9b-a3aa-1d8afb8b715c from node k8s-agentpool1-19417709-vmss000000 I0513 11:57:57.544946 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-c41fe705-c0a9-4cb8-9662-94686ea2ff26 lun 0 to node k8s-agentpool1-19417709-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-c41fe705-c0a9-4cb8-9662-94686ea2ff26:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c41fe705-c0a9-4cb8-9662-94686ea2ff26 false 0})] I0513 11:57:57.545007 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-c41fe705-c0a9-4cb8-9662-94686ea2ff26:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c41fe705-c0a9-4cb8-9662-94686ea2ff26 false 0})]) I0513 11:57:57.803853 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-c41fe705-c0a9-4cb8-9662-94686ea2ff26:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c41fe705-c0a9-4cb8-9662-94686ea2ff26 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 11:57:57.971707 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-d0390812-3313-4b9b-9363-b983cb00cc15 attached to node k8s-agentpool1-19417709-vmss000002. 
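Every "Observed Request Latency" line pairs a request name with latency_seconds and a result_code, emitted when the corresponding gRPC handler finishes; the sub-millisecond samples correspond to the already-attached fast path, while full attach/detach operations land in the tens of seconds (the publish_volume observation just below records about 54.5 s). A sketch of recording such a metric, using a prometheus/client_golang histogram as a stand-in for the metrics sink the driver actually reports through:

```go
package main

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// requestLatency mirrors the labels seen in the "Observed Request Latency"
// lines; the metric name here is illustrative.
var requestLatency = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name: "azuredisk_csi_operation_duration_seconds",
		Help: "Latency of azuredisk CSI controller operations.",
	},
	[]string{"request", "resource_group", "subscription_id", "source", "result_code"},
)

func init() { prometheus.MustRegister(requestLatency) }

// observe times op and records one sample with the outcome label.
func observe(request, rg, sub, source string, op func() error) error {
	start := time.Now()
	err := op()
	result := "succeeded"
	if err != nil {
		result = "failed"
	}
	requestLatency.WithLabelValues(request, rg, sub, source, result).Observe(time.Since(start).Seconds())
	return err
}

func main() {
	_ = observe("azuredisk_csi_driver_controller_publish_volume",
		"kubetest-s2gs5bqg", "0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e", "test.csi.azure.com",
		func() error { return nil }) // the attach/detach work would run here
}
```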
I0513 11:57:57.971757 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-d0390812-3313-4b9b-9363-b983cb00cc15 to node k8s-agentpool1-19417709-vmss000002 successfully I0513 11:57:57.971806 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=54.506409574 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-d0390812-3313-4b9b-9363-b983cb00cc15" node="k8s-agentpool1-19417709-vmss000002" result_code="succeeded" I0513 11:57:57.971814 1 azure_controller_common.go:341] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-8bf2cd9b-c92e-4a8d-ab81-a94e7c9d4817 from node k8s-agentpool1-19417709-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-8bf2cd9b-c92e-4a8d-ab81-a94e7c9d4817:pvc-8bf2cd9b-c92e-4a8d-ab81-a94e7c9d4817] I0513 11:57:57.971863 1 azure_controller_vmss.go:162] azureDisk - detach disk: name pvc-8bf2cd9b-c92e-4a8d-ab81-a94e7c9d4817 uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-8bf2cd9b-c92e-4a8d-ab81-a94e7c9d4817 I0513 11:57:57.971891 1 azure_controller_vmss.go:197] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-8bf2cd9b-c92e-4a8d-ab81-a94e7c9d4817:pvc-8bf2cd9b-c92e-4a8d-ab81-a94e7c9d4817]) ... skipping 11 lines ... I0513 11:57:59.340153 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000001","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-0c610794-ef36-4bb0-8b7f-a70df0efcbf2","csi.storage.k8s.io/pvc/name":"inline-volume-tester-qd9m9-my-volume-1","csi.storage.k8s.io/pvc/namespace":"ephemeral-6998","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-0c610794-ef36-4bb0-8b7f-a70df0efcbf2"} I0513 11:57:59.366418 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-0c610794-ef36-4bb0-8b7f-a70df0efcbf2 to node k8s-agentpool1-19417709-vmss000001. I0513 11:57:59.366474 1 azure_controller_common.go:453] azureDisk - find disk: lun 1 name pvc-0c610794-ef36-4bb0-8b7f-a70df0efcbf2 uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-0c610794-ef36-4bb0-8b7f-a70df0efcbf2 I0513 11:57:59.366482 1 controllerserver.go:375] Attach operation is successful. 
volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-0c610794-ef36-4bb0-8b7f-a70df0efcbf2 is already attached to node k8s-agentpool1-19417709-vmss000001 at lun 1. I0513 11:57:59.366516 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=8.7301e-05 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-0c610794-ef36-4bb0-8b7f-a70df0efcbf2" node="k8s-agentpool1-19417709-vmss000001" result_code="succeeded" I0513 11:57:59.366534 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} I0513 11:57:59.531518 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-435ab9c5-1ee1-4ea8-9ded-691503086ca1:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-435ab9c5-1ee1-4ea8-9ded-691503086ca1 false 3})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 11:57:59.652362 1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-6d3cd30c-8da7-4a50-8013-e5e81482874c StorageAccountType:StandardSSD_LRS Size:5 I0513 11:57:59.652415 1 controllerserver.go:258] create azure disk(pvc-6d3cd30c-8da7-4a50-8013-e5e81482874c) account type(StandardSSD_LRS) rg(kubetest-s2gs5bqg) location(westeurope) size(5) tags(map[kubernetes.io-created-for-pv-name:pvc-6d3cd30c-8da7-4a50-8013-e5e81482874c kubernetes.io-created-for-pvc-name:test.csi.azure.comtpb26 kubernetes.io-created-for-pvc-namespace:provisioning-8205]) successfully I0513 11:57:59.652459 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=2.533141601 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-6d3cd30c-8da7-4a50-8013-e5e81482874c" result_code="succeeded" I0513 11:57:59.652473 1 utils.go:84] GRPC response: {"volume":{"accessible_topology":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"capacity_bytes":5368709120,"content_source":{"Type":null},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-6d3cd30c-8da7-4a50-8013-e5e81482874c","csi.storage.k8s.io/pvc/name":"test.csi.azure.comtpb26","csi.storage.k8s.io/pvc/namespace":"provisioning-8205","requestedsizegib":"5"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-6d3cd30c-8da7-4a50-8013-e5e81482874c"}} I0513 11:58:01.824573 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 11:58:01.824599 1 utils.go:78] GRPC request: 
{"node_id":"k8s-agentpool1-19417709-vmss000000","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-6d3cd30c-8da7-4a50-8013-e5e81482874c","csi.storage.k8s.io/pvc/name":"test.csi.azure.comtpb26","csi.storage.k8s.io/pvc/namespace":"provisioning-8205","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-6d3cd30c-8da7-4a50-8013-e5e81482874c"} ... skipping 47 lines ... I0513 11:58:18.465459 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-19417709-vmss000002, refreshing the cache(vmss: k8s-agentpool1-19417709-vmss, rg: kubetest-s2gs5bqg) I0513 11:58:18.476086 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 11:58:18.476113 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000002","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-8bf2cd9b-c92e-4a8d-ab81-a94e7c9d4817"} I0513 11:58:18.476223 1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-8bf2cd9b-c92e-4a8d-ab81-a94e7c9d4817 from node k8s-agentpool1-19417709-vmss000002 I0513 11:58:18.722639 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-20d6f012-6568-478e-adac-06b161325a5c lun 1 to node k8s-agentpool1-19417709-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-20d6f012-6568-478e-adac-06b161325a5c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-20d6f012-6568-478e-adac-06b161325a5c false 1})] I0513 11:58:18.722693 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-20d6f012-6568-478e-adac-06b161325a5c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-20d6f012-6568-478e-adac-06b161325a5c false 1})]) I0513 11:58:18.937855 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-20d6f012-6568-478e-adac-06b161325a5c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-20d6f012-6568-478e-adac-06b161325a5c false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 11:58:19.878832 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 11:58:19.878861 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000002","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-d0390812-3313-4b9b-9363-b983cb00cc15"} I0513 11:58:19.878985 1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-d0390812-3313-4b9b-9363-b983cb00cc15 from node 
k8s-agentpool1-19417709-vmss000002 I0513 11:58:19.879018 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-19417709-vmss000002, refreshing the cache(vmss: k8s-agentpool1-19417709-vmss, rg: kubetest-s2gs5bqg) I0513 11:58:23.382724 1 azure_controller_vmss.go:210] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000000) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-f7db8298-5a9e-4d9b-a3aa-1d8afb8b715c:pvc-f7db8298-5a9e-4d9b-a3aa-1d8afb8b715c]) returned with <nil> I0513 11:58:23.382780 1 azure_controller_common.go:365] azureDisk - detach disk(pvc-f7db8298-5a9e-4d9b-a3aa-1d8afb8b715c, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-f7db8298-5a9e-4d9b-a3aa-1d8afb8b715c) succeeded I0513 11:58:23.382803 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-f7db8298-5a9e-4d9b-a3aa-1d8afb8b715c from node k8s-agentpool1-19417709-vmss000000 successfully I0513 11:58:23.382840 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=25.953359375 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-f7db8298-5a9e-4d9b-a3aa-1d8afb8b715c" node="k8s-agentpool1-19417709-vmss000000" result_code="succeeded" I0513 11:58:23.382854 1 utils.go:84] GRPC response: {} I0513 11:58:23.382869 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-19417709-vmss000000, refreshing the cache(vmss: k8s-agentpool1-19417709-vmss, rg: kubetest-s2gs5bqg) I0513 11:58:23.492094 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-6d3cd30c-8da7-4a50-8013-e5e81482874c lun 1 to node k8s-agentpool1-19417709-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-6d3cd30c-8da7-4a50-8013-e5e81482874c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-6d3cd30c-8da7-4a50-8013-e5e81482874c false 1})] I0513 11:58:23.492164 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-6d3cd30c-8da7-4a50-8013-e5e81482874c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-6d3cd30c-8da7-4a50-8013-e5e81482874c false 1})]) I0513 11:58:23.694230 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-6d3cd30c-8da7-4a50-8013-e5e81482874c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-6d3cd30c-8da7-4a50-8013-e5e81482874c false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 11:58:26.101159 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 11:58:26.101190 1 utils.go:78] GRPC request: 
{"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-f7db8298-5a9e-4d9b-a3aa-1d8afb8b715c"} I0513 11:58:26.101279 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-f7db8298-5a9e-4d9b-a3aa-1d8afb8b715c) I0513 11:58:27.842283 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 11:58:27.842310 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000000","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-c41fe705-c0a9-4cb8-9662-94686ea2ff26"} I0513 11:58:27.842458 1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-c41fe705-c0a9-4cb8-9662-94686ea2ff26 from node k8s-agentpool1-19417709-vmss000000 ... skipping 133 lines ... I0513 12:00:20.277991 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-d0390812-3313-4b9b-9363-b983cb00cc15 from node k8s-agentpool1-19417709-vmss000002 successfully I0513 12:00:20.278021 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=120.399016478 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-d0390812-3313-4b9b-9363-b983cb00cc15" node="k8s-agentpool1-19417709-vmss000002" result_code="succeeded" I0513 12:00:20.278032 1 utils.go:84] GRPC response: {} I0513 12:00:20.278124 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-19417709-vmss000002, refreshing the cache(vmss: k8s-agentpool1-19417709-vmss, rg: kubetest-s2gs5bqg) I0513 12:00:20.383156 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-eee22f5e-dfb9-457f-9ff0-236809a3a665 lun 0 to node k8s-agentpool1-19417709-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-eee22f5e-dfb9-457f-9ff0-236809a3a665:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-eee22f5e-dfb9-457f-9ff0-236809a3a665 false 0})] I0513 12:00:20.383221 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-eee22f5e-dfb9-457f-9ff0-236809a3a665:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-eee22f5e-dfb9-457f-9ff0-236809a3a665 false 0})]) I0513 12:00:20.563954 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-eee22f5e-dfb9-457f-9ff0-236809a3a665:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-eee22f5e-dfb9-457f-9ff0-236809a3a665 false 0})], %!s(*retry.Error=<nil>)) returned with 
%!v(MISSING) I0513 12:00:25.965639 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 12:00:25.965671 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-d0390812-3313-4b9b-9363-b983cb00cc15"} I0513 12:00:25.965757 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-d0390812-3313-4b9b-9363-b983cb00cc15) I0513 12:00:31.262523 1 azure_managedDiskController.go:303] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-d0390812-3313-4b9b-9363-b983cb00cc15 I0513 12:00:31.262557 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-d0390812-3313-4b9b-9363-b983cb00cc15) returned with <nil> I0513 12:00:31.262585 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=5.296813459 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-d0390812-3313-4b9b-9363-b983cb00cc15" result_code="succeeded" ... skipping 36 lines ... I0513 12:00:40.968590 1 utils.go:84] GRPC response: {} I0513 12:00:43.403178 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume I0513 12:00:43.403208 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":5368709120},"name":"pvc-909bc1fe-658c-46d9-b72d-8738679b315c","parameters":{"csi.storage.k8s.io/pv/name":"pvc-909bc1fe-658c-46d9-b72d-8738679b315c","csi.storage.k8s.io/pvc/name":"test.csi.azure.comf7lhx","csi.storage.k8s.io/pvc/namespace":"multivolume-2781"},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}}]} I0513 12:00:43.403365 1 controllerserver.go:174] begin to create azure disk(pvc-909bc1fe-658c-46d9-b72d-8738679b315c) account type(StandardSSD_LRS) rg(kubetest-s2gs5bqg) location(westeurope) size(5) diskZone() maxShares(0) I0513 12:00:43.403390 1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-909bc1fe-658c-46d9-b72d-8738679b315c StorageAccountType:StandardSSD_LRS Size:5 I0513 12:00:53.339123 1 azure_armclient.go:135] response is empty I0513 12:00:53.339185 1 azure_armclient.go:320] Received error in sendAsync.send: resourceID: https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-7ebbfa3b-805a-43f2-bc66-6bc1aa0fad65?api-version=2021-04-01, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context canceled I0513 12:00:53.339202 1 azure_armclient.go:511] Received error in put.send: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-7ebbfa3b-805a-43f2-bc66-6bc1aa0fad65, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context canceled I0513 12:00:53.339213 1 azure_diskclient.go:201] Received error in 
disk.put.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-7ebbfa3b-805a-43f2-bc66-6bc1aa0fad65, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context canceled I0513 12:00:53.339280 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.000829017 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="" result_code="failed" E0513 12:00:53.339304 1 utils.go:82] GRPC error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context canceled I0513 12:00:54.340988 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume I0513 12:00:54.341023 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":5368709120},"name":"pvc-7ebbfa3b-805a-43f2-bc66-6bc1aa0fad65","parameters":{"csi.storage.k8s.io/pv/name":"pvc-7ebbfa3b-805a-43f2-bc66-6bc1aa0fad65","csi.storage.k8s.io/pvc/name":"test.csi.azure.com9vvjm","csi.storage.k8s.io/pvc/namespace":"multivolume-9136"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]} I0513 12:00:54.341214 1 controllerserver.go:174] begin to create azure disk(pvc-7ebbfa3b-805a-43f2-bc66-6bc1aa0fad65) account type(StandardSSD_LRS) rg(kubetest-s2gs5bqg) location(westeurope) size(5) diskZone() maxShares(0) I0513 12:00:54.341235 1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-7ebbfa3b-805a-43f2-bc66-6bc1aa0fad65 StorageAccountType:StandardSSD_LRS Size:5 I0513 12:00:55.819098 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-eee22f5e-dfb9-457f-9ff0-236809a3a665 attached to node k8s-agentpool1-19417709-vmss000002. I0513 12:00:55.819139 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-eee22f5e-dfb9-457f-9ff0-236809a3a665 to node k8s-agentpool1-19417709-vmss000002 successfully ... skipping 6 lines ... 
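The two CreateVolume failures just above report latency_seconds of roughly 15.0 with "RawError: context canceled", and a fresh CreateVolume for the same PVC name arrives about a second later: the error is marked Retriable, so the calling provisioner sidecar simply re-issues the request. A minimal sketch of how a caller-side cancellation surfaces in this form; putDisk and the timeout value are assumptions for illustration, not the driver's actual code path through azure_managedDiskController/azure_armclient:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// putDisk stands in for the ARM disks PUT that CreateVolume issues; the name
// and the fake 30s "ARM latency" are assumptions for illustration only.
func putDisk(ctx context.Context) error {
	select {
	case <-time.After(30 * time.Second): // pretend ARM has not answered yet
		return nil
	case <-ctx.Done():
		// Same shape as the log line: Retriable: true, ..., RawError: context canceled
		return fmt.Errorf("Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: %v", ctx.Err())
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	// When the gRPC client (presumably the provisioner sidecar) gives up on the
	// call, the server-side context is canceled; the ~15.0s failed latencies in
	// the log suggest a caller timeout around 15s, which is an assumption here.
	go func() { time.Sleep(2 * time.Second); cancel() }()

	if err := putDisk(ctx); err != nil {
		fmt.Println("CreateVolume failed:", err) // the sidecar retries the same name shortly afterwards
	}
}
```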
I0513 12:00:55.819344 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-8d848db6-1892-4550-af44-bdae33e418a6 lun 1 to node k8s-agentpool1-19417709-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-8d848db6-1892-4550-af44-bdae33e418a6:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8d848db6-1892-4550-af44-bdae33e418a6 false 1})] I0513 12:00:55.819374 1 utils.go:84] GRPC response: {} I0513 12:00:55.819387 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-8d848db6-1892-4550-af44-bdae33e418a6:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8d848db6-1892-4550-af44-bdae33e418a6 false 1})]) I0513 12:00:55.830769 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 12:00:55.830791 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000002","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-20d6f012-6568-478e-adac-06b161325a5c"} I0513 12:00:55.830893 1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-20d6f012-6568-478e-adac-06b161325a5c from node k8s-agentpool1-19417709-vmss000002 I0513 12:00:56.028132 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-8d848db6-1892-4550-af44-bdae33e418a6:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8d848db6-1892-4550-af44-bdae33e418a6 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:00:58.403370 1 azure_armclient.go:153] Send.sendRequest original response: {"error":{"code":"InternalServerError","message":"Encountered internal server error. Diagnostic information: timestamp '20220513T120053Z', subscription id '0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e', tracking id 'e5f4e774-b448-489a-a4db-fa0e90bd4d70', request correlation id 'e5f4e774-b448-489a-a4db-fa0e90bd4d70'."}} I0513 12:00:58.403397 1 azure_armclient.go:158] Send.sendRequest: response body does not contain ResourceGroupNotFound error code. 
Skip retrying regional host I0513 12:00:58.403430 1 azure_armclient.go:320] Received error in sendAsync.send: resourceID: https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-909bc1fe-658c-46d9-b72d-8738679b315c?api-version=2021-04-01, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: 500, RawError: context canceled I0513 12:00:58.403457 1 azure_armclient.go:511] Received error in put.send: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-909bc1fe-658c-46d9-b72d-8738679b315c, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: 500, RawError: context canceled I0513 12:00:58.403478 1 azure_diskclient.go:201] Received error in disk.put.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-909bc1fe-658c-46d9-b72d-8738679b315c, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: 500, RawError: context canceled I0513 12:00:58.403528 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.000120192 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="" result_code="failed" E0513 12:00:58.403549 1 utils.go:82] GRPC error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: 500, RawError: context canceled I0513 12:00:59.140031 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 12:00:59.140060 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b3179044-fb86-4592-9938-d6824d0f56d4"} I0513 12:00:59.140136 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b3179044-fb86-4592-9938-d6824d0f56d4) I0513 12:00:59.168831 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 12:00:59.168857 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-0c610794-ef36-4bb0-8b7f-a70df0efcbf2"} I0513 12:00:59.168941 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-0c610794-ef36-4bb0-8b7f-a70df0efcbf2) ... skipping 54 lines ... I0513 12:01:11.885330 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-909bc1fe-658c-46d9-b72d-8738679b315c to node k8s-agentpool1-19417709-vmss000001. I0513 12:01:11.885383 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-909bc1fe-658c-46d9-b72d-8738679b315c to node k8s-agentpool1-19417709-vmss000001 I0513 12:01:11.885388 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-d1556ca0-32ad-4011-a9e3-8663c6976cc4 to node k8s-agentpool1-19417709-vmss000001. 
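The 500 response above is inspected for a specific ARM error code before the client decides whether to retry against a regional host ("response body does not contain ResourceGroupNotFound error code. Skip retrying regional host"). A rough sketch of that decision using a hypothetical shouldRetryRegionalHost helper; the response-body shape is taken from the log, but the real logic lives in azure_armclient.go and may differ:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// armError mirrors the response body printed above:
// {"error":{"code":"InternalServerError","message":"..."}}
type armError struct {
	Error struct {
		Code    string `json:"code"`
		Message string `json:"message"`
	} `json:"error"`
}

// shouldRetryRegionalHost is a hypothetical helper illustrating the check the
// log message describes; only a ResourceGroupNotFound code would trigger the
// regional-host retry.
func shouldRetryRegionalHost(body []byte) bool {
	var e armError
	if err := json.Unmarshal(body, &e); err != nil {
		return false
	}
	return e.Error.Code == "ResourceGroupNotFound"
}

func main() {
	body := []byte(`{"error":{"code":"InternalServerError","message":"Encountered internal server error."}}`)
	fmt.Println(shouldRetryRegionalHost(body)) // false -> "Skip retrying regional host"
}
```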
I0513 12:01:11.885408 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-909bc1fe-658c-46d9-b72d-8738679b315c lun 0 to node k8s-agentpool1-19417709-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-909bc1fe-658c-46d9-b72d-8738679b315c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-909bc1fe-658c-46d9-b72d-8738679b315c false 0})] I0513 12:01:11.885424 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-d1556ca0-32ad-4011-a9e3-8663c6976cc4 to node k8s-agentpool1-19417709-vmss000001 I0513 12:01:11.885441 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-909bc1fe-658c-46d9-b72d-8738679b315c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-909bc1fe-658c-46d9-b72d-8738679b315c false 0})]) I0513 12:01:12.149239 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-909bc1fe-658c-46d9-b72d-8738679b315c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-909bc1fe-658c-46d9-b72d-8738679b315c false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:01:21.230373 1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-efcf803b-abd8-46ca-b8f1-57337e967d2b StorageAccountType:StandardSSD_LRS Size:5 I0513 12:01:21.230429 1 controllerserver.go:258] create azure disk(pvc-efcf803b-abd8-46ca-b8f1-57337e967d2b) account type(StandardSSD_LRS) rg(kubetest-s2gs5bqg) location(westeurope) size(5) tags(map[kubernetes.io-created-for-pv-name:pvc-efcf803b-abd8-46ca-b8f1-57337e967d2b kubernetes.io-created-for-pvc-name:test.csi.azure.com4svpr kubernetes.io-created-for-pvc-namespace:multivolume-8840]) successfully I0513 12:01:21.230468 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=12.317309278 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-efcf803b-abd8-46ca-b8f1-57337e967d2b" result_code="succeeded" I0513 12:01:21.230482 1 utils.go:84] GRPC response: {"volume":{"accessible_topology":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"capacity_bytes":5368709120,"content_source":{"Type":null},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-efcf803b-abd8-46ca-b8f1-57337e967d2b","csi.storage.k8s.io/pvc/name":"test.csi.azure.com4svpr","csi.storage.k8s.io/pvc/namespace":"multivolume-8840","requestedsizegib":"5"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-efcf803b-abd8-46ca-b8f1-57337e967d2b"}} I0513 12:01:22.204752 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 12:01:22.204795 1 utils.go:78] GRPC request: 
{"node_id":"k8s-agentpool1-19417709-vmss000000","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-efcf803b-abd8-46ca-b8f1-57337e967d2b","csi.storage.k8s.io/pvc/name":"test.csi.azure.com4svpr","csi.storage.k8s.io/pvc/namespace":"multivolume-8840","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-efcf803b-abd8-46ca-b8f1-57337e967d2b"} ... skipping 27 lines ... I0513 12:01:27.280212 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-909bc1fe-658c-46d9-b72d-8738679b315c attached to node k8s-agentpool1-19417709-vmss000001. I0513 12:01:27.280248 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-909bc1fe-658c-46d9-b72d-8738679b315c to node k8s-agentpool1-19417709-vmss000001 successfully I0513 12:01:27.280297 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.394947099 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-909bc1fe-658c-46d9-b72d-8738679b315c" node="k8s-agentpool1-19417709-vmss000001" result_code="succeeded" I0513 12:01:27.280312 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0513 12:01:27.280417 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-d1556ca0-32ad-4011-a9e3-8663c6976cc4 lun 1 to node k8s-agentpool1-19417709-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-d1556ca0-32ad-4011-a9e3-8663c6976cc4:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-d1556ca0-32ad-4011-a9e3-8663c6976cc4 false 1})] I0513 12:01:27.280474 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-d1556ca0-32ad-4011-a9e3-8663c6976cc4:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-d1556ca0-32ad-4011-a9e3-8663c6976cc4 false 1})]) I0513 12:01:27.343213 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-efcf803b-abd8-46ca-b8f1-57337e967d2b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-efcf803b-abd8-46ca-b8f1-57337e967d2b false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:01:27.511712 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk 
list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-d1556ca0-32ad-4011-a9e3-8663c6976cc4:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-d1556ca0-32ad-4011-a9e3-8663c6976cc4 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:01:32.611383 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 12:01:32.611412 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-20d6f012-6568-478e-adac-06b161325a5c"} I0513 12:01:32.611490 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-20d6f012-6568-478e-adac-06b161325a5c) I0513 12:01:32.611509 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-20d6f012-6568-478e-adac-06b161325a5c) returned with failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-20d6f012-6568-478e-adac-06b161325a5c) since it's in attaching or detaching state I0513 12:01:32.611558 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=4.2101e-05 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-20d6f012-6568-478e-adac-06b161325a5c" result_code="failed" E0513 12:01:32.611575 1 utils.go:82] GRPC error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-20d6f012-6568-478e-adac-06b161325a5c) since it's in attaching or detaching state I0513 12:01:34.013777 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 12:01:34.013805 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000002","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-8d848db6-1892-4550-af44-bdae33e418a6"} I0513 12:01:34.013915 1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-8d848db6-1892-4550-af44-bdae33e418a6 from node k8s-agentpool1-19417709-vmss000002 I0513 12:01:41.610045 1 azure_controller_vmss.go:210] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-20d6f012-6568-478e-adac-06b161325a5c:pvc-20d6f012-6568-478e-adac-06b161325a5c /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-eee22f5e-dfb9-457f-9ff0-236809a3a665:pvc-eee22f5e-dfb9-457f-9ff0-236809a3a665]) returned with <nil> I0513 12:01:41.610111 1 azure_controller_common.go:365] azureDisk - detach disk(pvc-20d6f012-6568-478e-adac-06b161325a5c, 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-20d6f012-6568-478e-adac-06b161325a5c) succeeded I0513 12:01:41.610140 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-20d6f012-6568-478e-adac-06b161325a5c from node k8s-agentpool1-19417709-vmss000002 successfully I0513 12:01:41.610175 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=45.779259908 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-20d6f012-6568-478e-adac-06b161325a5c" node="k8s-agentpool1-19417709-vmss000002" result_code="succeeded" I0513 12:01:41.610194 1 utils.go:84] GRPC response: {} I0513 12:01:41.610278 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-19417709-vmss000002, refreshing the cache(vmss: k8s-agentpool1-19417709-vmss, rg: kubetest-s2gs5bqg) I0513 12:01:41.703906 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-7ebbfa3b-805a-43f2-bc66-6bc1aa0fad65 lun 0 to node k8s-agentpool1-19417709-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-54f69cf6-6e93-458b-be64-874f0ef27cf4:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-54f69cf6-6e93-458b-be64-874f0ef27cf4 false 2}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-7ebbfa3b-805a-43f2-bc66-6bc1aa0fad65:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-7ebbfa3b-805a-43f2-bc66-6bc1aa0fad65 false 0})] I0513 12:01:41.703971 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-54f69cf6-6e93-458b-be64-874f0ef27cf4:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-54f69cf6-6e93-458b-be64-874f0ef27cf4 false 2}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-7ebbfa3b-805a-43f2-bc66-6bc1aa0fad65:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-7ebbfa3b-805a-43f2-bc66-6bc1aa0fad65 false 0})]) I0513 12:01:41.912471 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-54f69cf6-6e93-458b-be64-874f0ef27cf4:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-54f69cf6-6e93-458b-be64-874f0ef27cf4 false 2}) /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-7ebbfa3b-805a-43f2-bc66-6bc1aa0fad65:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-7ebbfa3b-805a-43f2-bc66-6bc1aa0fad65 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:01:42.492448 1 controllerserver.go:386] Attach operation successful: volume 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-efcf803b-abd8-46ca-b8f1-57337e967d2b attached to node k8s-agentpool1-19417709-vmss000000. I0513 12:01:42.492488 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-efcf803b-abd8-46ca-b8f1-57337e967d2b to node k8s-agentpool1-19417709-vmss000000 successfully I0513 12:01:42.492527 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=20.263488366 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-efcf803b-abd8-46ca-b8f1-57337e967d2b" node="k8s-agentpool1-19417709-vmss000000" result_code="succeeded" I0513 12:01:42.492522 1 azure_controller_common.go:341] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-6d3cd30c-8da7-4a50-8013-e5e81482874c from node k8s-agentpool1-19417709-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-6d3cd30c-8da7-4a50-8013-e5e81482874c:pvc-6d3cd30c-8da7-4a50-8013-e5e81482874c] E0513 12:01:42.492567 1 azure_controller_vmss.go:171] detach azure disk on node(k8s-agentpool1-19417709-vmss000000): disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-6d3cd30c-8da7-4a50-8013-e5e81482874c:pvc-6d3cd30c-8da7-4a50-8013-e5e81482874c]) not found I0513 12:01:42.492544 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} ... skipping 68 lines ... 
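Every volume in this log is addressed by its full ARM managed-disk resource ID, and the diskMap keys are the same IDs lower-cased. A small helper for pulling the subscription, resource group and disk name out of such an ID when reading these logs; this is a reader-side utility for triage, not the driver's own parser:

```go
package main

import (
	"fmt"
	"regexp"
)

// diskURIRE matches the managed-disk IDs that appear throughout this log:
// /subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/disks/<name>
// The (?i) flag also accepts the lower-cased form used as diskMap keys.
var diskURIRE = regexp.MustCompile(`(?i)/subscriptions/([^/]+)/resourceGroups/([^/]+)/providers/Microsoft\.Compute/disks/([^/]+)`)

// parseDiskURI splits a disk resource ID into its components.
func parseDiskURI(uri string) (sub, rg, name string, ok bool) {
	m := diskURIRE.FindStringSubmatch(uri)
	if m == nil {
		return "", "", "", false
	}
	return m[1], m[2], m[3], true
}

func main() {
	uri := "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-efcf803b-abd8-46ca-b8f1-57337e967d2b"
	sub, rg, name, _ := parseDiskURI(uri)
	fmt.Println(sub, rg, name)
}
```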
I0513 12:02:07.929510 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000001","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-d1556ca0-32ad-4011-a9e3-8663c6976cc4"} I0513 12:02:07.929689 1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-d1556ca0-32ad-4011-a9e3-8663c6976cc4 from node k8s-agentpool1-19417709-vmss000001 I0513 12:02:08.923102 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume I0513 12:02:08.923130 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":5368709120},"name":"pvc-a9d050a7-47e4-4ab3-816a-e83ad5e9ac1a","parameters":{"csi.storage.k8s.io/pv/name":"pvc-a9d050a7-47e4-4ab3-816a-e83ad5e9ac1a","csi.storage.k8s.io/pvc/name":"test.csi.azure.comsxkl5","csi.storage.k8s.io/pvc/namespace":"volume-7090"},"volume_capabilities":[{"AccessType":{"Block":{}},"access_mode":{"mode":7}}]} I0513 12:02:08.923301 1 controllerserver.go:174] begin to create azure disk(pvc-a9d050a7-47e4-4ab3-816a-e83ad5e9ac1a) account type(StandardSSD_LRS) rg(kubetest-s2gs5bqg) location(westeurope) size(5) diskZone() maxShares(0) I0513 12:02:08.923318 1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-a9d050a7-47e4-4ab3-816a-e83ad5e9ac1a StorageAccountType:StandardSSD_LRS Size:5 I0513 12:02:10.509507 1 azure_armclient.go:289] Received error in WaitForCompletionRef: 'context canceled' I0513 12:02:10.509538 1 azure_armclient.go:310] Received error in WaitForAsyncOperationCompletion: 'context canceled' I0513 12:02:10.509552 1 azure_armclient.go:520] Received error in WaitForAsyncOperationResult: 'context canceled', no response I0513 12:02:10.509569 1 azure_diskclient.go:201] Received error in disk.put.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-c9a5d107-da93-4b8d-bbcb-b7029729e78f, error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context canceled I0513 12:02:10.509626 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=14.975397018 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="" result_code="failed" E0513 12:02:10.509652 1 utils.go:82] GRPC error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: context canceled I0513 12:02:11.415960 1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-a9d050a7-47e4-4ab3-816a-e83ad5e9ac1a StorageAccountType:StandardSSD_LRS Size:5 I0513 12:02:11.416007 1 controllerserver.go:258] create azure disk(pvc-a9d050a7-47e4-4ab3-816a-e83ad5e9ac1a) account type(StandardSSD_LRS) rg(kubetest-s2gs5bqg) location(westeurope) size(5) tags(map[kubernetes.io-created-for-pv-name:pvc-a9d050a7-47e4-4ab3-816a-e83ad5e9ac1a kubernetes.io-created-for-pvc-name:test.csi.azure.comsxkl5 kubernetes.io-created-for-pvc-namespace:volume-7090]) successfully I0513 12:02:11.416044 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=2.492707305 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-s2gs5bqg" 
subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-a9d050a7-47e4-4ab3-816a-e83ad5e9ac1a" result_code="succeeded" I0513 12:02:11.416058 1 utils.go:84] GRPC response: {"volume":{"accessible_topology":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"capacity_bytes":5368709120,"content_source":{"Type":null},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-a9d050a7-47e4-4ab3-816a-e83ad5e9ac1a","csi.storage.k8s.io/pvc/name":"test.csi.azure.comsxkl5","csi.storage.k8s.io/pvc/namespace":"volume-7090","requestedsizegib":"5"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-a9d050a7-47e4-4ab3-816a-e83ad5e9ac1a"}} I0513 12:02:11.514686 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume I0513 12:02:11.514708 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":5368709120},"name":"pvc-c9a5d107-da93-4b8d-bbcb-b7029729e78f","parameters":{"csi.storage.k8s.io/pv/name":"pvc-c9a5d107-da93-4b8d-bbcb-b7029729e78f","csi.storage.k8s.io/pvc/name":"test.csi.azure.com4svpr-cloned","csi.storage.k8s.io/pvc/namespace":"multivolume-8840"},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}}],"volume_content_source":{"Type":{"Volume":{"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-efcf803b-abd8-46ca-b8f1-57337e967d2b"}}}} ... skipping 5455 lines ... I0513 12:31:35.297333 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 12:31:35.297363 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000000","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-6dcbaa31-f77b-4f33-8383-2b25ca703ba4","csi.storage.k8s.io/pvc/name":"test.csi.azure.com7xj5k","csi.storage.k8s.io/pvc/namespace":"provisioning-9318","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-6dcbaa31-f77b-4f33-8383-2b25ca703ba4"} I0513 12:31:35.348089 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-6dcbaa31-f77b-4f33-8383-2b25ca703ba4 to node k8s-agentpool1-19417709-vmss000000. 
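The "GRPC call: ...", "GRPC request: ...", "GRPC response: ..." and "GRPC error: ..." lines that frame every operation here come from utils.go:77/78/82/84. A plausible shape for that logging is a unary gRPC server interceptor; this is a generic sketch of the pattern, not the driver's actual utils.go:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
)

// logGRPC logs each unary call in the same style as the lines above: the
// method, the request, then either the error or the response. A real driver
// would typically strip secrets from the request before logging it.
func logGRPC(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
	log.Printf("GRPC call: %s", info.FullMethod)
	log.Printf("GRPC request: %+v", req)
	resp, err := handler(ctx, req)
	if err != nil {
		log.Printf("GRPC error: %v", err)
	} else {
		log.Printf("GRPC response: %+v", resp)
	}
	return resp, err
}

func main() {
	// Wiring the interceptor into a server; the CSI services themselves are
	// omitted from this sketch.
	s := grpc.NewServer(grpc.UnaryInterceptor(logGRPC))
	_ = s
}
```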
I0513 12:31:35.391480 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-6dcbaa31-f77b-4f33-8383-2b25ca703ba4 to node k8s-agentpool1-19417709-vmss000000 I0513 12:31:35.391534 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-6dcbaa31-f77b-4f33-8383-2b25ca703ba4 lun 2 to node k8s-agentpool1-19417709-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-6dcbaa31-f77b-4f33-8383-2b25ca703ba4:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-6dcbaa31-f77b-4f33-8383-2b25ca703ba4 false 2})] I0513 12:31:35.391561 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-6dcbaa31-f77b-4f33-8383-2b25ca703ba4:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-6dcbaa31-f77b-4f33-8383-2b25ca703ba4 false 2})]) I0513 12:31:35.593359 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-6dcbaa31-f77b-4f33-8383-2b25ca703ba4:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-6dcbaa31-f77b-4f33-8383-2b25ca703ba4 false 2})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:31:39.793278 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume I0513 12:31:39.793304 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":5368709120},"name":"pvc-2dabd955-679d-4426-9fdc-96674124f90e","parameters":{"csi.storage.k8s.io/pv/name":"pvc-2dabd955-679d-4426-9fdc-96674124f90e","csi.storage.k8s.io/pvc/name":"test.csi.azure.comkjqrq","csi.storage.k8s.io/pvc/namespace":"volumemode-1665"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]} I0513 12:31:39.793461 1 controllerserver.go:174] begin to create azure disk(pvc-2dabd955-679d-4426-9fdc-96674124f90e) account type(StandardSSD_LRS) rg(kubetest-s2gs5bqg) location(westeurope) size(5) diskZone() maxShares(0) I0513 12:31:39.793481 1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-2dabd955-679d-4426-9fdc-96674124f90e StorageAccountType:StandardSSD_LRS Size:5 I0513 12:31:42.319649 1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-2dabd955-679d-4426-9fdc-96674124f90e StorageAccountType:StandardSSD_LRS Size:5 I0513 12:31:42.319690 1 controllerserver.go:258] create azure disk(pvc-2dabd955-679d-4426-9fdc-96674124f90e) account type(StandardSSD_LRS) rg(kubetest-s2gs5bqg) location(westeurope) size(5) tags(map[kubernetes.io-created-for-pv-name:pvc-2dabd955-679d-4426-9fdc-96674124f90e kubernetes.io-created-for-pvc-name:test.csi.azure.comkjqrq kubernetes.io-created-for-pvc-namespace:volumemode-1665]) successfully ... skipping 2 lines ... 
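The "%!s(*provider.AttachDiskOptions=&{...})" and "%!v(MISSING)" fragments in the attach-disk lines are not corruption from the log pipeline; they are Go's fmt error markers for a %s verb applied to a struct-pointer map value with no String method, and for a format string that has more verbs than arguments. A minimal reproduction with stand-in types (the AttachDiskOptions field set here is invented, only the printing behaviour matters):

```go
package main

import "fmt"

// attachDiskOptions is a stand-in for provider.AttachDiskOptions.
type attachDiskOptions struct {
	cachingMode string
	diskName    string
	writeAccel  bool
	lun         int32
}

// retryError is a stand-in for *retry.Error.
type retryError struct{}

func main() {
	diskMap := map[string]*attachDiskOptions{
		"/subscriptions/sub/resourcegroups/rg/providers/microsoft.compute/disks/pvc-x": {"ReadOnly", "pvc-x", false, 1},
	}
	var rerr *retryError // typed nil, like the *retry.Error in the log

	// %s on a map of struct pointers (no String method) prints each value as
	// %!s(*main.attachDiskOptions=&{...}), %s on the typed nil prints
	// %!s(*main.retryError=<nil>), and the unmatched %v prints %!v(MISSING).
	// `go vet` flags this call; it reproduces the artifacts seen in the log.
	fmt.Printf("attach disk list(%s, %s) returned with %v\n", diskMap, rerr)
}
```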
I0513 12:31:44.473635 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 12:31:44.473665 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000002","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-2dabd955-679d-4426-9fdc-96674124f90e","csi.storage.k8s.io/pvc/name":"test.csi.azure.comkjqrq","csi.storage.k8s.io/pvc/namespace":"volumemode-1665","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-2dabd955-679d-4426-9fdc-96674124f90e"} I0513 12:31:44.501113 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-2dabd955-679d-4426-9fdc-96674124f90e to node k8s-agentpool1-19417709-vmss000002. I0513 12:31:44.501182 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-2dabd955-679d-4426-9fdc-96674124f90e to node k8s-agentpool1-19417709-vmss000002 I0513 12:31:44.501208 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-2dabd955-679d-4426-9fdc-96674124f90e lun 0 to node k8s-agentpool1-19417709-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-2dabd955-679d-4426-9fdc-96674124f90e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2dabd955-679d-4426-9fdc-96674124f90e false 0})] I0513 12:31:44.501231 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-2dabd955-679d-4426-9fdc-96674124f90e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2dabd955-679d-4426-9fdc-96674124f90e false 0})]) I0513 12:31:44.673160 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-2dabd955-679d-4426-9fdc-96674124f90e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2dabd955-679d-4426-9fdc-96674124f90e false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:31:51.830036 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 12:31:51.830065 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000000","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b49cbb9e-b793-4dab-9d7f-92cdca74fe07"} I0513 12:31:51.830181 1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b49cbb9e-b793-4dab-9d7f-92cdca74fe07 from node k8s-agentpool1-19417709-vmss000000 I0513 12:31:51.830210 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-19417709-vmss000000, refreshing 
the cache(vmss: k8s-agentpool1-19417709-vmss, rg: kubetest-s2gs5bqg) I0513 12:31:51.833302 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 12:31:51.833320 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000000","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-6af577d2-4eff-4b48-80fe-f955e972daab"} ... skipping 28 lines ... I0513 12:31:56.482751 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-22048ed1-686f-4b16-a1a6-632691b9d40d from node k8s-agentpool1-19417709-vmss000001 successfully I0513 12:31:56.482777 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=31.227735521 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-22048ed1-686f-4b16-a1a6-632691b9d40d" node="k8s-agentpool1-19417709-vmss000001" result_code="succeeded" I0513 12:31:56.482787 1 utils.go:84] GRPC response: {} I0513 12:31:56.482886 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-19417709-vmss000001, refreshing the cache(vmss: k8s-agentpool1-19417709-vmss, rg: kubetest-s2gs5bqg) I0513 12:31:56.580035 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-2563af11-64fc-4f30-a276-f4793b5a9797 lun 1 to node k8s-agentpool1-19417709-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-2563af11-64fc-4f30-a276-f4793b5a9797:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2563af11-64fc-4f30-a276-f4793b5a9797 false 1})] I0513 12:31:56.580104 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-2563af11-64fc-4f30-a276-f4793b5a9797:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2563af11-64fc-4f30-a276-f4793b5a9797 false 1})]) I0513 12:31:56.837531 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-2563af11-64fc-4f30-a276-f4793b5a9797:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-2563af11-64fc-4f30-a276-f4793b5a9797 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:31:58.017797 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 12:31:58.017828 1 utils.go:78] GRPC request: 
{"node_id":"k8s-agentpool1-19417709-vmss000002","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext3"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-22048ed1-686f-4b16-a1a6-632691b9d40d","csi.storage.k8s.io/pvc/name":"test.csi.azure.comh8hl9","csi.storage.k8s.io/pvc/namespace":"volume-4218","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-22048ed1-686f-4b16-a1a6-632691b9d40d"} I0513 12:31:58.058171 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-22048ed1-686f-4b16-a1a6-632691b9d40d to node k8s-agentpool1-19417709-vmss000002. I0513 12:31:58.058228 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-22048ed1-686f-4b16-a1a6-632691b9d40d to node k8s-agentpool1-19417709-vmss000002 I0513 12:32:11.091296 1 azure_controller_vmss.go:210] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000000) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-6af577d2-4eff-4b48-80fe-f955e972daab:pvc-6af577d2-4eff-4b48-80fe-f955e972daab /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-b49cbb9e-b793-4dab-9d7f-92cdca74fe07:pvc-b49cbb9e-b793-4dab-9d7f-92cdca74fe07]) returned with <nil> I0513 12:32:11.091367 1 azure_controller_common.go:365] azureDisk - detach disk(pvc-b49cbb9e-b793-4dab-9d7f-92cdca74fe07, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b49cbb9e-b793-4dab-9d7f-92cdca74fe07) succeeded ... skipping 11 lines ... 
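The detach at 12:32:11 carries a map with two disks (pvc-6af577d2 and pvc-b49cbb9e), i.e. several pending requests for the same node are folded into one VMSS update rather than issued one by one. A rough illustration of that batching pattern with invented types; the driver's real queueing lives in azure_controller_common.go and differs in detail:

```go
package main

import (
	"fmt"
	"sync"
)

// attachRequest is a stand-in for the per-disk options held in the diskMap.
type attachRequest struct {
	diskName string
	lun      int32
}

// attachBatcher collects pending attaches per node so one VMSS update can
// carry several disks; names and structure are invented for illustration.
type attachBatcher struct {
	mu      sync.Mutex
	pending map[string]map[string]attachRequest // node -> diskURI -> request
}

func (b *attachBatcher) add(node, diskURI string, req attachRequest) {
	b.mu.Lock()
	defer b.mu.Unlock()
	if b.pending[node] == nil {
		b.pending[node] = map[string]attachRequest{}
	}
	b.pending[node][diskURI] = req
}

// flush issues one (fake) update per node with everything queued so far.
func (b *attachBatcher) flush() {
	b.mu.Lock()
	defer b.mu.Unlock()
	for node, disks := range b.pending {
		fmt.Printf("update(%s) - attach disk list(%v)\n", node, disks)
		delete(b.pending, node)
	}
}

func main() {
	b := &attachBatcher{pending: map[string]map[string]attachRequest{}}
	b.add("vmss000002", "/subscriptions/sub/resourcegroups/rg/providers/microsoft.compute/disks/pvc-a", attachRequest{"pvc-a", 0})
	b.add("vmss000002", "/subscriptions/sub/resourcegroups/rg/providers/microsoft.compute/disks/pvc-b", attachRequest{"pvc-b", 2})
	b.flush() // one update carrying both disks
}
```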
I0513 12:32:11.229852 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-2dabd955-679d-4426-9fdc-96674124f90e from node k8s-agentpool1-19417709-vmss000002 successfully I0513 12:32:11.229881 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.338152868 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-2dabd955-679d-4426-9fdc-96674124f90e" node="k8s-agentpool1-19417709-vmss000002" result_code="succeeded" I0513 12:32:11.229897 1 utils.go:84] GRPC response: {} I0513 12:32:11.229974 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-19417709-vmss000002, refreshing the cache(vmss: k8s-agentpool1-19417709-vmss, rg: kubetest-s2gs5bqg) I0513 12:32:11.300051 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-22048ed1-686f-4b16-a1a6-632691b9d40d lun 0 to node k8s-agentpool1-19417709-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-22048ed1-686f-4b16-a1a6-632691b9d40d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-22048ed1-686f-4b16-a1a6-632691b9d40d false 0})] I0513 12:32:11.300114 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-22048ed1-686f-4b16-a1a6-632691b9d40d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-22048ed1-686f-4b16-a1a6-632691b9d40d false 0})]) I0513 12:32:11.511590 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-22048ed1-686f-4b16-a1a6-632691b9d40d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-22048ed1-686f-4b16-a1a6-632691b9d40d false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:32:11.963138 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-2563af11-64fc-4f30-a276-f4793b5a9797 attached to node k8s-agentpool1-19417709-vmss000001. 
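"Couldn't find VMSS VM with nodeName ..., refreshing the cache(vmss: ..., rg: ...)" appears before most attach and detach operations in this run: the node is looked up in a local VMSS cache first, and the cache is refreshed on a miss. A generic sketch of that lookup-then-refresh pattern with invented names; the actual implementation is in azure_vmss.go and its cache, which also handles TTL and per-scale-set scoping:

```go
package main

import (
	"fmt"
	"sync"
)

// vmssVM is a stand-in for the cached scale-set VM entry.
type vmssVM struct{ instanceID string }

// vmssCache is a hypothetical cache keyed by node name.
type vmssCache struct {
	mu      sync.Mutex
	entries map[string]vmssVM
	refresh func() map[string]vmssVM // lists VMs from the API on a miss
}

func (c *vmssCache) getVM(nodeName string) (vmssVM, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if vm, ok := c.entries[nodeName]; ok {
		return vm, true
	}
	// This is the point where the driver logs "Couldn't find VMSS VM with
	// nodeName ..., refreshing the cache(...)".
	fmt.Printf("Couldn't find VMSS VM with nodeName %s, refreshing the cache\n", nodeName)
	c.entries = c.refresh()
	vm, ok := c.entries[nodeName]
	return vm, ok
}

func main() {
	c := &vmssCache{
		entries: map[string]vmssVM{},
		refresh: func() map[string]vmssVM {
			return map[string]vmssVM{"k8s-agentpool1-19417709-vmss000002": {instanceID: "2"}}
		},
	}
	fmt.Println(c.getVM("k8s-agentpool1-19417709-vmss000002"))
}
```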
I0513 12:32:11.963185 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-2563af11-64fc-4f30-a276-f4793b5a9797 to node k8s-agentpool1-19417709-vmss000001 successfully I0513 12:32:11.963218 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=39.438489447 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-2563af11-64fc-4f30-a276-f4793b5a9797" node="k8s-agentpool1-19417709-vmss000001" result_code="succeeded" I0513 12:32:11.963627 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} I0513 12:32:11.970524 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 12:32:11.970548 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000001","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-2563af11-64fc-4f30-a276-f4793b5a9797","csi.storage.k8s.io/pvc/name":"volume-limits-cwz99-my-volume","csi.storage.k8s.io/pvc/namespace":"volumelimits-5882","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-2563af11-64fc-4f30-a276-f4793b5a9797"} ... skipping 46 lines ... I0513 12:32:35.218255 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 12:32:35.218293 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000002","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-8d683589-1452-4a3d-a05d-a7ce46d02177","csi.storage.k8s.io/pvc/name":"pvc-7hgpd","csi.storage.k8s.io/pvc/namespace":"provisioning-7569","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-8d683589-1452-4a3d-a05d-a7ce46d02177"} I0513 12:32:35.262819 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-8d683589-1452-4a3d-a05d-a7ce46d02177 to node k8s-agentpool1-19417709-vmss000002. 
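Every operation in this log ends with an "Observed Request Latency" record carrying latency_seconds, request, result_code, volumeid and node; the publish just above took 39.4s and some unpublish calls earlier in the run exceed 120s. A throwaway scanner for summarizing those records from a saved build log; it is purely a log-reading aid, not part of the driver:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strconv"
)

// latencyRE matches the metric fields used in the lines above, e.g.
// latency_seconds=39.438489447 request="azuredisk_csi_driver_controller_publish_volume" ... result_code="succeeded"
var latencyRE = regexp.MustCompile(`latency_seconds=([0-9.]+) request="([^"]+)"(?:.*?result_code="([^"]+)")?`)

func main() {
	max := map[string]float64{}
	failures := map[string]int{}

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // the log lines are long
	for sc.Scan() {
		for _, m := range latencyRE.FindAllStringSubmatch(sc.Text(), -1) {
			secs, _ := strconv.ParseFloat(m[1], 64)
			if secs > max[m[2]] {
				max[m[2]] = secs
			}
			if m[3] == "failed" {
				failures[m[2]]++
			}
		}
	}
	for req, secs := range max {
		fmt.Printf("%-60s max %.1fs, %d failed\n", req, secs, failures[req])
	}
}
```

Fed the raw log on stdin (for example, go run summarize.go < build-log.txt), it prints the slowest observed call and the failure count per request type.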
I0513 12:32:35.262872 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-8d683589-1452-4a3d-a05d-a7ce46d02177 to node k8s-agentpool1-19417709-vmss000002 I0513 12:32:35.262896 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-8d683589-1452-4a3d-a05d-a7ce46d02177 lun 1 to node k8s-agentpool1-19417709-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-8d683589-1452-4a3d-a05d-a7ce46d02177:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8d683589-1452-4a3d-a05d-a7ce46d02177 false 1})] I0513 12:32:35.262929 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-8d683589-1452-4a3d-a05d-a7ce46d02177:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8d683589-1452-4a3d-a05d-a7ce46d02177 false 1})]) I0513 12:32:35.451540 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-8d683589-1452-4a3d-a05d-a7ce46d02177:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8d683589-1452-4a3d-a05d-a7ce46d02177 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:32:38.646978 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume I0513 12:32:38.647012 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":5368709120},"name":"pvc-dc5febae-999f-4404-bdc8-a30e63b17ea6","parameters":{"csi.storage.k8s.io/pv/name":"pvc-dc5febae-999f-4404-bdc8-a30e63b17ea6","csi.storage.k8s.io/pvc/name":"test.csi.azure.com4q4qg","csi.storage.k8s.io/pvc/namespace":"fsgroupchangepolicy-5192"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]} I0513 12:32:38.647242 1 controllerserver.go:174] begin to create azure disk(pvc-dc5febae-999f-4404-bdc8-a30e63b17ea6) account type(StandardSSD_LRS) rg(kubetest-s2gs5bqg) location(westeurope) size(5) diskZone() maxShares(0) I0513 12:32:38.647269 1 azure_managedDiskController.go:92] azureDisk - creating new managed Name:pvc-dc5febae-999f-4404-bdc8-a30e63b17ea6 StorageAccountType:StandardSSD_LRS Size:5 I0513 12:32:41.124769 1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-dc5febae-999f-4404-bdc8-a30e63b17ea6 StorageAccountType:StandardSSD_LRS Size:5 I0513 12:32:41.124844 1 controllerserver.go:258] create azure disk(pvc-dc5febae-999f-4404-bdc8-a30e63b17ea6) account type(StandardSSD_LRS) rg(kubetest-s2gs5bqg) location(westeurope) size(5) tags(map[kubernetes.io-created-for-pv-name:pvc-dc5febae-999f-4404-bdc8-a30e63b17ea6 kubernetes.io-created-for-pvc-name:test.csi.azure.com4q4qg kubernetes.io-created-for-pvc-namespace:fsgroupchangepolicy-5192]) successfully ... skipping 14 lines ... 
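The ControllerPublishVolume responses in this stretch return {"publish_context":{"LUN":"1"}}. Per the CSI spec, that map is passed on to the node plugin's staging call so it can locate the newly attached device; on Azure Linux nodes the device is usually exposed under /dev/disk/azure/scsi1/lun<N> by udev rules, though that path convention and the polling below are assumptions rather than something this log shows. A hedged sketch of the lookup:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// findDiskByLUN waits for the udev-managed symlink for the given LUN to show
// up and resolves it to the underlying block device. The
// /dev/disk/azure/scsi1/lun<N> layout is an assumption for illustration.
func findDiskByLUN(lun string, timeout time.Duration) (string, error) {
	link := filepath.Join("/dev/disk/azure/scsi1", "lun"+lun)
	deadline := time.Now().Add(timeout)
	for {
		if dev, err := filepath.EvalSymlinks(link); err == nil {
			return dev, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("device for LUN %s not found under %s", lun, link)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	// "1" is the value from publish_context{"LUN":"1"} above.
	dev, err := findDiskByLUN("1", 30*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("staging device:", dev)
}
```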
I0513 12:32:45.540999 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} I0513 12:32:45.541051 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-dc5febae-999f-4404-bdc8-a30e63b17ea6 lun 2 to node k8s-agentpool1-19417709-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-dc5febae-999f-4404-bdc8-a30e63b17ea6:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-dc5febae-999f-4404-bdc8-a30e63b17ea6 false 2})] I0513 12:32:45.541108 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-dc5febae-999f-4404-bdc8-a30e63b17ea6:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-dc5febae-999f-4404-bdc8-a30e63b17ea6 false 2})]) I0513 12:32:45.607290 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 12:32:45.607317 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000002","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-22048ed1-686f-4b16-a1a6-632691b9d40d"} I0513 12:32:45.607432 1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-22048ed1-686f-4b16-a1a6-632691b9d40d from node k8s-agentpool1-19417709-vmss000002 I0513 12:32:45.709189 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-dc5febae-999f-4404-bdc8-a30e63b17ea6:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-dc5febae-999f-4404-bdc8-a30e63b17ea6 false 2})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:32:46.452867 1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-46e9fed8-ea86-4871-82f7-ef95d8ef7ab5 StorageAccountType:StandardSSD_LRS Size:5 I0513 12:32:46.452918 1 controllerserver.go:258] create azure disk(pvc-46e9fed8-ea86-4871-82f7-ef95d8ef7ab5) account type(StandardSSD_LRS) rg(kubetest-s2gs5bqg) location(westeurope) size(5) tags(map[kubernetes.io-created-for-pv-name:pvc-46e9fed8-ea86-4871-82f7-ef95d8ef7ab5 kubernetes.io-created-for-pvc-name:volume-limits-exceeded-z4vc2-my-volume kubernetes.io-created-for-pvc-namespace:volumelimits-5882]) successfully I0513 12:32:46.452956 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=2.408208301 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-46e9fed8-ea86-4871-82f7-ef95d8ef7ab5" result_code="succeeded" I0513 12:32:46.452971 1 utils.go:84] GRPC response: 
{"volume":{"accessible_topology":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"capacity_bytes":5368709120,"content_source":{"Type":null},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-46e9fed8-ea86-4871-82f7-ef95d8ef7ab5","csi.storage.k8s.io/pvc/name":"volume-limits-exceeded-z4vc2-my-volume","csi.storage.k8s.io/pvc/namespace":"volumelimits-5882","requestedsizegib":"5"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-46e9fed8-ea86-4871-82f7-ef95d8ef7ab5"}} I0513 12:32:50.294865 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 12:32:50.294898 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-46e9fed8-ea86-4871-82f7-ef95d8ef7ab5"} ... skipping 140 lines ... I0513 12:34:21.923753 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-19417709-vmss000002, refreshing the cache(vmss: k8s-agentpool1-19417709-vmss, rg: kubetest-s2gs5bqg) I0513 12:34:21.934500 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 12:34:21.934520 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000002","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-8d683589-1452-4a3d-a05d-a7ce46d02177"} I0513 12:34:21.934632 1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-8d683589-1452-4a3d-a05d-a7ce46d02177 from node k8s-agentpool1-19417709-vmss000002 I0513 12:34:22.018477 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-c1647c56-84b0-49f9-9742-a093a0ce7b12 lun 0 to node k8s-agentpool1-19417709-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-c1647c56-84b0-49f9-9742-a093a0ce7b12:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c1647c56-84b0-49f9-9742-a093a0ce7b12 false 0})] I0513 12:34:22.018531 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-c1647c56-84b0-49f9-9742-a093a0ce7b12:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c1647c56-84b0-49f9-9742-a093a0ce7b12 false 0})]) I0513 12:34:22.270634 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-c1647c56-84b0-49f9-9742-a093a0ce7b12:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c1647c56-84b0-49f9-9742-a093a0ce7b12 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:34:22.865959 1 azure_controller_vmss.go:210] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-5de65fe3-44c3-4f87-9c61-f3893a10d74d:pvc-5de65fe3-44c3-4f87-9c61-f3893a10d74d]) returned with <nil> I0513 
12:34:22.866016 1 azure_controller_common.go:365] azureDisk - detach disk(pvc-5de65fe3-44c3-4f87-9c61-f3893a10d74d, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-5de65fe3-44c3-4f87-9c61-f3893a10d74d) succeeded I0513 12:34:22.866036 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-5de65fe3-44c3-4f87-9c61-f3893a10d74d from node k8s-agentpool1-19417709-vmss000001 successfully I0513 12:34:22.866064 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=55.592996728 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-5de65fe3-44c3-4f87-9c61-f3893a10d74d" node="k8s-agentpool1-19417709-vmss000001" result_code="succeeded" I0513 12:34:22.866075 1 utils.go:84] GRPC response: {} I0513 12:34:22.866157 1 azure_controller_common.go:341] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-9f7628d1-1966-4ece-a9ff-afe496c1e570 from node k8s-agentpool1-19417709-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-049413dc-6421-4bee-b5e7-d872edd7c896:pvc-049413dc-6421-4bee-b5e7-d872edd7c896 /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-0a51a3aa-63a4-4348-b18b-dd53aff0d30e:pvc-0a51a3aa-63a4-4348-b18b-dd53aff0d30e /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-14f1c368-fa79-4a09-b53d-1fe137234e1b:pvc-14f1c368-fa79-4a09-b53d-1fe137234e1b /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-2563af11-64fc-4f30-a276-f4793b5a9797:pvc-2563af11-64fc-4f30-a276-f4793b5a9797 /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-444ace27-0eb1-4519-b38f-4225f489db2c:pvc-444ace27-0eb1-4519-b38f-4225f489db2c /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-79bf61a4-5f0d-46d5-9a46-76752bc5dbcd:pvc-79bf61a4-5f0d-46d5-9a46-76752bc5dbcd /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-8cf53980-d45e-4b6c-af7c-261c23f7dea0:pvc-8cf53980-d45e-4b6c-af7c-261c23f7dea0 /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-99148797-c105-4ab1-b0da-f16fe3a7c597:pvc-99148797-c105-4ab1-b0da-f16fe3a7c597 /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-9d2a88f5-b241-481f-bcc2-dfe481addba7:pvc-9d2a88f5-b241-481f-bcc2-dfe481addba7 /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-9f7628d1-1966-4ece-a9ff-afe496c1e570:pvc-9f7628d1-1966-4ece-a9ff-afe496c1e570 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-add5c2e3-647a-4b8f-8e00-595d9e72f609:pvc-add5c2e3-647a-4b8f-8e00-595d9e72f609 /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-af9c14c0-b857-4d6d-a274-06f705e4b97d:pvc-af9c14c0-b857-4d6d-a274-06f705e4b97d /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-e1e94e61-e350-46cc-a280-20b4a86f3f51:pvc-e1e94e61-e350-46cc-a280-20b4a86f3f51 /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-e4f0190c-18ab-44a7-b9f2-2e1faea77bfe:pvc-e4f0190c-18ab-44a7-b9f2-2e1faea77bfe /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-eb827c47-5607-4df5-8937-6a4070a86394:pvc-eb827c47-5607-4df5-8937-6a4070a86394] ... skipping 108 lines ... I0513 12:35:38.600650 1 azure_controller_common.go:365] azureDisk - detach disk(pvc-8d683589-1452-4a3d-a05d-a7ce46d02177, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-8d683589-1452-4a3d-a05d-a7ce46d02177) succeeded I0513 12:35:38.600696 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-8d683589-1452-4a3d-a05d-a7ce46d02177 from node k8s-agentpool1-19417709-vmss000002 successfully I0513 12:35:38.600725 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=76.666077797 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-8d683589-1452-4a3d-a05d-a7ce46d02177" node="k8s-agentpool1-19417709-vmss000002" result_code="succeeded" I0513 12:35:38.600737 1 utils.go:84] GRPC response: {} I0513 12:35:38.600869 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-8e2b3efe-b76a-43d0-89d7-5e8b0aaad0ad lun 0 to node k8s-agentpool1-19417709-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-8e2b3efe-b76a-43d0-89d7-5e8b0aaad0ad:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8e2b3efe-b76a-43d0-89d7-5e8b0aaad0ad false 0})] I0513 12:35:38.600912 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-8e2b3efe-b76a-43d0-89d7-5e8b0aaad0ad:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8e2b3efe-b76a-43d0-89d7-5e8b0aaad0ad false 0})]) I0513 12:35:38.858211 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-8e2b3efe-b76a-43d0-89d7-5e8b0aaad0ad:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-8e2b3efe-b76a-43d0-89d7-5e8b0aaad0ad false 
0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:35:38.988056 1 azure_managedDiskController.go:303] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-9f7628d1-1966-4ece-a9ff-afe496c1e570 I0513 12:35:38.988101 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-9f7628d1-1966-4ece-a9ff-afe496c1e570) returned with <nil> I0513 12:35:38.988147 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=5.269048768 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-9f7628d1-1966-4ece-a9ff-afe496c1e570" result_code="succeeded" I0513 12:35:38.988169 1 utils.go:84] GRPC response: {} I0513 12:35:39.460465 1 azure_controller_vmss.go:210] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-9f7628d1-1966-4ece-a9ff-afe496c1e570:pvc-9f7628d1-1966-4ece-a9ff-afe496c1e570]) returned with <nil> I0513 12:35:39.460523 1 azure_controller_common.go:365] azureDisk - detach disk(pvc-add5c2e3-647a-4b8f-8e00-595d9e72f609, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-add5c2e3-647a-4b8f-8e00-595d9e72f609) succeeded ... skipping 98 lines ... 
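The attach and detach entries above pass a diskMap keyed by the disk's ARM resource ID in lower case, which keeps lookups case-insensitive and apparently lets a single VMSS instance update carry more than one disk. The sketch below only illustrates that shape with a hypothetical options struct; it is not the driver's or cloud-provider's own type.

    package main

    import (
        "fmt"
        "strings"
    )

    // attachDiskOptions is a hypothetical stand-in for the options the log prints
    // next to each disk URI (read-only flag, disk name, LUN).
    type attachDiskOptions struct {
        DiskName string
        ReadOnly bool
        Lun      int32
    }

    // addToBatch keys the pending attach by the lower-cased ARM resource ID,
    // mirroring the lower-case URIs visible in the diskMap entries above.
    func addToBatch(batch map[string]attachDiskOptions, diskURI string, opts attachDiskOptions) {
        batch[strings.ToLower(diskURI)] = opts
    }

    func main() {
        batch := map[string]attachDiskOptions{}
        addToBatch(batch,
            "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/disks/pvc-example",
            attachDiskOptions{DiskName: "pvc-example", ReadOnly: false, Lun: 2})
        for uri, o := range batch {
            fmt.Printf("%s -> lun %d (readOnly=%v)\n", uri, o.Lun, o.ReadOnly)
        }
    }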
I0513 12:36:00.630452 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-99148797-c105-4ab1-b0da-f16fe3a7c597"} I0513 12:36:00.630492 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-2563af11-64fc-4f30-a276-f4793b5a9797"} I0513 12:36:00.630518 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-14f1c368-fa79-4a09-b53d-1fe137234e1b) I0513 12:36:00.630522 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-99148797-c105-4ab1-b0da-f16fe3a7c597) I0513 12:36:00.630530 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-2563af11-64fc-4f30-a276-f4793b5a9797) I0513 12:36:00.630497 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-8cf53980-d45e-4b6c-af7c-261c23f7dea0"} I0513 12:36:00.630543 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-2563af11-64fc-4f30-a276-f4793b5a9797) returned with failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-2563af11-64fc-4f30-a276-f4793b5a9797) since it's in attaching or detaching state I0513 12:36:00.630545 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-8cf53980-d45e-4b6c-af7c-261c23f7dea0) I0513 12:36:00.630312 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 12:36:00.630441 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-add5c2e3-647a-4b8f-8e00-595d9e72f609"} I0513 12:36:00.630599 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-add5c2e3-647a-4b8f-8e00-595d9e72f609) I0513 12:36:00.630577 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-dc5febae-999f-4404-bdc8-a30e63b17ea6"} I0513 12:36:00.630613 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-dc5febae-999f-4404-bdc8-a30e63b17ea6) I0513 12:36:00.630625 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 12:36:00.630638 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-9d2a88f5-b241-481f-bcc2-dfe481addba7"} I0513 12:36:00.630676 1 controllerserver.go:299] deleting azure 
disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-9d2a88f5-b241-481f-bcc2-dfe481addba7) I0513 12:36:00.630677 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 12:36:00.630689 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 12:36:00.630700 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 12:36:00.630601 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=2.77e-05 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-2563af11-64fc-4f30-a276-f4793b5a9797" result_code="failed" I0513 12:36:00.630691 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-79bf61a4-5f0d-46d5-9a46-76752bc5dbcd"} E0513 12:36:00.630716 1 utils.go:82] GRPC error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-2563af11-64fc-4f30-a276-f4793b5a9797) since it's in attaching or detaching state I0513 12:36:00.630701 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-eb827c47-5607-4df5-8937-6a4070a86394"} I0513 12:36:00.630730 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-79bf61a4-5f0d-46d5-9a46-76752bc5dbcd) I0513 12:36:00.630738 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-eb827c47-5607-4df5-8937-6a4070a86394) I0513 12:36:00.630710 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-e4f0190c-18ab-44a7-b9f2-2e1faea77bfe"} I0513 12:36:00.630758 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-e4f0190c-18ab-44a7-b9f2-2e1faea77bfe) I0513 12:36:00.630676 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume ... skipping 94 lines ... I0513 12:36:35.321699 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 12:36:35.321728 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000002","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-b8cd73ea-2f29-41fe-af1c-a4b37465ced3","csi.storage.k8s.io/pvc/name":"test.csi.azure.com5zfv7-restored","csi.storage.k8s.io/pvc/namespace":"multivolume-1630","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b8cd73ea-2f29-41fe-af1c-a4b37465ced3"} I0513 12:36:35.353911 1 controllerserver.go:355] GetDiskLun returned: <nil>. 
Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b8cd73ea-2f29-41fe-af1c-a4b37465ced3 to node k8s-agentpool1-19417709-vmss000002. I0513 12:36:35.353974 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b8cd73ea-2f29-41fe-af1c-a4b37465ced3 to node k8s-agentpool1-19417709-vmss000002 I0513 12:36:35.354009 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b8cd73ea-2f29-41fe-af1c-a4b37465ced3 lun 1 to node k8s-agentpool1-19417709-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-b8cd73ea-2f29-41fe-af1c-a4b37465ced3:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-b8cd73ea-2f29-41fe-af1c-a4b37465ced3 false 1})] I0513 12:36:35.354061 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-b8cd73ea-2f29-41fe-af1c-a4b37465ced3:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-b8cd73ea-2f29-41fe-af1c-a4b37465ced3 false 1})]) I0513 12:36:35.557334 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-b8cd73ea-2f29-41fe-af1c-a4b37465ced3:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-b8cd73ea-2f29-41fe-af1c-a4b37465ced3 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:36:50.674485 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b8cd73ea-2f29-41fe-af1c-a4b37465ced3 attached to node k8s-agentpool1-19417709-vmss000002. I0513 12:36:50.674537 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b8cd73ea-2f29-41fe-af1c-a4b37465ced3 to node k8s-agentpool1-19417709-vmss000002 successfully I0513 12:36:50.674583 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.32065432 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b8cd73ea-2f29-41fe-af1c-a4b37465ced3" node="k8s-agentpool1-19417709-vmss000002" result_code="succeeded" I0513 12:36:50.674608 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} I0513 12:37:00.813147 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 12:37:00.813171 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000002","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-8e2b3efe-b76a-43d0-89d7-5e8b0aaad0ad"} ... skipping 44 lines ... 
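The one GRPC error visible in this excerpt is the E0513 entry at 12:36:00: DeleteVolume for pvc-2563af11-... is rejected because the disk is still in an attaching or detaching state. The same disk is deleted successfully a couple of minutes later in this log (12:37:40–12:37:45), once the detach has settled, so the failure is transient and a later retry succeeds. A minimal sketch of that retry pattern is below, under the assumption that the transient condition is only visible through the returned error text; it is an illustration, not the sidecar's actual backoff logic.

    package main

    import (
        "context"
        "fmt"
        "strings"
        "time"
    )

    // deleteFunc abstracts a CSI DeleteVolume call; in the e2e run this is the driver's RPC.
    type deleteFunc func(ctx context.Context, volumeID string) error

    // deleteWithRetry keeps retrying while the driver reports the disk as attaching/detaching,
    // which is how the error above eventually resolves once the detach completes.
    // Detecting the condition by error substring is an assumption made for this sketch.
    func deleteWithRetry(ctx context.Context, del deleteFunc, volumeID string, interval time.Duration) error {
        for {
            err := del(ctx, volumeID)
            if err == nil {
                return nil
            }
            if !strings.Contains(err.Error(), "attaching or detaching state") {
                return err // not the transient case, give up
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(interval):
                // disk is still mid attach/detach, try again
            }
        }
    }

    func main() {
        attempts := 0
        fake := func(ctx context.Context, id string) error {
            attempts++
            if attempts < 3 {
                return fmt.Errorf("failed to delete disk(%s) since it's in attaching or detaching state", id)
            }
            return nil
        }
        ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
        defer cancel()
        fmt.Println(deleteWithRetry(ctx, fake, "pvc-example", 10*time.Millisecond), "after", attempts, "attempts")
    }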
I0513 12:37:36.727180 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b8cd73ea-2f29-41fe-af1c-a4b37465ced3 from node k8s-agentpool1-19417709-vmss000002 successfully I0513 12:37:36.727211 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=35.907006011 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b8cd73ea-2f29-41fe-af1c-a4b37465ced3" node="k8s-agentpool1-19417709-vmss000002" result_code="succeeded" I0513 12:37:36.727223 1 utils.go:84] GRPC response: {} I0513 12:37:36.727314 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-19417709-vmss000002, refreshing the cache(vmss: k8s-agentpool1-19417709-vmss, rg: kubetest-s2gs5bqg) I0513 12:37:36.845740 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-c9eb53cb-f038-4f5b-b82f-b5afdcc1942a lun 0 to node k8s-agentpool1-19417709-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-c9eb53cb-f038-4f5b-b82f-b5afdcc1942a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c9eb53cb-f038-4f5b-b82f-b5afdcc1942a false 0})] I0513 12:37:36.845802 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-c9eb53cb-f038-4f5b-b82f-b5afdcc1942a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c9eb53cb-f038-4f5b-b82f-b5afdcc1942a false 0})]) I0513 12:37:37.069959 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-c9eb53cb-f038-4f5b-b82f-b5afdcc1942a:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c9eb53cb-f038-4f5b-b82f-b5afdcc1942a false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:37:40.536717 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 12:37:40.536753 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-2563af11-64fc-4f30-a276-f4793b5a9797"} I0513 12:37:40.536860 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-2563af11-64fc-4f30-a276-f4793b5a9797) I0513 12:37:45.856643 1 azure_managedDiskController.go:303] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-2563af11-64fc-4f30-a276-f4793b5a9797 I0513 12:37:45.856687 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-2563af11-64fc-4f30-a276-f4793b5a9797) returned with <nil> I0513 12:37:45.856729 1 
azure_metrics.go:112] "Observed Request Latency" latency_seconds=5.319842013 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-2563af11-64fc-4f30-a276-f4793b5a9797" result_code="succeeded" ... skipping 13 lines ... I0513 12:37:53.467761 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 12:37:53.467791 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000001","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-4e1fd915-c650-44d0-812b-a17b0c229ecf","csi.storage.k8s.io/pvc/name":"test.csi.azure.comckq4t","csi.storage.k8s.io/pvc/namespace":"provisioning-6993","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-4e1fd915-c650-44d0-812b-a17b0c229ecf"} I0513 12:37:53.503767 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-4e1fd915-c650-44d0-812b-a17b0c229ecf to node k8s-agentpool1-19417709-vmss000001. I0513 12:37:53.503828 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-4e1fd915-c650-44d0-812b-a17b0c229ecf to node k8s-agentpool1-19417709-vmss000001 I0513 12:37:53.503865 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-4e1fd915-c650-44d0-812b-a17b0c229ecf lun 0 to node k8s-agentpool1-19417709-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-4e1fd915-c650-44d0-812b-a17b0c229ecf:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4e1fd915-c650-44d0-812b-a17b0c229ecf false 0})] I0513 12:37:53.503900 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-4e1fd915-c650-44d0-812b-a17b0c229ecf:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4e1fd915-c650-44d0-812b-a17b0c229ecf false 0})]) I0513 12:37:53.747476 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-4e1fd915-c650-44d0-812b-a17b0c229ecf:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-4e1fd915-c650-44d0-812b-a17b0c229ecf false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:37:59.345155 1 utils.go:77] GRPC call: /csi.v1.Identity/GetPluginInfo I0513 12:37:59.345183 1 utils.go:78] GRPC request: {} I0513 12:37:59.345231 1 utils.go:84] GRPC response: {"name":"test.csi.azure.com","vendor_version":"v1.19.0-9480cc27b0ee3e0de9a15e6967f197e793523987"} I0513 
12:37:59.345587 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateSnapshot I0513 12:37:59.345607 1 utils.go:78] GRPC request: {"name":"snapshot-6107d2e1-d07a-4cd6-9a84-a147bac4b533","source_volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-c9eb53cb-f038-4f5b-b82f-b5afdcc1942a"} I0513 12:37:59.345694 1 controllerserver.go:851] begin to create snapshot(snapshot-6107d2e1-d07a-4cd6-9a84-a147bac4b533, incremental: true) under rg(kubetest-s2gs5bqg) ... skipping 32 lines ... I0513 12:38:09.413000 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 12:38:09.413033 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000000","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-609638af-ec1b-4445-8c06-0f0f2cc4b85d","csi.storage.k8s.io/pvc/name":"pvc-266dw","csi.storage.k8s.io/pvc/namespace":"provisioning-7473","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-609638af-ec1b-4445-8c06-0f0f2cc4b85d"} I0513 12:38:09.443607 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-609638af-ec1b-4445-8c06-0f0f2cc4b85d to node k8s-agentpool1-19417709-vmss000000. I0513 12:38:09.443675 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-609638af-ec1b-4445-8c06-0f0f2cc4b85d to node k8s-agentpool1-19417709-vmss000000 I0513 12:38:09.443696 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-609638af-ec1b-4445-8c06-0f0f2cc4b85d lun 0 to node k8s-agentpool1-19417709-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-609638af-ec1b-4445-8c06-0f0f2cc4b85d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-609638af-ec1b-4445-8c06-0f0f2cc4b85d false 0})] I0513 12:38:09.443732 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-609638af-ec1b-4445-8c06-0f0f2cc4b85d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-609638af-ec1b-4445-8c06-0f0f2cc4b85d false 0})]) I0513 12:38:09.669545 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-609638af-ec1b-4445-8c06-0f0f2cc4b85d:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-609638af-ec1b-4445-8c06-0f0f2cc4b85d false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:38:09.823334 1 azure_managedDiskController.go:303] azureDisk - deleted a managed disk: 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b8cd73ea-2f29-41fe-af1c-a4b37465ced3 I0513 12:38:09.823369 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b8cd73ea-2f29-41fe-af1c-a4b37465ced3) returned with <nil> I0513 12:38:09.823398 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=5.279048241 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b8cd73ea-2f29-41fe-af1c-a4b37465ced3" result_code="succeeded" I0513 12:38:09.823411 1 utils.go:84] GRPC response: {} I0513 12:38:12.224092 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 12:38:12.224120 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-c1647c56-84b0-49f9-9742-a093a0ce7b12"} ... skipping 50 lines ... I0513 12:38:34.181855 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000002","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-5544d007-a876-4111-8cdd-d05d4b2d6eea","csi.storage.k8s.io/pvc/name":"test.csi.azure.comxs4fs","csi.storage.k8s.io/pvc/namespace":"multivolume-7151","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-5544d007-a876-4111-8cdd-d05d4b2d6eea"} I0513 12:38:34.207888 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-5544d007-a876-4111-8cdd-d05d4b2d6eea to node k8s-agentpool1-19417709-vmss000002. 
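A little above, the controller logs "begin to create snapshot(snapshot-6107d2e1-..., incremental: true)": the snapshot test exercises CreateSnapshot against a source managed disk. A short sketch of the corresponding CSI call is below; the "incremental" parameter name is an assumption taken from the driver's VolumeSnapshotClass documentation, and the IDs are placeholders.

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "github.com/container-storage-interface/spec/lib/go/csi"
        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
    )

    func main() {
        conn, err := grpc.Dial("unix:///csi/csi.sock", grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
        defer cancel()

        // Placeholder IDs; the external-snapshotter fills these from the VolumeSnapshot object.
        resp, err := csi.NewControllerClient(conn).CreateSnapshot(ctx, &csi.CreateSnapshotRequest{
            SourceVolumeId: "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/disks/<disk>",
            Name:           "snapshot-example",
            // Assumed parameter name; selects incremental snapshots, as "incremental: true" above suggests.
            Parameters: map[string]string{"incremental": "true"},
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("snapshot id:", resp.GetSnapshot().GetSnapshotId(), "ready:", resp.GetSnapshot().GetReadyToUse())
    }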
I0513 12:38:34.207941 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-19417709-vmss000002, refreshing the cache(vmss: k8s-agentpool1-19417709-vmss, rg: kubetest-s2gs5bqg) I0513 12:38:34.356720 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-5544d007-a876-4111-8cdd-d05d4b2d6eea to node k8s-agentpool1-19417709-vmss000002 I0513 12:38:34.356801 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-5544d007-a876-4111-8cdd-d05d4b2d6eea lun 0 to node k8s-agentpool1-19417709-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-5544d007-a876-4111-8cdd-d05d4b2d6eea:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5544d007-a876-4111-8cdd-d05d4b2d6eea false 0})] I0513 12:38:34.356837 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-5544d007-a876-4111-8cdd-d05d4b2d6eea:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5544d007-a876-4111-8cdd-d05d4b2d6eea false 0})]) I0513 12:38:34.578552 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-5544d007-a876-4111-8cdd-d05d4b2d6eea:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-5544d007-a876-4111-8cdd-d05d4b2d6eea false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:38:40.929033 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 12:38:40.929059 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000000","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-609638af-ec1b-4445-8c06-0f0f2cc4b85d"} I0513 12:38:40.929173 1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-609638af-ec1b-4445-8c06-0f0f2cc4b85d from node k8s-agentpool1-19417709-vmss000000 I0513 12:38:40.929237 1 azure_controller_common.go:341] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-609638af-ec1b-4445-8c06-0f0f2cc4b85d from node k8s-agentpool1-19417709-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-609638af-ec1b-4445-8c06-0f0f2cc4b85d:pvc-609638af-ec1b-4445-8c06-0f0f2cc4b85d] I0513 12:38:40.929264 1 azure_controller_vmss.go:162] azureDisk - detach disk: name pvc-609638af-ec1b-4445-8c06-0f0f2cc4b85d uri /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-609638af-ec1b-4445-8c06-0f0f2cc4b85d I0513 12:38:40.929273 1 azure_controller_vmss.go:197] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000000) - detach disk 
list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-609638af-ec1b-4445-8c06-0f0f2cc4b85d:pvc-609638af-ec1b-4445-8c06-0f0f2cc4b85d]) ... skipping 41 lines ... I0513 12:39:04.907857 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 12:39:04.907890 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000001","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-17412bfb-c18c-442a-9196-c754901e5987","csi.storage.k8s.io/pvc/name":"test.csi.azure.com8gv26","csi.storage.k8s.io/pvc/namespace":"fsgroupchangepolicy-5868","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-17412bfb-c18c-442a-9196-c754901e5987"} I0513 12:39:04.939850 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-17412bfb-c18c-442a-9196-c754901e5987 to node k8s-agentpool1-19417709-vmss000001. I0513 12:39:04.939902 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-17412bfb-c18c-442a-9196-c754901e5987 to node k8s-agentpool1-19417709-vmss000001 I0513 12:39:04.939926 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-17412bfb-c18c-442a-9196-c754901e5987 lun 0 to node k8s-agentpool1-19417709-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-17412bfb-c18c-442a-9196-c754901e5987:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-17412bfb-c18c-442a-9196-c754901e5987 false 0})] I0513 12:39:04.939952 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-17412bfb-c18c-442a-9196-c754901e5987:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-17412bfb-c18c-442a-9196-c754901e5987 false 0})]) I0513 12:39:05.137737 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-17412bfb-c18c-442a-9196-c754901e5987:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-17412bfb-c18c-442a-9196-c754901e5987 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:39:10.389953 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 12:39:10.389988 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-609638af-ec1b-4445-8c06-0f0f2cc4b85d"} I0513 12:39:10.390084 1 controllerserver.go:299] deleting azure 
disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-609638af-ec1b-4445-8c06-0f0f2cc4b85d) I0513 12:39:15.275240 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-17412bfb-c18c-442a-9196-c754901e5987 attached to node k8s-agentpool1-19417709-vmss000001. I0513 12:39:15.275278 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-17412bfb-c18c-442a-9196-c754901e5987 to node k8s-agentpool1-19417709-vmss000001 successfully I0513 12:39:15.275311 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=10.335454172 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-17412bfb-c18c-442a-9196-c754901e5987" node="k8s-agentpool1-19417709-vmss000001" result_code="succeeded" ... skipping 33 lines ... I0513 12:39:34.127610 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000001","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-35003139-9b91-4f9c-9735-2fb20a9e3977","csi.storage.k8s.io/pvc/name":"test.csi.azure.combmvt6","csi.storage.k8s.io/pvc/namespace":"volume-1973","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-35003139-9b91-4f9c-9735-2fb20a9e3977"} I0513 12:39:34.163493 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-35003139-9b91-4f9c-9735-2fb20a9e3977 to node k8s-agentpool1-19417709-vmss000001. 
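Attach operations throughout this log land on small LUN numbers (lun 0, lun 1, lun 2) on each VM. A common way to pick such a slot is to take the lowest LUN not already occupied by a data disk, as sketched below; this is an illustration of the idea only, not the driver's implementation.

    package main

    import "fmt"

    // lowestFreeLun returns the smallest LUN in [0, maxLuns) that is not in use,
    // or -1 when the node is already at its data-disk limit.
    func lowestFreeLun(used []int32, maxLuns int32) int32 {
        inUse := make(map[int32]bool, len(used))
        for _, l := range used {
            inUse[l] = true
        }
        for lun := int32(0); lun < maxLuns; lun++ {
            if !inUse[lun] {
                return lun
            }
        }
        return -1
    }

    func main() {
        // Example: LUN 0 is taken, so the next attach lands on LUN 1, as seen repeatedly above.
        fmt.Println(lowestFreeLun([]int32{0}, 32))
    }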
I0513 12:39:34.163549 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-19417709-vmss000001, refreshing the cache(vmss: k8s-agentpool1-19417709-vmss, rg: kubetest-s2gs5bqg) I0513 12:39:34.254225 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-35003139-9b91-4f9c-9735-2fb20a9e3977 to node k8s-agentpool1-19417709-vmss000001 I0513 12:39:34.254288 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-35003139-9b91-4f9c-9735-2fb20a9e3977 lun 1 to node k8s-agentpool1-19417709-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-35003139-9b91-4f9c-9735-2fb20a9e3977:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-35003139-9b91-4f9c-9735-2fb20a9e3977 false 1})] I0513 12:39:34.254313 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-35003139-9b91-4f9c-9735-2fb20a9e3977:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-35003139-9b91-4f9c-9735-2fb20a9e3977 false 1})]) I0513 12:39:34.489197 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-35003139-9b91-4f9c-9735-2fb20a9e3977:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-35003139-9b91-4f9c-9735-2fb20a9e3977 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:39:34.626133 1 azure_controller_vmss.go:210] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - detach disk(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-5544d007-a876-4111-8cdd-d05d4b2d6eea:pvc-5544d007-a876-4111-8cdd-d05d4b2d6eea]) returned with <nil> I0513 12:39:34.626195 1 azure_controller_common.go:365] azureDisk - detach disk(pvc-5544d007-a876-4111-8cdd-d05d4b2d6eea, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-5544d007-a876-4111-8cdd-d05d4b2d6eea) succeeded I0513 12:39:34.626210 1 controllerserver.go:453] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-5544d007-a876-4111-8cdd-d05d4b2d6eea from node k8s-agentpool1-19417709-vmss000002 successfully I0513 12:39:34.626242 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=10.268023888 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-5544d007-a876-4111-8cdd-d05d4b2d6eea" node="k8s-agentpool1-19417709-vmss000002" result_code="succeeded" I0513 12:39:34.626255 1 utils.go:84] GRPC response: {} I0513 12:39:34.835460 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume ... skipping 30 lines ... 
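Entries like "Couldn't find VMSS VM with nodeName ..., refreshing the cache(vmss: ..., rg: ...)" above mean the instance was missing or stale in the driver's cached view of the scale set, so the cache is repopulated before the attach or detach proceeds. The sketch below shows a generic TTL cache with refresh-on-miss under that reading; it is an illustration, not the cloud provider's cache implementation, and the node name used is a placeholder.

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    // vmCache is a minimal TTL cache with refresh-on-miss.
    type vmCache struct {
        mu      sync.Mutex
        ttl     time.Duration
        fetched time.Time
        vms     map[string]string // nodeName -> instance ID (placeholder value type)
        refresh func() (map[string]string, error)
    }

    func (c *vmCache) get(nodeName string) (string, error) {
        c.mu.Lock()
        defer c.mu.Unlock()
        if id, ok := c.vms[nodeName]; ok && time.Since(c.fetched) < c.ttl {
            return id, nil
        }
        // Miss or stale entry: refresh the whole scale-set view, then retry the lookup.
        vms, err := c.refresh()
        if err != nil {
            return "", err
        }
        c.vms, c.fetched = vms, time.Now()
        if id, ok := c.vms[nodeName]; ok {
            return id, nil
        }
        return "", fmt.Errorf("node %q not found after refresh", nodeName)
    }

    func main() {
        c := &vmCache{
            ttl: time.Minute,
            refresh: func() (map[string]string, error) {
                return map[string]string{"k8s-agentpool1-example-vmss000002": "2"}, nil
            },
        }
        fmt.Println(c.get("k8s-agentpool1-example-vmss000002"))
    }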
I0513 12:40:01.714855 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 12:40:01.714887 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000000","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-17412bfb-c18c-442a-9196-c754901e5987","csi.storage.k8s.io/pvc/name":"test.csi.azure.com8gv26","csi.storage.k8s.io/pvc/namespace":"fsgroupchangepolicy-5868","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-17412bfb-c18c-442a-9196-c754901e5987"} I0513 12:40:01.760351 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-17412bfb-c18c-442a-9196-c754901e5987 to node k8s-agentpool1-19417709-vmss000000. I0513 12:40:01.760422 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-17412bfb-c18c-442a-9196-c754901e5987 to node k8s-agentpool1-19417709-vmss000000 I0513 12:40:01.760448 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-17412bfb-c18c-442a-9196-c754901e5987 lun 0 to node k8s-agentpool1-19417709-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-17412bfb-c18c-442a-9196-c754901e5987:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-17412bfb-c18c-442a-9196-c754901e5987 false 0})] I0513 12:40:01.760476 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-17412bfb-c18c-442a-9196-c754901e5987:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-17412bfb-c18c-442a-9196-c754901e5987 false 0})]) I0513 12:40:02.049013 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-17412bfb-c18c-442a-9196-c754901e5987:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-17412bfb-c18c-442a-9196-c754901e5987 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:40:02.931282 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 12:40:02.931308 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000002","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-f37b59dd-f2e5-483f-ae42-753c0cfb0df5","csi.storage.k8s.io/pvc/name":"test.csi.azure.com8l6ht","csi.storage.k8s.io/pvc/namespace":"multivolume-2690","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-f37b59dd-f2e5-483f-ae42-753c0cfb0df5"} 
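Every volume_id in these requests is the full ARM resource ID of the managed disk, so the subscription, resource group and disk name can be recovered directly from it. A small parsing sketch is below; the regular expression is this note's own (kept case-insensitive because the log shows both resourceGroups and resourcegroups spellings), not one taken from the driver.

    package main

    import (
        "fmt"
        "regexp"
    )

    // diskURIRe captures subscription ID, resource group and disk name from a
    // managed-disk resource ID such as the volume_id values in the requests above.
    var diskURIRe = regexp.MustCompile(`(?i)^/subscriptions/([^/]+)/resourcegroups/([^/]+)/providers/microsoft\.compute/disks/([^/]+)$`)

    func parseDiskURI(uri string) (sub, rg, name string, err error) {
        m := diskURIRe.FindStringSubmatch(uri)
        if m == nil {
            return "", "", "", fmt.Errorf("not a managed disk URI: %s", uri)
        }
        return m[1], m[2], m[3], nil
    }

    func main() {
        fmt.Println(parseDiskURI("/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/disks/pvc-example"))
    }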
I0513 12:40:02.958581 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-f37b59dd-f2e5-483f-ae42-753c0cfb0df5 to node k8s-agentpool1-19417709-vmss000002. I0513 12:40:02.958641 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-f37b59dd-f2e5-483f-ae42-753c0cfb0df5 to node k8s-agentpool1-19417709-vmss000002 I0513 12:40:02.958662 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-f37b59dd-f2e5-483f-ae42-753c0cfb0df5 lun 0 to node k8s-agentpool1-19417709-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-f37b59dd-f2e5-483f-ae42-753c0cfb0df5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f37b59dd-f2e5-483f-ae42-753c0cfb0df5 false 0})] I0513 12:40:02.958684 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-f37b59dd-f2e5-483f-ae42-753c0cfb0df5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f37b59dd-f2e5-483f-ae42-753c0cfb0df5 false 0})]) I0513 12:40:03.220154 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-f37b59dd-f2e5-483f-ae42-753c0cfb0df5:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-f37b59dd-f2e5-483f-ae42-753c0cfb0df5 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:40:05.253584 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 12:40:05.253613 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000001","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-35003139-9b91-4f9c-9735-2fb20a9e3977"} I0513 12:40:05.253744 1 controllerserver.go:444] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-35003139-9b91-4f9c-9735-2fb20a9e3977 from node k8s-agentpool1-19417709-vmss000001 I0513 12:40:05.253780 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-19417709-vmss000001, refreshing the cache(vmss: k8s-agentpool1-19417709-vmss, rg: kubetest-s2gs5bqg) I0513 12:40:05.313735 1 azure_controller_common.go:341] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-35003139-9b91-4f9c-9735-2fb20a9e3977 from node k8s-agentpool1-19417709-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-35003139-9b91-4f9c-9735-2fb20a9e3977:pvc-35003139-9b91-4f9c-9735-2fb20a9e3977] I0513 12:40:05.313800 1 azure_controller_vmss.go:162] azureDisk - detach disk: name pvc-35003139-9b91-4f9c-9735-2fb20a9e3977 uri 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-35003139-9b91-4f9c-9735-2fb20a9e3977 ... skipping 14 lines ... I0513 12:40:23.424443 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-f37b59dd-f2e5-483f-ae42-753c0cfb0df5 attached to node k8s-agentpool1-19417709-vmss000002. I0513 12:40:23.424486 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-f37b59dd-f2e5-483f-ae42-753c0cfb0df5 to node k8s-agentpool1-19417709-vmss000002 successfully I0513 12:40:23.424520 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=20.465929156 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-f37b59dd-f2e5-483f-ae42-753c0cfb0df5" node="k8s-agentpool1-19417709-vmss000002" result_code="succeeded" I0513 12:40:23.424544 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0513 12:40:23.424594 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-35003139-9b91-4f9c-9735-2fb20a9e3977 lun 1 to node k8s-agentpool1-19417709-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-35003139-9b91-4f9c-9735-2fb20a9e3977:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-35003139-9b91-4f9c-9735-2fb20a9e3977 false 1})] I0513 12:40:23.424657 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-35003139-9b91-4f9c-9735-2fb20a9e3977:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-35003139-9b91-4f9c-9735-2fb20a9e3977 false 1})]) I0513 12:40:23.686004 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-35003139-9b91-4f9c-9735-2fb20a9e3977:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-35003139-9b91-4f9c-9735-2fb20a9e3977 false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:40:33.827835 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-35003139-9b91-4f9c-9735-2fb20a9e3977 attached to node k8s-agentpool1-19417709-vmss000002. 
I0513 12:40:33.827872 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-35003139-9b91-4f9c-9735-2fb20a9e3977 to node k8s-agentpool1-19417709-vmss000002 successfully I0513 12:40:33.827908 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=12.128834248 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-35003139-9b91-4f9c-9735-2fb20a9e3977" node="k8s-agentpool1-19417709-vmss000002" result_code="succeeded" I0513 12:40:33.827922 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} I0513 12:40:33.979017 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume I0513 12:40:33.979046 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000000","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-17412bfb-c18c-442a-9196-c754901e5987"} ... skipping 46 lines ... I0513 12:41:21.625856 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 12:41:21.625884 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000001","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-dced4abb-f48c-4338-99ca-45e87ace90b9","csi.storage.k8s.io/pvc/name":"test.csi.azure.com6gfdc","csi.storage.k8s.io/pvc/namespace":"volumemode-2998","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-dced4abb-f48c-4338-99ca-45e87ace90b9"} I0513 12:41:21.680549 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-dced4abb-f48c-4338-99ca-45e87ace90b9 to node k8s-agentpool1-19417709-vmss000001. 
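Each successful publish above ends with an "Observed Request Latency" record (20.5s and 12.1s for the two attaches). A hedged sketch of how such a measurement can be taken with client_golang follows; the metric name is assumed, the label set is only the subset visible in the log, and the real azure_metrics.go implementation may differ.

package main

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// apiLatency mirrors the latency_seconds / request / result_code fields printed in the log.
var apiLatency = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name: "cloudprovider_azure_op_duration_seconds", // assumed name, for illustration
		Help: "Latency of Azure disk controller operations in seconds.",
	},
	[]string{"request", "resource_group", "subscription_id", "source", "result_code"},
)

func init() {
	prometheus.MustRegister(apiLatency)
}

// observeOperation times a controller operation and records the result.
func observeOperation(request, resourceGroup, subscriptionID, source string, op func() error) error {
	start := time.Now()
	err := op()
	result := "succeeded"
	if err != nil {
		result = "failed"
	}
	apiLatency.WithLabelValues(request, resourceGroup, subscriptionID, source, result).
		Observe(time.Since(start).Seconds())
	return err
}

func main() {
	_ = observeOperation("azuredisk_csi_driver_controller_publish_volume",
		"kubetest-s2gs5bqg", "0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e", "test.csi.azure.com",
		func() error { time.Sleep(10 * time.Millisecond); return nil })
}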
I0513 12:41:21.680608 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-dced4abb-f48c-4338-99ca-45e87ace90b9 to node k8s-agentpool1-19417709-vmss000001 I0513 12:41:21.680632 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-dced4abb-f48c-4338-99ca-45e87ace90b9 lun 0 to node k8s-agentpool1-19417709-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-dced4abb-f48c-4338-99ca-45e87ace90b9:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-dced4abb-f48c-4338-99ca-45e87ace90b9 false 0})] I0513 12:41:21.680655 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-dced4abb-f48c-4338-99ca-45e87ace90b9:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-dced4abb-f48c-4338-99ca-45e87ace90b9 false 0})]) I0513 12:41:22.469936 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-dced4abb-f48c-4338-99ca-45e87ace90b9:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-dced4abb-f48c-4338-99ca-45e87ace90b9 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:41:23.840226 1 azure_managedDiskController.go:303] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-35003139-9b91-4f9c-9735-2fb20a9e3977 I0513 12:41:23.840257 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-35003139-9b91-4f9c-9735-2fb20a9e3977) returned with <nil> I0513 12:41:23.840284 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=5.24923115 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-35003139-9b91-4f9c-9735-2fb20a9e3977" result_code="succeeded" I0513 12:41:23.840299 1 utils.go:84] GRPC response: {} I0513 12:41:30.101426 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume I0513 12:41:30.101455 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":5368709120},"name":"pvc-0828b3b1-a026-4dd8-b6ea-a6eef3184080","parameters":{"csi.storage.k8s.io/pv/name":"pvc-0828b3b1-a026-4dd8-b6ea-a6eef3184080","csi.storage.k8s.io/pvc/name":"test.csi.azure.comxr658","csi.storage.k8s.io/pvc/namespace":"multivolume-5534"},"volume_capabilities":[{"AccessType":{"Block":{}},"access_mode":{"mode":7}}]} ... skipping 29 lines ... 
I0513 12:41:34.929309 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 12:41:34.929335 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000002","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-0828b3b1-a026-4dd8-b6ea-a6eef3184080","csi.storage.k8s.io/pvc/name":"test.csi.azure.comxr658","csi.storage.k8s.io/pvc/namespace":"multivolume-5534","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-0828b3b1-a026-4dd8-b6ea-a6eef3184080"} I0513 12:41:34.954789 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-0828b3b1-a026-4dd8-b6ea-a6eef3184080 to node k8s-agentpool1-19417709-vmss000002. I0513 12:41:34.954844 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-0828b3b1-a026-4dd8-b6ea-a6eef3184080 to node k8s-agentpool1-19417709-vmss000002 I0513 12:41:34.954867 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-0828b3b1-a026-4dd8-b6ea-a6eef3184080 lun 0 to node k8s-agentpool1-19417709-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-0828b3b1-a026-4dd8-b6ea-a6eef3184080:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0828b3b1-a026-4dd8-b6ea-a6eef3184080 false 0})] I0513 12:41:34.954891 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-0828b3b1-a026-4dd8-b6ea-a6eef3184080:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0828b3b1-a026-4dd8-b6ea-a6eef3184080 false 0})]) I0513 12:41:35.145169 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-0828b3b1-a026-4dd8-b6ea-a6eef3184080:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-0828b3b1-a026-4dd8-b6ea-a6eef3184080 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:41:57.525791 1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume I0513 12:41:57.525814 1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-f37b59dd-f2e5-483f-ae42-753c0cfb0df5"} I0513 12:41:57.525883 1 controllerserver.go:299] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-f37b59dd-f2e5-483f-ae42-753c0cfb0df5) I0513 12:41:57.849384 1 azure_managedDiskController.go:303] azureDisk - deleted a managed disk: 
/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-f37b59dd-f2e5-483f-ae42-753c0cfb0df5 I0513 12:41:57.849426 1 controllerserver.go:301] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-f37b59dd-f2e5-483f-ae42-753c0cfb0df5) returned with <nil> I0513 12:41:57.849467 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=0.323557329 request="azuredisk_csi_driver_controller_delete_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-f37b59dd-f2e5-483f-ae42-753c0cfb0df5" result_code="succeeded" ... skipping 41 lines ... I0513 12:42:19.221400 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 12:42:19.221427 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000002","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-e60532cc-4971-4a63-83d0-e06e4de9db1c","csi.storage.k8s.io/pvc/name":"test.csi.azure.com2hc8x","csi.storage.k8s.io/pvc/namespace":"multivolume-1811","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-e60532cc-4971-4a63-83d0-e06e4de9db1c"} I0513 12:42:19.277096 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-e60532cc-4971-4a63-83d0-e06e4de9db1c to node k8s-agentpool1-19417709-vmss000002. 
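Every volume_id in these requests is the full ARM resource ID of the managed disk. A small standard-library sketch of pulling the subscription, resource group and disk name back out of such an ID; the regular expression is an assumption rather than the driver's own parser (the case-insensitive match also covers the lower-cased form used as diskMap keys above).

package main

import (
	"fmt"
	"regexp"
)

var diskIDRE = regexp.MustCompile(
	`(?i)^/subscriptions/([^/]+)/resourcegroups/([^/]+)/providers/microsoft\.compute/disks/([^/]+)$`)

// parseDiskID splits an Azure managed-disk resource ID into its components.
func parseDiskID(id string) (subscription, resourceGroup, diskName string, err error) {
	m := diskIDRE.FindStringSubmatch(id)
	if m == nil {
		return "", "", "", fmt.Errorf("not a managed disk resource ID: %s", id)
	}
	return m[1], m[2], m[3], nil
}

func main() {
	sub, rg, disk, err := parseDiskID(
		"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-f37b59dd-f2e5-483f-ae42-753c0cfb0df5")
	if err != nil {
		panic(err)
	}
	fmt.Println(sub, rg, disk)
}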
I0513 12:42:19.277152 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-e60532cc-4971-4a63-83d0-e06e4de9db1c to node k8s-agentpool1-19417709-vmss000002 I0513 12:42:19.277178 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-e60532cc-4971-4a63-83d0-e06e4de9db1c lun 1 to node k8s-agentpool1-19417709-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-e60532cc-4971-4a63-83d0-e06e4de9db1c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e60532cc-4971-4a63-83d0-e06e4de9db1c false 1})] I0513 12:42:19.277203 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-e60532cc-4971-4a63-83d0-e06e4de9db1c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e60532cc-4971-4a63-83d0-e06e4de9db1c false 1})]) I0513 12:42:19.454712 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-e60532cc-4971-4a63-83d0-e06e4de9db1c:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e60532cc-4971-4a63-83d0-e06e4de9db1c false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:42:24.357605 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 12:42:24.357630 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000002","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-b107b2bf-5fb7-4059-99bd-87021c2cc891","csi.storage.k8s.io/pvc/name":"test.csi.azure.comxr658-cloned","csi.storage.k8s.io/pvc/namespace":"multivolume-5534","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b107b2bf-5fb7-4059-99bd-87021c2cc891"} I0513 12:42:24.407016 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b107b2bf-5fb7-4059-99bd-87021c2cc891 to node k8s-agentpool1-19417709-vmss000002. I0513 12:42:24.407069 1 azure_vmss.go:204] Couldn't find VMSS VM with nodeName k8s-agentpool1-19417709-vmss000002, refreshing the cache(vmss: k8s-agentpool1-19417709-vmss, rg: kubetest-s2gs5bqg) I0513 12:42:24.513319 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b107b2bf-5fb7-4059-99bd-87021c2cc891 to node k8s-agentpool1-19417709-vmss000002 I0513 12:42:29.580060 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-e60532cc-4971-4a63-83d0-e06e4de9db1c attached to node k8s-agentpool1-19417709-vmss000002. 
I0513 12:42:29.580112 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-e60532cc-4971-4a63-83d0-e06e4de9db1c to node k8s-agentpool1-19417709-vmss000002 successfully I0513 12:42:29.580149 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=10.303038834 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-e60532cc-4971-4a63-83d0-e06e4de9db1c" node="k8s-agentpool1-19417709-vmss000002" result_code="succeeded" I0513 12:42:29.580166 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} I0513 12:42:29.580189 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b107b2bf-5fb7-4059-99bd-87021c2cc891 lun 2 to node k8s-agentpool1-19417709-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-b107b2bf-5fb7-4059-99bd-87021c2cc891:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-b107b2bf-5fb7-4059-99bd-87021c2cc891 false 2})] I0513 12:42:29.580310 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-b107b2bf-5fb7-4059-99bd-87021c2cc891:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-b107b2bf-5fb7-4059-99bd-87021c2cc891 false 2})]) I0513 12:42:29.799530 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-b107b2bf-5fb7-4059-99bd-87021c2cc891:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-b107b2bf-5fb7-4059-99bd-87021c2cc891 false 2})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:42:44.948817 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b107b2bf-5fb7-4059-99bd-87021c2cc891 attached to node k8s-agentpool1-19417709-vmss000002. 
I0513 12:42:44.948865 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b107b2bf-5fb7-4059-99bd-87021c2cc891 to node k8s-agentpool1-19417709-vmss000002 successfully I0513 12:42:44.948903 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=20.541871098 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b107b2bf-5fb7-4059-99bd-87021c2cc891" node="k8s-agentpool1-19417709-vmss000002" result_code="succeeded" I0513 12:42:44.948918 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"2"}} I0513 12:42:44.956729 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 12:42:44.956766 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000002","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-b107b2bf-5fb7-4059-99bd-87021c2cc891","csi.storage.k8s.io/pvc/name":"test.csi.azure.comxr658-cloned","csi.storage.k8s.io/pvc/namespace":"multivolume-5534","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-b107b2bf-5fb7-4059-99bd-87021c2cc891"} ... skipping 80 lines ... I0513 12:44:15.669908 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 12:44:15.669939 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000001","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-306b25bc-2ba3-4bef-8b5d-4b2e363739dc","csi.storage.k8s.io/pvc/name":"pvc-azuredisk","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-306b25bc-2ba3-4bef-8b5d-4b2e363739dc"} I0513 12:44:15.706861 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-306b25bc-2ba3-4bef-8b5d-4b2e363739dc to node k8s-agentpool1-19417709-vmss000001. 
I0513 12:44:15.706941 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-306b25bc-2ba3-4bef-8b5d-4b2e363739dc to node k8s-agentpool1-19417709-vmss000001 I0513 12:44:15.706973 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-306b25bc-2ba3-4bef-8b5d-4b2e363739dc lun 0 to node k8s-agentpool1-19417709-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-306b25bc-2ba3-4bef-8b5d-4b2e363739dc:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-306b25bc-2ba3-4bef-8b5d-4b2e363739dc false 0})] I0513 12:44:15.707006 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-306b25bc-2ba3-4bef-8b5d-4b2e363739dc:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-306b25bc-2ba3-4bef-8b5d-4b2e363739dc false 0})]) I0513 12:44:15.934623 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-306b25bc-2ba3-4bef-8b5d-4b2e363739dc:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-306b25bc-2ba3-4bef-8b5d-4b2e363739dc false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:44:26.053521 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-306b25bc-2ba3-4bef-8b5d-4b2e363739dc attached to node k8s-agentpool1-19417709-vmss000001. 
I0513 12:44:26.053558 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-306b25bc-2ba3-4bef-8b5d-4b2e363739dc to node k8s-agentpool1-19417709-vmss000001 successfully I0513 12:44:26.053591 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=10.346717376 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-306b25bc-2ba3-4bef-8b5d-4b2e363739dc" node="k8s-agentpool1-19417709-vmss000001" result_code="succeeded" I0513 12:44:26.053604 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0513 12:44:32.696989 1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume I0513 12:44:32.697018 1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.test.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":10737418240},"name":"pvc-a82fe0dc-3f57-484d-9a21-a4cbc0f7ab00","parameters":{"csi.storage.k8s.io/pv/name":"pvc-a82fe0dc-3f57-484d-9a21-a4cbc0f7ab00","csi.storage.k8s.io/pvc/name":"persistent-storage-statefulset-azuredisk-0","csi.storage.k8s.io/pvc/namespace":"default","skuName":"StandardSSD_LRS"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]} ... skipping 6 lines ... I0513 12:44:35.816494 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 12:44:35.816525 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000002","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-a82fe0dc-3f57-484d-9a21-a4cbc0f7ab00","csi.storage.k8s.io/pvc/name":"persistent-storage-statefulset-azuredisk-0","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-a82fe0dc-3f57-484d-9a21-a4cbc0f7ab00"} I0513 12:44:35.845029 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-a82fe0dc-3f57-484d-9a21-a4cbc0f7ab00 to node k8s-agentpool1-19417709-vmss000002. 
I0513 12:44:35.845093 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-a82fe0dc-3f57-484d-9a21-a4cbc0f7ab00 to node k8s-agentpool1-19417709-vmss000002 I0513 12:44:35.845128 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-a82fe0dc-3f57-484d-9a21-a4cbc0f7ab00 lun 0 to node k8s-agentpool1-19417709-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-a82fe0dc-3f57-484d-9a21-a4cbc0f7ab00:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a82fe0dc-3f57-484d-9a21-a4cbc0f7ab00 false 0})] I0513 12:44:35.845170 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-a82fe0dc-3f57-484d-9a21-a4cbc0f7ab00:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a82fe0dc-3f57-484d-9a21-a4cbc0f7ab00 false 0})]) I0513 12:44:36.166232 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-a82fe0dc-3f57-484d-9a21-a4cbc0f7ab00:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a82fe0dc-3f57-484d-9a21-a4cbc0f7ab00 false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:44:46.283570 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-a82fe0dc-3f57-484d-9a21-a4cbc0f7ab00 attached to node k8s-agentpool1-19417709-vmss000002. 
I0513 12:44:46.283609 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-a82fe0dc-3f57-484d-9a21-a4cbc0f7ab00 to node k8s-agentpool1-19417709-vmss000002 successfully I0513 12:44:46.283641 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=10.438602372 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-a82fe0dc-3f57-484d-9a21-a4cbc0f7ab00" node="k8s-agentpool1-19417709-vmss000002" result_code="succeeded" I0513 12:44:46.283653 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0513 12:44:46.299103 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 12:44:46.299134 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000002","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-a82fe0dc-3f57-484d-9a21-a4cbc0f7ab00","csi.storage.k8s.io/pvc/name":"persistent-storage-statefulset-azuredisk-0","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-a82fe0dc-3f57-484d-9a21-a4cbc0f7ab00"} ... skipping 14 lines ... I0513 12:44:58.575575 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 12:44:58.575608 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000000","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-bde82a81-36d0-4bba-9e75-3f1e3346c5cf","csi.storage.k8s.io/pvc/name":"persistent-storage-statefulset-azuredisk-nonroot-0","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-bde82a81-36d0-4bba-9e75-3f1e3346c5cf"} I0513 12:44:58.599667 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-bde82a81-36d0-4bba-9e75-3f1e3346c5cf to node k8s-agentpool1-19417709-vmss000000. 
I0513 12:44:58.599723 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-bde82a81-36d0-4bba-9e75-3f1e3346c5cf to node k8s-agentpool1-19417709-vmss000000 I0513 12:44:58.599749 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-bde82a81-36d0-4bba-9e75-3f1e3346c5cf lun 0 to node k8s-agentpool1-19417709-vmss000000, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-bde82a81-36d0-4bba-9e75-3f1e3346c5cf:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-bde82a81-36d0-4bba-9e75-3f1e3346c5cf false 0})] I0513 12:44:58.599776 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-bde82a81-36d0-4bba-9e75-3f1e3346c5cf:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-bde82a81-36d0-4bba-9e75-3f1e3346c5cf false 0})]) I0513 12:44:58.834435 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-bde82a81-36d0-4bba-9e75-3f1e3346c5cf:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-bde82a81-36d0-4bba-9e75-3f1e3346c5cf false 0})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:45:08.952917 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-bde82a81-36d0-4bba-9e75-3f1e3346c5cf attached to node k8s-agentpool1-19417709-vmss000000. 
I0513 12:45:08.952961 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-bde82a81-36d0-4bba-9e75-3f1e3346c5cf to node k8s-agentpool1-19417709-vmss000000 successfully I0513 12:45:08.952998 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=10.353317969999999 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-bde82a81-36d0-4bba-9e75-3f1e3346c5cf" node="k8s-agentpool1-19417709-vmss000000" result_code="succeeded" I0513 12:45:08.953015 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"0"}} I0513 12:45:08.967902 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 12:45:08.967928 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000000","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-bde82a81-36d0-4bba-9e75-3f1e3346c5cf","csi.storage.k8s.io/pvc/name":"persistent-storage-statefulset-azuredisk-nonroot-0","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-bde82a81-36d0-4bba-9e75-3f1e3346c5cf"} ... skipping 14 lines ... I0513 12:45:22.647903 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 12:45:22.647934 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000002","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-c85a1454-ce39-49ef-b99e-6dc7e22a9a4e","csi.storage.k8s.io/pvc/name":"nginx-azuredisk-ephemeral-azuredisk01","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-c85a1454-ce39-49ef-b99e-6dc7e22a9a4e"} I0513 12:45:22.683914 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-c85a1454-ce39-49ef-b99e-6dc7e22a9a4e to node k8s-agentpool1-19417709-vmss000002. 
I0513 12:45:22.683982 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-c85a1454-ce39-49ef-b99e-6dc7e22a9a4e to node k8s-agentpool1-19417709-vmss000002 I0513 12:45:22.684008 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-c85a1454-ce39-49ef-b99e-6dc7e22a9a4e lun 1 to node k8s-agentpool1-19417709-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-c85a1454-ce39-49ef-b99e-6dc7e22a9a4e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c85a1454-ce39-49ef-b99e-6dc7e22a9a4e false 1})] I0513 12:45:22.684043 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-c85a1454-ce39-49ef-b99e-6dc7e22a9a4e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c85a1454-ce39-49ef-b99e-6dc7e22a9a4e false 1})]) I0513 12:45:22.974347 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-c85a1454-ce39-49ef-b99e-6dc7e22a9a4e:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-c85a1454-ce39-49ef-b99e-6dc7e22a9a4e false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:45:38.115639 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-c85a1454-ce39-49ef-b99e-6dc7e22a9a4e attached to node k8s-agentpool1-19417709-vmss000002. 
I0513 12:45:38.115678 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-c85a1454-ce39-49ef-b99e-6dc7e22a9a4e to node k8s-agentpool1-19417709-vmss000002 successfully I0513 12:45:38.115712 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.431786803 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-c85a1454-ce39-49ef-b99e-6dc7e22a9a4e" node="k8s-agentpool1-19417709-vmss000002" result_code="succeeded" I0513 12:45:38.115724 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} I0513 12:45:38.124326 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 12:45:38.124356 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000002","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-c85a1454-ce39-49ef-b99e-6dc7e22a9a4e","csi.storage.k8s.io/pvc/name":"nginx-azuredisk-ephemeral-azuredisk01","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-c85a1454-ce39-49ef-b99e-6dc7e22a9a4e"} ... skipping 22 lines ... I0513 12:45:47.120166 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 12:45:47.120201 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000001","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-a64a3053-d392-4351-80af-45d7deda3dfb","csi.storage.k8s.io/pvc/name":"daemonset-azuredisk-ephemeral-lwsd5-azuredisk","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-a64a3053-d392-4351-80af-45d7deda3dfb"} I0513 12:45:47.157888 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-a64a3053-d392-4351-80af-45d7deda3dfb to node k8s-agentpool1-19417709-vmss000001. 
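When reading a controller log like this one it can help to pull the attach latencies out of the "Observed Request Latency" lines and summarise them. A standard-library sketch that scans a saved copy of the log (the file path is hypothetical) for controller_publish_volume observations:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strconv"
)

var latencyRE = regexp.MustCompile(
	`latency_seconds=([0-9.]+) request="azuredisk_csi_driver_controller_publish_volume"`)

func main() {
	f, err := os.Open("csi-azuredisk-controller.log") // hypothetical dump of the log above
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var count int
	var sum, max float64
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // the log lines here are very long
	for sc.Scan() {
		if m := latencyRE.FindStringSubmatch(sc.Text()); m != nil {
			v, _ := strconv.ParseFloat(m[1], 64)
			count++
			sum += v
			if v > max {
				max = v
			}
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}
	if count > 0 {
		fmt.Printf("publish ops: %d, mean %.1fs, max %.1fs\n", count, sum/float64(count), max)
	}
}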
I0513 12:45:47.157947 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-a64a3053-d392-4351-80af-45d7deda3dfb to node k8s-agentpool1-19417709-vmss000001 I0513 12:45:47.157972 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-a64a3053-d392-4351-80af-45d7deda3dfb lun 1 to node k8s-agentpool1-19417709-vmss000001, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-a64a3053-d392-4351-80af-45d7deda3dfb:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a64a3053-d392-4351-80af-45d7deda3dfb false 1})] I0513 12:45:47.157998 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-a64a3053-d392-4351-80af-45d7deda3dfb:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a64a3053-d392-4351-80af-45d7deda3dfb false 1})]) I0513 12:45:47.348110 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000001) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-a64a3053-d392-4351-80af-45d7deda3dfb:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-a64a3053-d392-4351-80af-45d7deda3dfb false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:45:47.965750 1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-e38fd4f7-ae6b-4f0b-b2c1-e33b984a7c3b StorageAccountType:StandardSSD_LRS Size:10 I0513 12:45:47.965807 1 controllerserver.go:258] create azure disk(pvc-e38fd4f7-ae6b-4f0b-b2c1-e33b984a7c3b) account type(StandardSSD_LRS) rg(kubetest-s2gs5bqg) location(westeurope) size(10) tags(map[kubernetes.io-created-for-pv-name:pvc-e38fd4f7-ae6b-4f0b-b2c1-e33b984a7c3b kubernetes.io-created-for-pvc-name:daemonset-azuredisk-ephemeral-npsp9-azuredisk kubernetes.io-created-for-pvc-namespace:default]) successfully I0513 12:45:47.965846 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=2.359337789 request="azuredisk_csi_driver_controller_create_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-e38fd4f7-ae6b-4f0b-b2c1-e33b984a7c3b" result_code="succeeded" I0513 12:45:47.965858 1 utils.go:84] GRPC response: {"volume":{"accessible_topology":[{"segments":{"topology.test.csi.azure.com/zone":""}}],"capacity_bytes":10737418240,"content_source":{"Type":null},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-e38fd4f7-ae6b-4f0b-b2c1-e33b984a7c3b","csi.storage.k8s.io/pvc/name":"daemonset-azuredisk-ephemeral-npsp9-azuredisk","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-e38fd4f7-ae6b-4f0b-b2c1-e33b984a7c3b"}} I0513 12:45:48.051886 1 azure_managedDiskController.go:266] azureDisk - created new MD Name:pvc-bb9f6f90-a1b5-4c9f-a32a-870176d25a72 
StorageAccountType:StandardSSD_LRS Size:10 I0513 12:45:48.051936 1 controllerserver.go:258] create azure disk(pvc-bb9f6f90-a1b5-4c9f-a32a-870176d25a72) account type(StandardSSD_LRS) rg(kubetest-s2gs5bqg) location(westeurope) size(10) tags(map[kubernetes.io-created-for-pv-name:pvc-bb9f6f90-a1b5-4c9f-a32a-870176d25a72 kubernetes.io-created-for-pvc-name:daemonset-azuredisk-ephemeral-ggc7h-azuredisk kubernetes.io-created-for-pvc-namespace:default]) successfully ... skipping 8 lines ... I0513 12:45:48.729034 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 12:45:48.729064 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000002","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-bb9f6f90-a1b5-4c9f-a32a-870176d25a72","csi.storage.k8s.io/pvc/name":"daemonset-azuredisk-ephemeral-ggc7h-azuredisk","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-bb9f6f90-a1b5-4c9f-a32a-870176d25a72"} I0513 12:45:48.787843 1 controllerserver.go:355] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-bb9f6f90-a1b5-4c9f-a32a-870176d25a72 to node k8s-agentpool1-19417709-vmss000002. I0513 12:45:48.787893 1 controllerserver.go:381] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-bb9f6f90-a1b5-4c9f-a32a-870176d25a72 to node k8s-agentpool1-19417709-vmss000002 I0513 12:45:48.787913 1 azure_controller_common.go:235] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-bb9f6f90-a1b5-4c9f-a32a-870176d25a72 lun 2 to node k8s-agentpool1-19417709-vmss000002, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-bb9f6f90-a1b5-4c9f-a32a-870176d25a72:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-bb9f6f90-a1b5-4c9f-a32a-870176d25a72 false 2})] I0513 12:45:48.787960 1 azure_controller_vmss.go:109] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-bb9f6f90-a1b5-4c9f-a32a-870176d25a72:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-bb9f6f90-a1b5-4c9f-a32a-870176d25a72 false 2})]) I0513 12:45:48.872570 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000000) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-e38fd4f7-ae6b-4f0b-b2c1-e33b984a7c3b:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-e38fd4f7-ae6b-4f0b-b2c1-e33b984a7c3b false 1})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:45:48.978148 1 azure_controller_vmss.go:121] azureDisk - update(kubetest-s2gs5bqg): vm(k8s-agentpool1-19417709-vmss000002) - attach disk 
list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/kubetest-s2gs5bqg/providers/microsoft.compute/disks/pvc-bb9f6f90-a1b5-4c9f-a32a-870176d25a72:%!s(*provider.AttachDiskOptions=&{ReadOnly pvc-bb9f6f90-a1b5-4c9f-a32a-870176d25a72 false 2})], %!s(*retry.Error=<nil>)) returned with %!v(MISSING) I0513 12:46:02.478473 1 controllerserver.go:386] Attach operation successful: volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-a64a3053-d392-4351-80af-45d7deda3dfb attached to node k8s-agentpool1-19417709-vmss000001. I0513 12:46:02.478506 1 controllerserver.go:406] attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-a64a3053-d392-4351-80af-45d7deda3dfb to node k8s-agentpool1-19417709-vmss000001 successfully I0513 12:46:02.478538 1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.320636675 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-s2gs5bqg" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="test.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-a64a3053-d392-4351-80af-45d7deda3dfb" node="k8s-agentpool1-19417709-vmss000001" result_code="succeeded" I0513 12:46:02.478551 1 utils.go:84] GRPC response: {"publish_context":{"LUN":"1"}} I0513 12:46:02.485504 1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume I0513 12:46:02.485529 1 utils.go:78] GRPC request: {"node_id":"k8s-agentpool1-19417709-vmss000001","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-a64a3053-d392-4351-80af-45d7deda3dfb","csi.storage.k8s.io/pvc/name":"daemonset-azuredisk-ephemeral-lwsd5-azuredisk","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1652442660503-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-s2gs5bqg/providers/Microsoft.Compute/disks/pvc-a64a3053-d392-4351-80af-45d7deda3dfb"} ... skipping 27 lines ... 
Platform: linux/amd64 Topology Key: topology.test.csi.azure.com/zone Streaming logs below: I0513 11:50:51.409680 1 azuredisk.go:171] driver userAgent: test.csi.azure.com/v1.19.0-9480cc27b0ee3e0de9a15e6967f197e793523987 gc/go1.18.1 (amd64-linux) e2e-test I0513 11:50:51.410071 1 azure_disk_utils.go:159] reading cloud config from secret kube-system/azure-cloud-provider W0513 11:50:51.434147 1 azure_disk_utils.go:166] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0513 11:50:51.434174 1 azure_disk_utils.go:171] could not read cloud config from secret kube-system/azure-cloud-provider I0513 11:50:51.434184 1 azure_disk_utils.go:181] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0513 11:50:51.434209 1 azure_disk_utils.go:189] read cloud config from file: /etc/kubernetes/azure.json successfully I0513 11:50:51.434989 1 azure_auth.go:245] Using AzurePublicCloud environment I0513 11:50:51.435005 1 azure_auth.go:96] azure: using managed identity extension to retrieve access token I0513 11:50:51.435009 1 azure_auth.go:102] azure: using User Assigned MSI ID to retrieve access token I0513 11:50:51.435037 1 azure_auth.go:113] azure: User Assigned MSI ID is client ID. Resource ID parsing error: %+vparsing failed for acb72a6f-de77-4cc8-9b84-00401d3cb401. Invalid resource Id format I0513 11:50:51.435073 1 azure.go:763] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 I0513 11:50:51.435108 1 azure_interfaceclient.go:70] Azure InterfacesClient (read ops) using rate limit config: QPS=6, bucket=20 I0513 11:50:51.435113 1 azure_interfaceclient.go:73] Azure InterfacesClient (write ops) using rate limit config: QPS=100, bucket=1000 I0513 11:50:51.435124 1 azure_vmsizeclient.go:68] Azure VirtualMachineSizesClient (read ops) using rate limit config: QPS=6, bucket=20 I0513 11:50:51.435128 1 azure_vmsizeclient.go:71] Azure VirtualMachineSizesClient (write ops) using rate limit config: QPS=100, bucket=1000 I0513 11:50:51.435144 1 azure_storageaccountclient.go:69] Azure StorageAccountClient (read ops) using rate limit config: QPS=6, bucket=20 ... skipping 63 lines ... 
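The node plugin start-up block above shows the cloud-config lookup order: try the kube-system/azure-cloud-provider secret first, then fall back to the file named by AZURE_CREDENTIAL_FILE (default /etc/kubernetes/azure.json), which is what happens in this run because the secret does not exist. A hedged client-go sketch of that order; the secret data key ("cloud-config") is an assumption.

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// loadCloudConfig prefers the in-cluster secret and falls back to the credential file.
func loadCloudConfig(ctx context.Context) ([]byte, error) {
	if cfg, err := rest.InClusterConfig(); err == nil {
		if cs, err := kubernetes.NewForConfig(cfg); err == nil {
			sec, err := cs.CoreV1().Secrets("kube-system").Get(ctx, "azure-cloud-provider", metav1.GetOptions{})
			if err == nil {
				if data, ok := sec.Data["cloud-config"]; ok { // key name assumed
					return data, nil
				}
			}
			// Secret missing (as in this run): fall through to the file.
		}
	}
	path := os.Getenv("AZURE_CREDENTIAL_FILE")
	if path == "" {
		path = "/etc/kubernetes/azure.json"
	}
	return os.ReadFile(path)
}

func main() {
	data, err := loadCloudConfig(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("read %d bytes of cloud config\n", len(data))
}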
Platform: linux/amd64 Topology Key: topology.test.csi.azure.com/zone Streaming logs below: I0513 11:50:54.923233 1 azuredisk.go:171] driver userAgent: test.csi.azure.com/v1.19.0-9480cc27b0ee3e0de9a15e6967f197e793523987 gc/go1.18.1 (amd64-linux) e2e-test I0513 11:50:54.923657 1 azure_disk_utils.go:159] reading cloud config from secret kube-system/azure-cloud-provider W0513 11:50:54.940641 1 azure_disk_utils.go:166] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0513 11:50:54.940664 1 azure_disk_utils.go:171] could not read cloud config from secret kube-system/azure-cloud-provider I0513 11:50:54.940671 1 azure_disk_utils.go:181] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0513 11:50:54.940702 1 azure_disk_utils.go:189] read cloud config from file: /etc/kubernetes/azure.json successfully I0513 11:50:54.941297 1 azure_auth.go:245] Using AzurePublicCloud environment I0513 11:50:54.941324 1 azure_auth.go:96] azure: using managed identity extension to retrieve access token I0513 11:50:54.941328 1 azure_auth.go:102] azure: using User Assigned MSI ID to retrieve access token I0513 11:50:54.941363 1 azure_auth.go:113] azure: User Assigned MSI ID is client ID. Resource ID parsing error: %+vparsing failed for acb72a6f-de77-4cc8-9b84-00401d3cb401. Invalid resource Id format I0513 11:50:54.941405 1 azure.go:763] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 I0513 11:50:54.941466 1 azure_interfaceclient.go:70] Azure InterfacesClient (read ops) using rate limit config: QPS=6, bucket=20 I0513 11:50:54.941480 1 azure_interfaceclient.go:73] Azure InterfacesClient (write ops) using rate limit config: QPS=100, bucket=1000 I0513 11:50:54.941496 1 azure_vmsizeclient.go:68] Azure VirtualMachineSizesClient (read ops) using rate limit config: QPS=6, bucket=20 I0513 11:50:54.941507 1 azure_vmsizeclient.go:71] Azure VirtualMachineSizesClient (write ops) using rate limit config: QPS=100, bucket=1000 I0513 11:50:54.941521 1 azure_storageaccountclient.go:69] Azure StorageAccountClient (read ops) using rate limit config: QPS=6, bucket=20 ... skipping 3425 lines ... 
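The "(read ops) using rate limit config: QPS=6, bucket=20" lines describe a token-bucket limiter placed in front of each Azure client. A minimal sketch of the same idea with golang.org/x/time/rate (the real clients use the cloud provider's own limiter wrapper; the numbers below are the ones from the log):

package main

import (
	"context"
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	// 6 requests per second sustained, bursts of up to 20 tokens: the "read ops" config above.
	readLimiter := rate.NewLimiter(rate.Limit(6), 20)

	ctx := context.Background()
	for i := 0; i < 3; i++ {
		// Wait blocks until a token is available, smoothing calls to the ARM API.
		if err := readLimiter.Wait(ctx); err != nil {
			panic(err)
		}
		fmt.Println("issue Azure read call", i)
	}
}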
Platform: linux/amd64 Topology Key: topology.test.csi.azure.com/zone Streaming logs below: I0513 11:50:53.867042 1 azuredisk.go:171] driver userAgent: test.csi.azure.com/v1.19.0-9480cc27b0ee3e0de9a15e6967f197e793523987 gc/go1.18.1 (amd64-linux) e2e-test I0513 11:50:53.867448 1 azure_disk_utils.go:159] reading cloud config from secret kube-system/azure-cloud-provider W0513 11:50:53.887128 1 azure_disk_utils.go:166] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0513 11:50:53.887156 1 azure_disk_utils.go:171] could not read cloud config from secret kube-system/azure-cloud-provider I0513 11:50:53.887165 1 azure_disk_utils.go:181] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0513 11:50:53.887199 1 azure_disk_utils.go:189] read cloud config from file: /etc/kubernetes/azure.json successfully I0513 11:50:53.888875 1 azure_auth.go:245] Using AzurePublicCloud environment I0513 11:50:53.888910 1 azure_auth.go:96] azure: using managed identity extension to retrieve access token I0513 11:50:53.888920 1 azure_auth.go:102] azure: using User Assigned MSI ID to retrieve access token I0513 11:50:53.888973 1 azure_auth.go:113] azure: User Assigned MSI ID is client ID. Resource ID parsing error: %+vparsing failed for acb72a6f-de77-4cc8-9b84-00401d3cb401. Invalid resource Id format I0513 11:50:53.889036 1 azure.go:763] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000 I0513 11:50:53.889117 1 azure_interfaceclient.go:70] Azure InterfacesClient (read ops) using rate limit config: QPS=6, bucket=20 I0513 11:50:53.889132 1 azure_interfaceclient.go:73] Azure InterfacesClient (write ops) using rate limit config: QPS=100, bucket=1000 I0513 11:50:53.889150 1 azure_vmsizeclient.go:68] Azure VirtualMachineSizesClient (read ops) using rate limit config: QPS=6, bucket=20 I0513 11:50:53.889159 1 azure_vmsizeclient.go:71] Azure VirtualMachineSizesClient (write ops) using rate limit config: QPS=100, bucket=1000 I0513 11:50:53.889185 1 azure_storageaccountclient.go:69] Azure StorageAccountClient (read ops) using rate limit config: QPS=6, bucket=20 ... skipping 2984 lines ... 
Platform: linux/amd64
Topology Key: topology.test.csi.azure.com/zone
Streaming logs below:
I0513 11:50:56.235971 1 azuredisk.go:171] driver userAgent: test.csi.azure.com/v1.19.0-9480cc27b0ee3e0de9a15e6967f197e793523987 gc/go1.18.1 (amd64-linux) e2e-test
I0513 11:50:56.236315 1 azure_disk_utils.go:159] reading cloud config from secret kube-system/azure-cloud-provider
W0513 11:50:56.255451 1 azure_disk_utils.go:166] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found
I0513 11:50:56.255471 1 azure_disk_utils.go:171] could not read cloud config from secret kube-system/azure-cloud-provider
I0513 11:50:56.255478 1 azure_disk_utils.go:181] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json
I0513 11:50:56.255498 1 azure_disk_utils.go:189] read cloud config from file: /etc/kubernetes/azure.json successfully
I0513 11:50:56.257015 1 azure_auth.go:245] Using AzurePublicCloud environment
I0513 11:50:56.257045 1 azure_auth.go:96] azure: using managed identity extension to retrieve access token
I0513 11:50:56.257052 1 azure_auth.go:102] azure: using User Assigned MSI ID to retrieve access token
I0513 11:50:56.257085 1 azure_auth.go:113] azure: User Assigned MSI ID is client ID. Resource ID parsing error: %+vparsing failed for acb72a6f-de77-4cc8-9b84-00401d3cb401. Invalid resource Id format
I0513 11:50:56.257119 1 azure.go:763] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000
I0513 11:50:56.257151 1 azure_interfaceclient.go:70] Azure InterfacesClient (read ops) using rate limit config: QPS=6, bucket=20
I0513 11:50:56.257163 1 azure_interfaceclient.go:73] Azure InterfacesClient (write ops) using rate limit config: QPS=100, bucket=1000
I0513 11:50:56.257174 1 azure_vmsizeclient.go:68] Azure VirtualMachineSizesClient (read ops) using rate limit config: QPS=6, bucket=20
I0513 11:50:56.257179 1 azure_vmsizeclient.go:71] Azure VirtualMachineSizesClient (write ops) using rate limit config: QPS=100, bucket=1000
I0513 11:50:56.257191 1 azure_storageaccountclient.go:69] Azure StorageAccountClient (read ops) using rate limit config: QPS=6, bucket=20
... skipping 2425 lines ...
I0513 12:46:05.973510 1 mount_linux.go:183] Mounting cmd (mount) with arguments ( -o bind,remount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e38fd4f7-ae6b-4f0b-b2c1-e33b984a7c3b/globalmount /var/lib/kubelet/pods/04eb0633-17df-4bcd-80dd-80f9900419a4/volumes/kubernetes.io~csi/pvc-e38fd4f7-ae6b-4f0b-b2c1-e33b984a7c3b/mount)
I0513 12:46:05.974873 1 nodeserver.go:286] NodePublishVolume: mount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e38fd4f7-ae6b-4f0b-b2c1-e33b984a7c3b/globalmount at /var/lib/kubelet/pods/04eb0633-17df-4bcd-80dd-80f9900419a4/volumes/kubernetes.io~csi/pvc-e38fd4f7-ae6b-4f0b-b2c1-e33b984a7c3b/mount successfully
I0513 12:46:05.974892 1 utils.go:84] GRPC response: {}
print out csi-test-node-win logs ...
======================================================================================
No resources found in kube-system namespace.
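Note: the NodePublishVolume entries at the end of this block show the usual two-step bind-mount pattern: the staged global mount is bind-mounted into the pod's volume path, then remounted with "bind,remount" to apply options. A minimal sketch of that pattern is below; the paths are placeholders and the real driver goes through k8s.io/mount-utils rather than a hand-rolled helper like this.

```go
package main

import (
	"fmt"
	"os/exec"
)

// bindPublish mirrors the two mount invocations seen in the log:
// first "-o bind" to attach the staged global mount at the pod path,
// then "-o bind,remount[,opts]" to apply mount options to the bind.
func bindPublish(globalMount, podPath, opts string) error {
	if out, err := exec.Command("mount", "-o", "bind", globalMount, podPath).CombinedOutput(); err != nil {
		return fmt.Errorf("bind mount failed: %v, output: %s", err, out)
	}
	remountOpts := "bind,remount"
	if opts != "" {
		remountOpts += "," + opts
	}
	if out, err := exec.Command("mount", "-o", remountOpts, globalMount, podPath).CombinedOutput(); err != nil {
		return fmt.Errorf("bind remount failed: %v, output: %s", err, out)
	}
	return nil
}

func main() {
	// Placeholder paths; the log uses the kubelet plugins/ and pods/ directories.
	if err := bindPublish("/var/lib/kubelet/plugins/example/globalmount", "/var/lib/kubelet/pods/example/mount", ""); err != nil {
		fmt.Println(err)
	}
}
```

The remount step exists because Linux ignores most mount options on the initial bind; they only take effect on the follow-up "bind,remount" call.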
make: *** [Makefile:260: e2e-test] Error 1
2022/05/13 12:46:18 process.go:155: Step 'make e2e-test' finished in 56m17.379329693s
2022/05/13 12:46:18 aksengine_helpers.go:426: downloading /root/tmp1431985631/log-dump.sh from https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump.sh
2022/05/13 12:46:18 util.go:71: curl https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump.sh
2022/05/13 12:46:18 process.go:153: Running: chmod +x /root/tmp1431985631/log-dump.sh
2022/05/13 12:46:18 process.go:155: Step 'chmod +x /root/tmp1431985631/log-dump.sh' finished in 1.078135ms
2022/05/13 12:46:18 aksengine_helpers.go:426: downloading /root/tmp1431985631/log-dump-daemonset.yaml from https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump-daemonset.yaml
... skipping 64 lines ...
ssh key file /root/.ssh/id_rsa does not exist. Exiting.
2022/05/13 12:47:18 process.go:155: Step 'bash -c /root/tmp1431985631/win-ci-logs-collector.sh kubetest-s2gs5bqg.westeurope.cloudapp.azure.com /root/tmp1431985631 /root/.ssh/id_rsa' finished in 3.336548ms
2022/05/13 12:47:18 aksengine.go:1141: Deleting resource group: kubetest-s2gs5bqg.
2022/05/13 12:55:30 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2022/05/13 12:55:30 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
2022/05/13 12:55:30 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 271.648183ms
2022/05/13 12:55:30 main.go:331: Something went wrong: encountered 1 errors: [error during make e2e-test: exit status 2]
+ EXIT_VALUE=1
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: dockerProgram process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...
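Note: the runner records the test results as JUnit XML at /logs/artifacts/junit_runner.xml before tearing down. A small sketch, assuming a flat <testsuite>/<testcase> layout (the struct shapes and that assumption are not part of the job itself), of pulling the failed case names out of that file:

```go
package main

import (
	"encoding/xml"
	"fmt"
	"os"
)

// Minimal JUnit shapes; only the fields needed to list failures.
type testSuite struct {
	Cases []testCase `xml:"testcase"`
}

type testCase struct {
	Name    string   `xml:"name,attr"`
	Failure *failure `xml:"failure"`
}

type failure struct {
	Message string `xml:",chardata"`
}

func main() {
	// Path taken from the process.go:96 entry above.
	data, err := os.ReadFile("/logs/artifacts/junit_runner.xml")
	if err != nil {
		fmt.Println("read junit:", err)
		return
	}
	var suite testSuite
	if err := xml.Unmarshal(data, &suite); err != nil {
		fmt.Println("parse junit:", err)
		return
	}
	for _, tc := range suite.Cases {
		if tc.Failure != nil {
			fmt.Printf("FAILED: %s\n", tc.Name)
		}
	}
}
```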