PR | hccheng72: [V2] fix: only count replica attachments w/t deletion timestamp from cache client |
Result | FAILURE |
Tests | 1 failed / 13 succeeded |
Started | |
Elapsed | 1h19m |
Revision | 23b9a8b4a23ed46f3c0622bc10b1ef0956cdbc9e |
Refs | 1691 |
job-version | v1.27.0-alpha.0.983+2ca95b4df908dc |
kubetest-version | v20230111-cd1b3caf9c |
revision | v1.27.0-alpha.0.983+2ca95b4df908dc |
error during make e2e-test: exit status 2 (from junit_runner.xml)
kubetest Check APIReachability
kubetest Deferred TearDown
kubetest DumpClusterLogs
kubetest GetDeployer
kubetest IsUp
kubetest Prepare
kubetest TearDown
kubetest TearDown Previous
kubetest Timeout
kubetest Up
kubetest kubectl version
kubetest list nodes
kubetest test setup
... skipping 244 lines ... 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 11345 100 11345 0 0 124k 0 --:--:-- --:--:-- --:--:-- 124k Downloading https://get.helm.sh/helm-v3.10.3-linux-amd64.tar.gz Verifying checksum... Done. Preparing to install helm into /usr/local/bin helm installed into /usr/local/bin/helm docker pull k8sprow.azurecr.io/azuredisk-csi:latest-v2-9ef068a8cb36a997d4ea04b90c05c6f92a488a19 || make container-all push-manifest Error response from daemon: manifest for k8sprow.azurecr.io/azuredisk-csi:latest-v2-9ef068a8cb36a997d4ea04b90c05c6f92a488a19 not found: manifest unknown: manifest tagged by "latest-v2-9ef068a8cb36a997d4ea04b90c05c6f92a488a19" is not found make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver' CGO_ENABLED=0 GOOS=windows go build -a -ldflags "-X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.driverVersion=latest-v2-9ef068a8cb36a997d4ea04b90c05c6f92a488a19 -X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.gitCommit=9ef068a8cb36a997d4ea04b90c05c6f92a488a19 -X sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.buildDate=2023-01-14T00:07:40Z -extldflags "-static"" -tags azurediskv2 -mod vendor -o _output/amd64/azurediskpluginv2.exe ./pkg/azurediskplugin docker buildx rm container-builder || true ERROR: no builder "container-builder" found docker buildx create --use --name=container-builder container-builder # enable qemu for arm64 build # https://github.com/docker/buildx/issues/464#issuecomment-741507760 docker run --privileged --rm tonistiigi/binfmt --uninstall qemu-aarch64 Unable to find image 'tonistiigi/binfmt:latest' locally ... skipping 659 lines ... } } ] } make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver' docker pull k8sprow.azurecr.io/azdiskschedulerextender-csi:latest-v2-9ef068a8cb36a997d4ea04b90c05c6f92a488a19 || make azdiskschedulerextender-all push-manifest-azdiskschedulerextender Error response from daemon: manifest for k8sprow.azurecr.io/azdiskschedulerextender-csi:latest-v2-9ef068a8cb36a997d4ea04b90c05c6f92a488a19 not found: manifest unknown: manifest tagged by "latest-v2-9ef068a8cb36a997d4ea04b90c05c6f92a488a19" is not found make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver' docker buildx rm container-builder || true container-builder removed docker buildx create --use --name=container-builder container-builder # enable qemu for arm64 build ... skipping 1007 lines ... type: string type: object oneOf: - required: ["persistentVolumeClaimName"] - required: ["volumeSnapshotContentName"] volumeSnapshotClassName: description: 'VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default Volume SnapshotClasses: one default per CSI Driver. If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exist for a given CSI Driver and more than one have been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field.' type: string required: - source type: object status: description: status represents the current information of a snapshot. 
Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object. ... skipping 2 lines ... description: 'boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.' type: string creationTime: description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown. format: date-time type: string error: description: error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers(i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurs during the snapshot creation. Upon success, this error field will be cleared. properties: message: description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.' type: string time: description: time is the timestamp when the error was encountered. format: date-time type: string type: object readyToUse: description: readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. type: boolean restoreSize: type: string description: restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. 
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ x-kubernetes-int-or-string: true type: object required: - spec type: object ... skipping 60 lines ... type: string volumeSnapshotContentName: description: volumeSnapshotContentName specifies the name of a pre-existing VolumeSnapshotContent object representing an existing volume snapshot. This field should be set if the snapshot already exists and only needs a representation in Kubernetes. This field is immutable. type: string type: object volumeSnapshotClassName: description: 'VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default Volume SnapshotClasses: one default per CSI Driver. If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exist for a given CSI Driver and more than one have been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field.' type: string required: - source type: object status: description: status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object. ... skipping 2 lines ... description: 'boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.' type: string creationTime: description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown. format: date-time type: string error: description: error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers(i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurs during the snapshot creation. Upon success, this error field will be cleared. properties: message: description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.' type: string time: description: time is the timestamp when the error was encountered. 
format: date-time type: string type: object readyToUse: description: readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. type: boolean restoreSize: type: string description: restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ x-kubernetes-int-or-string: true type: object required: - spec type: object ... skipping 254 lines ... description: status represents the current information of a snapshot. properties: creationTime: description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. On Unix, the command `date +%s%N` returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC. format: int64 type: integer error: description: error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared. properties: message: description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.' type: string time: description: time is the timestamp when the error was encountered. format: date-time type: string type: object readyToUse: description: readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. type: boolean restoreSize: description: restoreSize represents the complete size of the snapshot in bytes. 
In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. format: int64 minimum: 0 type: integer snapshotHandle: description: snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress. type: string type: object required: - spec type: object served: true ... skipping 108 lines ... description: status represents the current information of a snapshot. properties: creationTime: description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. On Unix, the command `date +%s%N` returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC. format: int64 type: integer error: description: error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared. properties: message: description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.' type: string time: description: time is the timestamp when the error was encountered. format: date-time type: string type: object readyToUse: description: readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. type: boolean restoreSize: description: restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. 
format: int64 minimum: 0 type: integer snapshotHandle: description: snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress. type: string type: object required: - spec type: object served: true ... skipping 359 lines ... - volumeName - volume_context - volume_id type: object status: description: status represents the current state of AzVolumeAttachment. includes error, state, and attachment status Required properties: detail: description: Status summarizes the current attachment state of the volume attachment Nil Status indicates that the volume has not yet been attached to the node properties: ... skipping 7 lines ... role: description: The current attachment role. type: string required: - role type: object error: description: Error occurred during attach/detach of volume properties: code: type: string message: type: string parameters: ... skipping 90 lines ... - volumeName - volume_context - volume_id type: object status: description: status represents the current state of AzVolumeAttachment. includes error, state, and attachment status properties: annotation: additionalProperties: type: string description: Annotations contains additional resource information to guide driver actions ... skipping 13 lines ... role: description: The current attachment role. type: string required: - role type: object error: description: Error occurred during attach/detach of volume properties: code: type: string message: type: string parameters: ... skipping 169 lines ... - maxMountReplicaCount - volumeCapability - volumeName type: object status: description: status represents the current state of AzVolume. includes error, state, and volume status properties: detail: description: Current status detail of the AzVolume Nil detail indicates that the volume has not been created properties: accessible_topology: ... skipping 28 lines ... type: string required: - capacity_bytes - node_expansion_required - volume_id type: object error: description: Error occurred during creation/deletion of volume properties: code: type: string message: type: string parameters: ... skipping 154 lines ... - maxMountReplicaCount - volumeCapability - volumeName type: object status: description: status represents the current state of AzVolume. includes error, state, and volume status properties: annotation: additionalProperties: type: string description: Annotations contains additional resource information to guide driver actions ... skipping 34 lines ... type: string required: - capacity_bytes - node_expansion_required - volume_id type: object error: description: Error occurred during creation/deletion of volume properties: code: type: string message: type: string parameters: ... skipping 1069 lines ... image: "mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0" args: - "-csi-address=$(ADDRESS)" - "-v=2" - "-leader-election" - "--leader-election-namespace=kube-system" - '-handle-volume-inuse-error=false' - '-feature-gates=RecoverVolumeExpansionFailure=true' - "-timeout=240s" env: - name: ADDRESS value: /csi/csi.sock volumeMounts: ... skipping 396 lines ... 
[36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Inline-volume (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:280[0m [36mDriver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping[0m test/e2e/storage/external/external.go:262 [90m------------------------------[0m ... skipping 297 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] volumeMode [90mtest/e2e/storage/framework/testsuite.go:50[0m should not mount / map unused volumes in a pod [LinuxOnly] [90mtest/e2e/storage/testsuites/volumemode.go:354[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":28,"completed":1,"skipped":130,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes test/e2e/storage/framework/testsuite.go:51 Jan 14 00:22:21.638: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 62 lines ... Jan 14 00:21:21.620: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comsn6h2] to have phase Bound Jan 14 00:21:21.727: INFO: PersistentVolumeClaim test.csi.azure.comsn6h2 found but phase is Pending instead of Bound. Jan 14 00:21:23.837: INFO: PersistentVolumeClaim test.csi.azure.comsn6h2 found but phase is Pending instead of Bound. Jan 14 00:21:25.946: INFO: PersistentVolumeClaim test.csi.azure.comsn6h2 found and phase=Bound (4.325772319s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-s9r5 [1mSTEP[0m: Creating a pod to test subpath Jan 14 00:21:26.271: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-s9r5" in namespace "provisioning-6130" to be "Succeeded or Failed" Jan 14 00:21:26.379: INFO: Pod "pod-subpath-test-dynamicpv-s9r5": Phase="Pending", Reason="", readiness=false. Elapsed: 106.99074ms Jan 14 00:21:28.487: INFO: Pod "pod-subpath-test-dynamicpv-s9r5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215329249s Jan 14 00:21:30.602: INFO: Pod "pod-subpath-test-dynamicpv-s9r5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330208137s Jan 14 00:21:32.710: INFO: Pod "pod-subpath-test-dynamicpv-s9r5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438904098s Jan 14 00:21:34.819: INFO: Pod "pod-subpath-test-dynamicpv-s9r5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.547973118s Jan 14 00:21:36.928: INFO: Pod "pod-subpath-test-dynamicpv-s9r5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.656657412s ... skipping 2 lines ... Jan 14 00:21:43.255: INFO: Pod "pod-subpath-test-dynamicpv-s9r5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.983365189s Jan 14 00:21:45.363: INFO: Pod "pod-subpath-test-dynamicpv-s9r5": Phase="Pending", Reason="", readiness=false. Elapsed: 19.091753086s Jan 14 00:21:47.472: INFO: Pod "pod-subpath-test-dynamicpv-s9r5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 21.200288932s Jan 14 00:21:49.580: INFO: Pod "pod-subpath-test-dynamicpv-s9r5": Phase="Pending", Reason="", readiness=false. Elapsed: 23.308824116s Jan 14 00:21:51.688: INFO: Pod "pod-subpath-test-dynamicpv-s9r5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.41606542s [1mSTEP[0m: Saw pod success Jan 14 00:21:51.688: INFO: Pod "pod-subpath-test-dynamicpv-s9r5" satisfied condition "Succeeded or Failed" Jan 14 00:21:51.794: INFO: Trying to get logs from node k8s-agentpool1-35908214-vmss000001 pod pod-subpath-test-dynamicpv-s9r5 container test-container-subpath-dynamicpv-s9r5: <nil> [1mSTEP[0m: delete the pod Jan 14 00:21:52.017: INFO: Waiting for pod pod-subpath-test-dynamicpv-s9r5 to disappear Jan 14 00:21:52.124: INFO: Pod pod-subpath-test-dynamicpv-s9r5 no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-s9r5 Jan 14 00:21:52.124: INFO: Deleting pod "pod-subpath-test-dynamicpv-s9r5" in namespace "provisioning-6130" ... skipping 23 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should support existing single file [LinuxOnly] [90mtest/e2e/storage/testsuites/subpath.go:221[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":35,"completed":1,"skipped":33,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes test/e2e/storage/framework/testsuite.go:51 Jan 14 00:22:34.047: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 124 lines ... Jan 14 00:21:21.141: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 14 00:21:21.250: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.combmcgq] to have phase Bound Jan 14 00:21:21.357: INFO: PersistentVolumeClaim test.csi.azure.combmcgq found but phase is Pending instead of Bound. Jan 14 00:21:23.465: INFO: PersistentVolumeClaim test.csi.azure.combmcgq found but phase is Pending instead of Bound. Jan 14 00:21:25.573: INFO: PersistentVolumeClaim test.csi.azure.combmcgq found and phase=Bound (4.323050296s) [1mSTEP[0m: Creating pod to format volume volume-prep-provisioning-6443 Jan 14 00:21:25.896: INFO: Waiting up to 5m0s for pod "volume-prep-provisioning-6443" in namespace "provisioning-6443" to be "Succeeded or Failed" Jan 14 00:21:26.003: INFO: Pod "volume-prep-provisioning-6443": Phase="Pending", Reason="", readiness=false. Elapsed: 107.222252ms Jan 14 00:21:28.111: INFO: Pod "volume-prep-provisioning-6443": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215049819s Jan 14 00:21:30.219: INFO: Pod "volume-prep-provisioning-6443": Phase="Pending", Reason="", readiness=false. Elapsed: 4.322662341s Jan 14 00:21:32.328: INFO: Pod "volume-prep-provisioning-6443": Phase="Pending", Reason="", readiness=false. Elapsed: 6.431263305s Jan 14 00:21:34.435: INFO: Pod "volume-prep-provisioning-6443": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.538828358s Jan 14 00:21:36.544: INFO: Pod "volume-prep-provisioning-6443": Phase="Pending", Reason="", readiness=false. Elapsed: 10.647991678s Jan 14 00:21:38.654: INFO: Pod "volume-prep-provisioning-6443": Phase="Pending", Reason="", readiness=false. Elapsed: 12.757426304s Jan 14 00:21:40.762: INFO: Pod "volume-prep-provisioning-6443": Phase="Pending", Reason="", readiness=false. Elapsed: 14.865862161s Jan 14 00:21:42.871: INFO: Pod "volume-prep-provisioning-6443": Phase="Pending", Reason="", readiness=false. Elapsed: 16.974942277s Jan 14 00:21:44.981: INFO: Pod "volume-prep-provisioning-6443": Phase="Pending", Reason="", readiness=false. Elapsed: 19.085080869s Jan 14 00:21:47.090: INFO: Pod "volume-prep-provisioning-6443": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.193646971s [1mSTEP[0m: Saw pod success Jan 14 00:21:47.090: INFO: Pod "volume-prep-provisioning-6443" satisfied condition "Succeeded or Failed" Jan 14 00:21:47.090: INFO: Deleting pod "volume-prep-provisioning-6443" in namespace "provisioning-6443" Jan 14 00:21:47.200: INFO: Wait up to 5m0s for pod "volume-prep-provisioning-6443" to be fully deleted [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-hkt9 [1mSTEP[0m: Checking for subpath error in container status Jan 14 00:21:57.637: INFO: Deleting pod "pod-subpath-test-dynamicpv-hkt9" in namespace "provisioning-6443" Jan 14 00:21:57.747: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-hkt9" to be fully deleted [1mSTEP[0m: Deleting pod Jan 14 00:21:59.965: INFO: Deleting pod "pod-subpath-test-dynamicpv-hkt9" in namespace "provisioning-6443" [1mSTEP[0m: Deleting pvc Jan 14 00:22:00.073: INFO: Deleting PersistentVolumeClaim "test.csi.azure.combmcgq" ... skipping 19 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should verify container cannot write to subpath readonly volumes [Slow] [90mtest/e2e/storage/testsuites/subpath.go:425[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]","total":34,"completed":1,"skipped":12,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 Jan 14 00:22:41.716: INFO: Distro debian doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] test/e2e/framework/framework.go:188 ... skipping 35 lines ... Jan 14 00:21:22.926: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comxf7xp] to have phase Bound Jan 14 00:21:23.033: INFO: PersistentVolumeClaim test.csi.azure.comxf7xp found but phase is Pending instead of Bound. Jan 14 00:21:25.142: INFO: PersistentVolumeClaim test.csi.azure.comxf7xp found but phase is Pending instead of Bound. Jan 14 00:21:27.250: INFO: PersistentVolumeClaim test.csi.azure.comxf7xp found and phase=Bound (4.323869041s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-f7cm [1mSTEP[0m: Creating a pod to test subpath Jan 14 00:21:27.573: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-f7cm" in namespace "provisioning-9322" to be "Succeeded or Failed" Jan 14 00:21:27.681: INFO: Pod "pod-subpath-test-dynamicpv-f7cm": Phase="Pending", Reason="", readiness=false. 
Elapsed: 107.265955ms Jan 14 00:21:29.789: INFO: Pod "pod-subpath-test-dynamicpv-f7cm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2160553s Jan 14 00:21:31.899: INFO: Pod "pod-subpath-test-dynamicpv-f7cm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32581966s Jan 14 00:21:34.007: INFO: Pod "pod-subpath-test-dynamicpv-f7cm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.434239401s Jan 14 00:21:36.115: INFO: Pod "pod-subpath-test-dynamicpv-f7cm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.542198495s Jan 14 00:21:38.225: INFO: Pod "pod-subpath-test-dynamicpv-f7cm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.651843399s ... skipping 8 lines ... Jan 14 00:21:57.213: INFO: Pod "pod-subpath-test-dynamicpv-f7cm": Phase="Pending", Reason="", readiness=false. Elapsed: 29.639612787s Jan 14 00:21:59.323: INFO: Pod "pod-subpath-test-dynamicpv-f7cm": Phase="Pending", Reason="", readiness=false. Elapsed: 31.749311812s Jan 14 00:22:01.430: INFO: Pod "pod-subpath-test-dynamicpv-f7cm": Phase="Pending", Reason="", readiness=false. Elapsed: 33.857046639s Jan 14 00:22:03.539: INFO: Pod "pod-subpath-test-dynamicpv-f7cm": Phase="Pending", Reason="", readiness=false. Elapsed: 35.965362696s Jan 14 00:22:05.647: INFO: Pod "pod-subpath-test-dynamicpv-f7cm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.073483482s [1mSTEP[0m: Saw pod success Jan 14 00:22:05.647: INFO: Pod "pod-subpath-test-dynamicpv-f7cm" satisfied condition "Succeeded or Failed" Jan 14 00:22:05.754: INFO: Trying to get logs from node k8s-agentpool1-35908214-vmss000000 pod pod-subpath-test-dynamicpv-f7cm container test-container-subpath-dynamicpv-f7cm: <nil> [1mSTEP[0m: delete the pod Jan 14 00:22:06.000: INFO: Waiting for pod pod-subpath-test-dynamicpv-f7cm to disappear Jan 14 00:22:06.108: INFO: Pod pod-subpath-test-dynamicpv-f7cm no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-f7cm Jan 14 00:22:06.108: INFO: Deleting pod "pod-subpath-test-dynamicpv-f7cm" in namespace "provisioning-9322" ... skipping 29 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should support readOnly directory specified in the volumeMount [90mtest/e2e/storage/testsuites/subpath.go:367[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":34,"completed":1,"skipped":143,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 Jan 14 00:23:18.647: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 24 lines ... 
[36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:258[0m [36mDriver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping[0m test/e2e/storage/external/external.go:262 [90m------------------------------[0m ... skipping 81 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral [90mtest/e2e/storage/framework/testsuite.go:50[0m should create read-only inline ephemeral volume [90mtest/e2e/storage/testsuites/ephemeral.go:175[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume","total":31,"completed":1,"skipped":89,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 Jan 14 00:23:32.438: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 111 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (filesystem volmode)] volumeMode [90mtest/e2e/storage/framework/testsuite.go:50[0m should not mount / map unused volumes in a pod [LinuxOnly] [90mtest/e2e/storage/testsuites/volumemode.go:354[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":28,"completed":2,"skipped":267,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (ext4)] volumes[0m [1mshould store data[0m [37mtest/e2e/storage/testsuites/volumes.go:161[0m ... skipping 111 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext4)] volumes [90mtest/e2e/storage/framework/testsuite.go:50[0m should store data [90mtest/e2e/storage/testsuites/volumes.go:161[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] volumes should store data","total":33,"completed":1,"skipped":157,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] ... skipping 276 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with different volume mode and retain data across pod recreation on different node [90mtest/e2e/storage/testsuites/multivolume.go:248[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node","total":35,"completed":1,"skipped":78,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology ... skipping 220 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy [90mtest/e2e/storage/framework/testsuite.go:50[0m (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents [90mtest/e2e/storage/testsuites/fsgroupchangepolicy.go:216[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents","total":35,"completed":2,"skipped":213,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jan 14 00:25:45.082: INFO: >>> kubeConfig: /root/tmp3639031375/kubeconfig/kubeconfig.westeurope.json ... skipping 194 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral [90mtest/e2e/storage/framework/testsuite.go:50[0m should create read/write inline ephemeral volume [90mtest/e2e/storage/testsuites/ephemeral.go:196[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume","total":34,"completed":2,"skipped":76,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 Jan 14 00:26:08.918: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping ... skipping 133 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (xfs)][Slow] volumes [90mtest/e2e/storage/framework/testsuite.go:50[0m should store data [90mtest/e2e/storage/testsuites/volumes.go:161[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data","total":28,"completed":3,"skipped":272,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource][0m [0mvolume snapshot controller[0m [90m[0m [1mshould check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)[0m [37mtest/e2e/storage/testsuites/snapshottable.go:278[0m ... skipping 17 lines ... Jan 14 00:23:33.596: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comrtc5h] to have phase Bound Jan 14 00:23:33.703: INFO: PersistentVolumeClaim test.csi.azure.comrtc5h found but phase is Pending instead of Bound. Jan 14 00:23:35.812: INFO: PersistentVolumeClaim test.csi.azure.comrtc5h found but phase is Pending instead of Bound. Jan 14 00:23:37.920: INFO: PersistentVolumeClaim test.csi.azure.comrtc5h found and phase=Bound (4.324145698s) [1mSTEP[0m: [init] starting a pod to use the claim [1mSTEP[0m: [init] check pod success Jan 14 00:23:38.351: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-mdx5k" in namespace "snapshotting-8837" to be "Succeeded or Failed" Jan 14 00:23:38.458: INFO: Pod "pvc-snapshottable-tester-mdx5k": Phase="Pending", Reason="", readiness=false. Elapsed: 106.987081ms Jan 14 00:23:40.565: INFO: Pod "pvc-snapshottable-tester-mdx5k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214103703s Jan 14 00:23:42.674: INFO: Pod "pvc-snapshottable-tester-mdx5k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.322960042s Jan 14 00:23:44.781: INFO: Pod "pvc-snapshottable-tester-mdx5k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.430810534s Jan 14 00:23:46.890: INFO: Pod "pvc-snapshottable-tester-mdx5k": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.539426s Jan 14 00:23:48.998: INFO: Pod "pvc-snapshottable-tester-mdx5k": Phase="Pending", Reason="", readiness=false. Elapsed: 10.647535593s ... skipping 8 lines ... Jan 14 00:24:07.981: INFO: Pod "pvc-snapshottable-tester-mdx5k": Phase="Pending", Reason="", readiness=false. Elapsed: 29.630862768s Jan 14 00:24:10.090: INFO: Pod "pvc-snapshottable-tester-mdx5k": Phase="Pending", Reason="", readiness=false. Elapsed: 31.739332155s Jan 14 00:24:12.197: INFO: Pod "pvc-snapshottable-tester-mdx5k": Phase="Pending", Reason="", readiness=false. Elapsed: 33.846751271s Jan 14 00:24:14.308: INFO: Pod "pvc-snapshottable-tester-mdx5k": Phase="Pending", Reason="", readiness=false. Elapsed: 35.957226439s Jan 14 00:24:16.417: INFO: Pod "pvc-snapshottable-tester-mdx5k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.066330401s [1mSTEP[0m: Saw pod success Jan 14 00:24:16.417: INFO: Pod "pvc-snapshottable-tester-mdx5k" satisfied condition "Succeeded or Failed" [1mSTEP[0m: [init] checking the claim Jan 14 00:24:16.524: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comrtc5h] to have phase Bound Jan 14 00:24:16.632: INFO: PersistentVolumeClaim test.csi.azure.comrtc5h found and phase=Bound (107.691885ms) [1mSTEP[0m: [init] checking the PV [1mSTEP[0m: [init] deleting the pod Jan 14 00:24:16.977: INFO: Pod pvc-snapshottable-tester-mdx5k has the following logs: ... skipping 16 lines ... Jan 14 00:24:31.027: INFO: received snapshotStatus map[boundVolumeSnapshotContentName:snapcontent-1fc30168-62bc-4a65-8bd8-0b9bc1721e68 creationTime:2023-01-14T00:24:26Z readyToUse:true restoreSize:5Gi] Jan 14 00:24:31.027: INFO: snapshotContentName snapcontent-1fc30168-62bc-4a65-8bd8-0b9bc1721e68 [1mSTEP[0m: checking the snapshot [1mSTEP[0m: checking the SnapshotContent [1mSTEP[0m: Modifying source data test [1mSTEP[0m: modifying the data in the source PVC Jan 14 00:24:31.470: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-n56wt" in namespace "snapshotting-8837" to be "Succeeded or Failed" Jan 14 00:24:31.577: INFO: Pod "pvc-snapshottable-data-tester-n56wt": Phase="Pending", Reason="", readiness=false. Elapsed: 106.890739ms Jan 14 00:24:33.687: INFO: Pod "pvc-snapshottable-data-tester-n56wt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216370382s Jan 14 00:24:35.795: INFO: Pod "pvc-snapshottable-data-tester-n56wt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32425504s Jan 14 00:24:37.903: INFO: Pod "pvc-snapshottable-data-tester-n56wt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.432534712s Jan 14 00:24:40.011: INFO: Pod "pvc-snapshottable-data-tester-n56wt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.540431s Jan 14 00:24:42.118: INFO: Pod "pvc-snapshottable-data-tester-n56wt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.647883512s ... skipping 17 lines ... Jan 14 00:25:20.078: INFO: Pod "pvc-snapshottable-data-tester-n56wt": Phase="Pending", Reason="", readiness=false. Elapsed: 48.607255562s Jan 14 00:25:22.188: INFO: Pod "pvc-snapshottable-data-tester-n56wt": Phase="Pending", Reason="", readiness=false. Elapsed: 50.717855647s Jan 14 00:25:24.297: INFO: Pod "pvc-snapshottable-data-tester-n56wt": Phase="Pending", Reason="", readiness=false. Elapsed: 52.826365627s Jan 14 00:25:26.406: INFO: Pod "pvc-snapshottable-data-tester-n56wt": Phase="Pending", Reason="", readiness=false. 
Elapsed: 54.935188311s Jan 14 00:25:28.514: INFO: Pod "pvc-snapshottable-data-tester-n56wt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 57.043151088s [1mSTEP[0m: Saw pod success Jan 14 00:25:28.514: INFO: Pod "pvc-snapshottable-data-tester-n56wt" satisfied condition "Succeeded or Failed" Jan 14 00:25:28.733: INFO: Pod pvc-snapshottable-data-tester-n56wt has the following logs: Jan 14 00:25:28.733: INFO: Deleting pod "pvc-snapshottable-data-tester-n56wt" in namespace "snapshotting-8837" Jan 14 00:25:28.845: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-n56wt" to be fully deleted [1mSTEP[0m: creating a pvc from the snapshot [1mSTEP[0m: starting a pod to use the snapshot Jan 14 00:25:49.391: INFO: Running '/usr/local/bin/kubectl --server=https://kubetest-rpwnaldb.westeurope.cloudapp.azure.com --kubeconfig=/root/tmp3639031375/kubeconfig/kubeconfig.westeurope.json --namespace=snapshotting-8837 exec restored-pvc-tester-kdm7l --namespace=snapshotting-8837 -- cat /mnt/test/data' ... skipping 47 lines ... [90mtest/e2e/storage/testsuites/snapshottable.go:113[0m [90mtest/e2e/storage/testsuites/snapshottable.go:176[0m should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent) [90mtest/e2e/storage/testsuites/snapshottable.go:278[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)","total":31,"completed":2,"skipped":221,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow][0m [1mshould concurrently access the single volume from pods on the same node[0m [37mtest/e2e/storage/testsuites/multivolume.go:298[0m ... skipping 148 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the single volume from pods on the same node [90mtest/e2e/storage/testsuites/multivolume.go:298[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node","total":35,"completed":2,"skipped":280,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] ... skipping 122 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should support restarting containers using file as subpath [Slow][LinuxOnly] [90mtest/e2e/storage/testsuites/subpath.go:333[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]","total":33,"completed":2,"skipped":295,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 Jan 14 00:26:54.137: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 97 lines ... test/e2e/storage/external/external.go:262 [90m------------------------------[0m [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (block volmode)] volumeMode[0m [1mshould fail to use a volume in a pod with mismatched mode [Slow][0m [37mtest/e2e/storage/testsuites/volumemode.go:299[0m [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jan 14 00:26:08.965: INFO: >>> kubeConfig: /root/tmp3639031375/kubeconfig/kubeconfig.westeurope.json [1mSTEP[0m: Building a namespace api object, basename volumemode [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail to use a volume in a pod with mismatched mode [Slow] test/e2e/storage/testsuites/volumemode.go:299 Jan 14 00:26:09.720: INFO: Creating resource for dynamic PV Jan 14 00:26:09.720: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(test.csi.azure.com) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass volumemode-4974-e2e-sczrz66 [1mSTEP[0m: creating a claim Jan 14 00:26:09.939: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comdlbpl] to have phase Bound Jan 14 00:26:10.047: INFO: PersistentVolumeClaim test.csi.azure.comdlbpl found but phase is Pending instead of Bound. Jan 14 00:26:12.154: INFO: PersistentVolumeClaim test.csi.azure.comdlbpl found but phase is Pending instead of Bound. 
Jan 14 00:26:14.262: INFO: PersistentVolumeClaim test.csi.azure.comdlbpl found and phase=Bound (4.322915755s) [1mSTEP[0m: Creating pod [1mSTEP[0m: Waiting for the pod to fail Jan 14 00:26:16.922: INFO: Deleting pod "pod-4d235ad9-6c30-4456-abe7-8461b2bdbb62" in namespace "volumemode-4974" Jan 14 00:26:17.031: INFO: Wait up to 5m0s for pod "pod-4d235ad9-6c30-4456-abe7-8461b2bdbb62" to be fully deleted [1mSTEP[0m: Deleting pvc Jan 14 00:26:19.248: INFO: Deleting PersistentVolumeClaim "test.csi.azure.comdlbpl" Jan 14 00:26:19.357: INFO: Waiting up to 5m0s for PersistentVolume pvc-b1c1d421-c841-4bdc-80b7-6f2d2e50f922 to get deleted Jan 14 00:26:19.465: INFO: PersistentVolume pvc-b1c1d421-c841-4bdc-80b7-6f2d2e50f922 found and phase=Released (107.476311ms) ... skipping 20 lines ... [32m• [SLOW TEST:82.571 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] volumeMode [90mtest/e2e/storage/framework/testsuite.go:50[0m should fail to use a volume in a pod with mismatched mode [Slow] [90mtest/e2e/storage/testsuites/volumemode.go:299[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]","total":34,"completed":3,"skipped":148,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 Jan 14 00:27:31.551: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping ... skipping 3 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Inline-volume (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:269[0m [36mDriver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping[0m test/e2e/storage/external/external.go:262 [90m------------------------------[0m ... skipping 29 lines ... 
test/e2e/storage/testsuites/provisioning.go:189 [90m------------------------------[0m [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] subPath[0m [1mshould fail if non-existent subpath is outside the volume [Slow][LinuxOnly][0m [37mtest/e2e/storage/testsuites/subpath.go:269[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jan 14 00:26:35.549: INFO: >>> kubeConfig: /root/tmp3639031375/kubeconfig/kubeconfig.westeurope.json [1mSTEP[0m: Building a namespace api object, basename provisioning [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] test/e2e/storage/testsuites/subpath.go:269 Jan 14 00:26:36.299: INFO: Creating resource for dynamic PV Jan 14 00:26:36.299: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(test.csi.azure.com) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass provisioning-6002-e2e-scgnlzr [1mSTEP[0m: creating a claim Jan 14 00:26:36.407: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 14 00:26:36.521: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comff2js] to have phase Bound Jan 14 00:26:36.628: INFO: PersistentVolumeClaim test.csi.azure.comff2js found but phase is Pending instead of Bound. Jan 14 00:26:38.736: INFO: PersistentVolumeClaim test.csi.azure.comff2js found but phase is Pending instead of Bound. Jan 14 00:26:40.844: INFO: PersistentVolumeClaim test.csi.azure.comff2js found and phase=Bound (4.323651384s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-v969 [1mSTEP[0m: Checking for subpath error in container status Jan 14 00:27:03.395: INFO: Deleting pod "pod-subpath-test-dynamicpv-v969" in namespace "provisioning-6002" Jan 14 00:27:03.505: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-v969" to be fully deleted [1mSTEP[0m: Deleting pod Jan 14 00:27:05.720: INFO: Deleting pod "pod-subpath-test-dynamicpv-v969" in namespace "provisioning-6002" [1mSTEP[0m: Deleting pvc Jan 14 00:27:05.827: INFO: Deleting PersistentVolumeClaim "test.csi.azure.comff2js" ... skipping 16 lines ... 
[32m• [SLOW TEST:71.894 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [90mtest/e2e/storage/testsuites/subpath.go:269[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]","total":31,"completed":3,"skipped":223,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (ext4)] multiVolume [Slow][0m [1mshould access to two volumes with different volume mode and retain data across pod recreation on different node[0m [37mtest/e2e/storage/testsuites/multivolume.go:248[0m ... skipping 189 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with different volume mode and retain data across pod recreation on different node [90mtest/e2e/storage/testsuites/multivolume.go:248[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node","total":35,"completed":3,"skipped":385,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 Jan 14 00:28:55.598: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 141 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the single read-only volume from pods on the same node [90mtest/e2e/storage/testsuites/multivolume.go:423[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node","total":28,"completed":4,"skipped":276,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 Jan 14 00:29:36.817: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 111 lines ... 
test/e2e/framework/framework.go:188 Jan 14 00:29:39.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "volumelimits-6231" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits","total":28,"completed":5,"skipped":437,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] volumes[0m [1mshould allow exec of files on the volume[0m [37mtest/e2e/storage/testsuites/volumes.go:198[0m ... skipping 17 lines ... Jan 14 00:28:56.667: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.com6cn8c] to have phase Bound Jan 14 00:28:56.774: INFO: PersistentVolumeClaim test.csi.azure.com6cn8c found but phase is Pending instead of Bound. Jan 14 00:28:58.882: INFO: PersistentVolumeClaim test.csi.azure.com6cn8c found but phase is Pending instead of Bound. Jan 14 00:29:00.990: INFO: PersistentVolumeClaim test.csi.azure.com6cn8c found and phase=Bound (4.322814043s) [1mSTEP[0m: Creating pod exec-volume-test-dynamicpv-mlzs [1mSTEP[0m: Creating a pod to test exec-volume-test Jan 14 00:29:01.314: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-mlzs" in namespace "volume-6656" to be "Succeeded or Failed" Jan 14 00:29:01.420: INFO: Pod "exec-volume-test-dynamicpv-mlzs": Phase="Pending", Reason="", readiness=false. Elapsed: 106.465953ms Jan 14 00:29:03.528: INFO: Pod "exec-volume-test-dynamicpv-mlzs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214475771s Jan 14 00:29:05.638: INFO: Pod "exec-volume-test-dynamicpv-mlzs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323918849s Jan 14 00:29:07.745: INFO: Pod "exec-volume-test-dynamicpv-mlzs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.431857301s Jan 14 00:29:09.853: INFO: Pod "exec-volume-test-dynamicpv-mlzs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.538915199s Jan 14 00:29:11.960: INFO: Pod "exec-volume-test-dynamicpv-mlzs": Phase="Pending", Reason="", readiness=false. Elapsed: 10.646225725s Jan 14 00:29:14.068: INFO: Pod "exec-volume-test-dynamicpv-mlzs": Phase="Pending", Reason="", readiness=false. Elapsed: 12.754593345s Jan 14 00:29:16.175: INFO: Pod "exec-volume-test-dynamicpv-mlzs": Phase="Pending", Reason="", readiness=false. Elapsed: 14.861506599s Jan 14 00:29:18.282: INFO: Pod "exec-volume-test-dynamicpv-mlzs": Phase="Pending", Reason="", readiness=false. Elapsed: 16.968629622s Jan 14 00:29:20.390: INFO: Pod "exec-volume-test-dynamicpv-mlzs": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 19.076730173s [1mSTEP[0m: Saw pod success Jan 14 00:29:20.390: INFO: Pod "exec-volume-test-dynamicpv-mlzs" satisfied condition "Succeeded or Failed" Jan 14 00:29:20.498: INFO: Trying to get logs from node k8s-agentpool1-35908214-vmss000002 pod exec-volume-test-dynamicpv-mlzs container exec-container-dynamicpv-mlzs: <nil> [1mSTEP[0m: delete the pod Jan 14 00:29:20.744: INFO: Waiting for pod exec-volume-test-dynamicpv-mlzs to disappear Jan 14 00:29:20.850: INFO: Pod exec-volume-test-dynamicpv-mlzs no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-dynamicpv-mlzs Jan 14 00:29:20.850: INFO: Deleting pod "exec-volume-test-dynamicpv-mlzs" in namespace "volume-6656" ... skipping 21 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] volumes [90mtest/e2e/storage/framework/testsuite.go:50[0m should allow exec of files on the volume [90mtest/e2e/storage/testsuites/volumes.go:198[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":35,"completed":4,"skipped":496,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 Jan 14 00:30:02.664: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 38 lines ... Jan 14 00:27:48.506: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.com6tmn9] to have phase Bound Jan 14 00:27:48.613: INFO: PersistentVolumeClaim test.csi.azure.com6tmn9 found but phase is Pending instead of Bound. Jan 14 00:27:50.720: INFO: PersistentVolumeClaim test.csi.azure.com6tmn9 found but phase is Pending instead of Bound. Jan 14 00:27:52.827: INFO: PersistentVolumeClaim test.csi.azure.com6tmn9 found and phase=Bound (4.321741018s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-lr74 [1mSTEP[0m: Creating a pod to test subpath Jan 14 00:27:53.151: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-lr74" in namespace "provisioning-4281" to be "Succeeded or Failed" Jan 14 00:27:53.265: INFO: Pod "pod-subpath-test-dynamicpv-lr74": Phase="Pending", Reason="", readiness=false. Elapsed: 113.775403ms Jan 14 00:27:55.372: INFO: Pod "pod-subpath-test-dynamicpv-lr74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221152928s Jan 14 00:27:57.480: INFO: Pod "pod-subpath-test-dynamicpv-lr74": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328657983s Jan 14 00:27:59.589: INFO: Pod "pod-subpath-test-dynamicpv-lr74": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438183558s Jan 14 00:28:01.698: INFO: Pod "pod-subpath-test-dynamicpv-lr74": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.5465083s Jan 14 00:28:03.806: INFO: Pod "pod-subpath-test-dynamicpv-lr74": Phase="Pending", Reason="", readiness=false. Elapsed: 10.655395935s ... skipping 33 lines ... Jan 14 00:29:15.497: INFO: Pod "pod-subpath-test-dynamicpv-lr74": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.346350664s Jan 14 00:29:17.606: INFO: Pod "pod-subpath-test-dynamicpv-lr74": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.454565343s Jan 14 00:29:19.717: INFO: Pod "pod-subpath-test-dynamicpv-lr74": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.565733577s Jan 14 00:29:21.825: INFO: Pod "pod-subpath-test-dynamicpv-lr74": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.673820709s Jan 14 00:29:23.934: INFO: Pod "pod-subpath-test-dynamicpv-lr74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m30.782820374s [1mSTEP[0m: Saw pod success Jan 14 00:29:23.934: INFO: Pod "pod-subpath-test-dynamicpv-lr74" satisfied condition "Succeeded or Failed" Jan 14 00:29:24.041: INFO: Trying to get logs from node k8s-agentpool1-35908214-vmss000001 pod pod-subpath-test-dynamicpv-lr74 container test-container-volume-dynamicpv-lr74: <nil> [1mSTEP[0m: delete the pod Jan 14 00:29:24.288: INFO: Waiting for pod pod-subpath-test-dynamicpv-lr74 to disappear Jan 14 00:29:24.395: INFO: Pod pod-subpath-test-dynamicpv-lr74 no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-lr74 Jan 14 00:29:24.395: INFO: Deleting pod "pod-subpath-test-dynamicpv-lr74" in namespace "provisioning-4281" ... skipping 29 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should support non-existent path [90mtest/e2e/storage/testsuites/subpath.go:196[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":31,"completed":4,"skipped":295,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 Jan 14 00:30:36.935: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping ... skipping 3 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Inline-volume (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:258[0m [36mDriver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping[0m test/e2e/storage/external/external.go:262 [90m------------------------------[0m ... skipping 22 lines ... 
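The long runs of Pod "...": Phase="Pending" ... Elapsed lines above come from a similar poll that waits for each test pod to reach the "Succeeded or Failed" condition. A rough sketch of that loop; the helper name is hypothetical, and the clientset would be built the same way as in the previous sketch:

package e2esketch

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodSucceededOrFailed polls a pod every 2s and stops once it reaches a
// terminal phase, printing the same Phase/Elapsed progress seen in the log above.
func waitForPodSucceededOrFailed(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	start := time.Now()
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case v1.PodSucceeded:
			return true, nil
		case v1.PodFailed:
			return false, fmt.Errorf("pod %q failed", name)
		default:
			fmt.Printf("Pod %q: Phase=%q, Reason=%q, readiness=false. Elapsed: %v\n",
				name, pod.Status.Phase, pod.Status.Reason, time.Since(start))
			return false, nil
		}
	})
}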
Jan 14 00:29:40.397: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comf7t2t] to have phase Bound Jan 14 00:29:40.503: INFO: PersistentVolumeClaim test.csi.azure.comf7t2t found but phase is Pending instead of Bound. Jan 14 00:29:42.611: INFO: PersistentVolumeClaim test.csi.azure.comf7t2t found but phase is Pending instead of Bound. Jan 14 00:29:44.719: INFO: PersistentVolumeClaim test.csi.azure.comf7t2t found and phase=Bound (4.321839396s) [1mSTEP[0m: Creating pod exec-volume-test-dynamicpv-qjbc [1mSTEP[0m: Creating a pod to test exec-volume-test Jan 14 00:29:45.039: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-qjbc" in namespace "volume-876" to be "Succeeded or Failed" Jan 14 00:29:45.146: INFO: Pod "exec-volume-test-dynamicpv-qjbc": Phase="Pending", Reason="", readiness=false. Elapsed: 107.071044ms Jan 14 00:29:47.254: INFO: Pod "exec-volume-test-dynamicpv-qjbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214231348s Jan 14 00:29:49.361: INFO: Pod "exec-volume-test-dynamicpv-qjbc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321155698s Jan 14 00:29:51.468: INFO: Pod "exec-volume-test-dynamicpv-qjbc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.428547959s Jan 14 00:29:53.576: INFO: Pod "exec-volume-test-dynamicpv-qjbc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.537056773s Jan 14 00:29:55.684: INFO: Pod "exec-volume-test-dynamicpv-qjbc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.644724098s Jan 14 00:29:57.792: INFO: Pod "exec-volume-test-dynamicpv-qjbc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.752900605s Jan 14 00:29:59.899: INFO: Pod "exec-volume-test-dynamicpv-qjbc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.860080157s Jan 14 00:30:02.008: INFO: Pod "exec-volume-test-dynamicpv-qjbc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.968387136s Jan 14 00:30:04.116: INFO: Pod "exec-volume-test-dynamicpv-qjbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.076115548s [1mSTEP[0m: Saw pod success Jan 14 00:30:04.116: INFO: Pod "exec-volume-test-dynamicpv-qjbc" satisfied condition "Succeeded or Failed" Jan 14 00:30:04.223: INFO: Trying to get logs from node k8s-agentpool1-35908214-vmss000002 pod exec-volume-test-dynamicpv-qjbc container exec-container-dynamicpv-qjbc: <nil> [1mSTEP[0m: delete the pod Jan 14 00:30:04.454: INFO: Waiting for pod exec-volume-test-dynamicpv-qjbc to disappear Jan 14 00:30:04.560: INFO: Pod exec-volume-test-dynamicpv-qjbc no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-dynamicpv-qjbc Jan 14 00:30:04.560: INFO: Deleting pod "exec-volume-test-dynamicpv-qjbc" in namespace "volume-876" ... skipping 27 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext3)] volumes [90mtest/e2e/storage/framework/testsuite.go:50[0m should allow exec of files on the volume [90mtest/e2e/storage/testsuites/volumes.go:198[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume","total":28,"completed":6,"skipped":492,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 Jan 14 00:31:16.938: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 143 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS] [90mtest/e2e/storage/testsuites/multivolume.go:323[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]","total":33,"completed":3,"skipped":451,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (block volmode)] multiVolume [Slow][0m [1mshould access to two volumes with the same volume mode and retain data across pod recreation on different node[0m [37mtest/e2e/storage/testsuites/multivolume.go:168[0m ... skipping 80 lines ... 
Jan 14 00:24:38.392: INFO: >>> kubeConfig: /root/tmp3639031375/kubeconfig/kubeconfig.westeurope.json Jan 14 00:24:38.395: INFO: ExecWithOptions: Clientset creation Jan 14 00:24:38.395: INFO: ExecWithOptions: execute(POST https://kubetest-rpwnaldb.westeurope.cloudapp.azure.com/api/v1/namespaces/multivolume-4631/pods/pod-52973b3d-b2a8-4d2a-86ec-373846f20b18/exec?command=%2Fbin%2Fsh&command=-c&command=dd+if%3D%2Fmnt%2Fvolume2++bs%3D64+count%3D1+%7C+sha256sum+%7C+grep+-Fq+c7b5d25b8703309bd455303a5910ca0997a35fd9f69aefb7e6ec16dea61d3678&container=write-pod&container=write-pod&stderr=true&stdout=true) Jan 14 00:24:39.270: INFO: Deleting pod "pod-52973b3d-b2a8-4d2a-86ec-373846f20b18" in namespace "multivolume-4631" Jan 14 00:24:39.380: INFO: Wait up to 5m0s for pod "pod-52973b3d-b2a8-4d2a-86ec-373846f20b18" to be fully deleted [1mSTEP[0m: Creating pod on {Name: Selector:map[] Affinity:&Affinity{NodeAffinity:&NodeAffinity{RequiredDuringSchedulingIgnoredDuringExecution:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{},MatchFields:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:metadata.name,Operator:NotIn,Values:[k8s-agentpool1-35908214-vmss000001],},},},},},PreferredDuringSchedulingIgnoredDuringExecution:[]PreferredSchedulingTerm{},},PodAffinity:nil,PodAntiAffinity:nil,}} with multiple volumes Jan 14 00:29:44.041: FAIL: Unexpected error: <*errors.errorString | 0xc00002b530>: { s: "pod \"pod-981b905d-5833-4ed6-9f4e-f76f6285d308\" is not Running: timed out waiting for the condition", } pod "pod-981b905d-5833-4ed6-9f4e-f76f6285d308" is not Running: timed out waiting for the condition occurred ... skipping 63 lines ... Jan 14 00:31:54.991: INFO: At 2023-01-14 00:24:28 +0000 UTC - event for pod-52973b3d-b2a8-4d2a-86ec-373846f20b18: {kubelet k8s-agentpool1-35908214-vmss000001} Started: Started container write-pod Jan 14 00:31:54.991: INFO: At 2023-01-14 00:24:39 +0000 UTC - event for pod-52973b3d-b2a8-4d2a-86ec-373846f20b18: {kubelet k8s-agentpool1-35908214-vmss000001} Killing: Stopping container write-pod Jan 14 00:31:54.991: INFO: At 2023-01-14 00:24:43 +0000 UTC - event for pod-981b905d-5833-4ed6-9f4e-f76f6285d308: {default-scheduler } Scheduled: Successfully assigned multivolume-4631/pod-981b905d-5833-4ed6-9f4e-f76f6285d308 to k8s-agentpool1-35908214-vmss000000 Jan 14 00:31:54.991: INFO: At 2023-01-14 00:25:45 +0000 UTC - event for pod-981b905d-5833-4ed6-9f4e-f76f6285d308: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-5450c7ea-80ba-4e8b-bfff-9cb9efc710c9" Jan 14 00:31:54.991: INFO: At 2023-01-14 00:25:45 +0000 UTC - event for pod-981b905d-5833-4ed6-9f4e-f76f6285d308: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-be8b71fe-eda4-4770-b36b-5ab89fb3b283" Jan 14 00:31:54.991: INFO: At 2023-01-14 00:26:46 +0000 UTC - event for pod-981b905d-5833-4ed6-9f4e-f76f6285d308: {kubelet k8s-agentpool1-35908214-vmss000000} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume1 volume2], unattached volumes=[kube-api-access-d5zdx volume1 volume2]: timed out waiting for the condition Jan 14 00:31:54.991: INFO: At 2023-01-14 00:27:45 +0000 UTC - event for pod-981b905d-5833-4ed6-9f4e-f76f6285d308: {kubelet k8s-agentpool1-35908214-vmss000000} FailedMapVolume: MapVolume.SetUpDevice failed for volume "pvc-be8b71fe-eda4-4770-b36b-5ab89fb3b283" : rpc error: code = DeadlineExceeded desc = context deadline exceeded Jan 14 00:31:54.991: INFO: 
At 2023-01-14 00:27:45 +0000 UTC - event for pod-981b905d-5833-4ed6-9f4e-f76f6285d308: {kubelet k8s-agentpool1-35908214-vmss000000} FailedMapVolume: MapVolume.SetUpDevice failed for volume "pvc-5450c7ea-80ba-4e8b-bfff-9cb9efc710c9" : rpc error: code = DeadlineExceeded desc = context deadline exceeded Jan 14 00:31:54.991: INFO: At 2023-01-14 00:29:00 +0000 UTC - event for pod-981b905d-5833-4ed6-9f4e-f76f6285d308: {kubelet k8s-agentpool1-35908214-vmss000000} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume2 volume1], unattached volumes=[volume2 kube-api-access-d5zdx volume1]: timed out waiting for the condition Jan 14 00:31:54.991: INFO: At 2023-01-14 00:30:27 +0000 UTC - event for pod-981b905d-5833-4ed6-9f4e-f76f6285d308: {kubelet k8s-agentpool1-35908214-vmss000000} SuccessfulMountVolume: MapVolume.MapPodDevice succeeded for volume "pvc-5450c7ea-80ba-4e8b-bfff-9cb9efc710c9" globalMapPath "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/pvc-5450c7ea-80ba-4e8b-bfff-9cb9efc710c9/dev" Jan 14 00:31:54.991: INFO: At 2023-01-14 00:30:27 +0000 UTC - event for pod-981b905d-5833-4ed6-9f4e-f76f6285d308: {kubelet k8s-agentpool1-35908214-vmss000000} SuccessfulMountVolume: MapVolume.MapPodDevice succeeded for volume "pvc-be8b71fe-eda4-4770-b36b-5ab89fb3b283" globalMapPath "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/pvc-be8b71fe-eda4-4770-b36b-5ab89fb3b283/dev" Jan 14 00:31:54.991: INFO: At 2023-01-14 00:30:27 +0000 UTC - event for pod-981b905d-5833-4ed6-9f4e-f76f6285d308: {kubelet k8s-agentpool1-35908214-vmss000000} SuccessfulMountVolume: MapVolume.MapPodDevice succeeded for volume "pvc-be8b71fe-eda4-4770-b36b-5ab89fb3b283" volumeMapPath "/var/lib/kubelet/pods/55d2495f-c5cd-4473-b1a4-d5d97768d9e0/volumeDevices/kubernetes.io~csi" Jan 14 00:31:54.991: INFO: At 2023-01-14 00:30:27 +0000 UTC - event for pod-981b905d-5833-4ed6-9f4e-f76f6285d308: {kubelet k8s-agentpool1-35908214-vmss000000} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" Jan 14 00:31:54.991: INFO: At 2023-01-14 00:30:27 +0000 UTC - event for pod-981b905d-5833-4ed6-9f4e-f76f6285d308: {kubelet k8s-agentpool1-35908214-vmss000000} SuccessfulMountVolume: MapVolume.MapPodDevice succeeded for volume "pvc-5450c7ea-80ba-4e8b-bfff-9cb9efc710c9" volumeMapPath "/var/lib/kubelet/pods/55d2495f-c5cd-4473-b1a4-d5d97768d9e0/volumeDevices/kubernetes.io~csi" ... skipping 124 lines ... 
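The "Creating pod on {... NodeAffinity ... metadata.name NotIn [k8s-agentpool1-35908214-vmss000001] ...} with multiple volumes" step above is how the multiVolume suite forces the recreated pod onto a different node than the one that ran the first pod. A sketch of building that selector with the core/v1 types; only the excluded node name is taken from the log, the helper itself is illustrative:

package e2esketch

import v1 "k8s.io/api/core/v1"

// affinityExcludingNode returns a NodeAffinity that forbids scheduling onto the
// node that ran the first pod, matching the selector printed in the log above.
func affinityExcludingNode(nodeName string) *v1.Affinity {
	return &v1.Affinity{
		NodeAffinity: &v1.NodeAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{
				NodeSelectorTerms: []v1.NodeSelectorTerm{{
					MatchFields: []v1.NodeSelectorRequirement{{
						Key:      "metadata.name",
						Operator: v1.NodeSelectorOpNotIn,
						Values:   []string{nodeName}, // e.g. "k8s-agentpool1-35908214-vmss000001"
					}},
				}},
			},
		},
	}
}

Per the events above, the recreated pod did land on a different node (vmss000000), but mapping its two block devices only succeeded at 00:30:27, after the suite had already given up at 00:29:44 waiting for the pod to be Running.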
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:50
    should access to two volumes with the same volume mode and retain data across pod recreation on different node [Measurement]
    test/e2e/storage/testsuites/multivolume.go:168

    Jan 14 00:29:44.041: Unexpected error:
        <*errors.errorString | 0xc00002b530>: {
            s: "pod \"pod-981b905d-5833-4ed6-9f4e-f76f6285d308\" is not Running: timed out waiting for the condition",
        }
        pod "pod-981b905d-5833-4ed6-9f4e-f76f6285d308" is not Running: timed out waiting for the condition
    occurred

    test/e2e/storage/testsuites/multivolume.go:497
------------------------------
{"msg":"FAILED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node","total":34,"completed":1,"skipped":254,"failed":1,"failures":["External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node"]}
S
------------------------------
External Storage [Driver: test.csi.azure.com]
  [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  should create read/write inline ephemeral volume
  test/e2e/storage/testsuites/ephemeral.go:196
... skipping 45 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  test/e2e/storage/framework/testsuite.go:50
    should create read/write inline ephemeral volume
    test/e2e/storage/testsuites/ephemeral.go:196
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume","total":35,"completed":3,"skipped":334,"failed":0}
SS
------------------------------
External Storage [Driver: test.csi.azure.com]
  [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  should access to two volumes with the same volume mode and retain data across pod recreation on the same node
  test/e2e/storage/testsuites/multivolume.go:138
... skipping 188 lines ...
test/e2e/storage/external/external.go:174
  [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  test/e2e/storage/framework/testsuite.go:50
    should access to two volumes with the same volume mode and retain data across pod recreation on the same node
    test/e2e/storage/testsuites/multivolume.go:138
------------------------------
{"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node","total":35,"completed":5,"skipped":575,"failed":0}
SSSSSSSSS
------------------------------
External Storage [Driver: test.csi.azure.com]
  [Testpattern: Dynamic PV (ext4)] multiVolume [Slow]
  should concurrently access the single read-only volume from pods on the same node
  test/e2e/storage/testsuites/multivolume.go:423
... skipping 88 lines ...
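The failed multiVolume spec reported above verifies data retention by exec'ing dd if=/mnt/volume2 bs=64 count=1 | sha256sum | grep -Fq <digest> inside the pod (see the ExecWithOptions call earlier in the log), i.e. by comparing the SHA-256 of the first 64 bytes of the block device with a digest recorded when the data was written. A self-contained sketch of deriving such a digest; the payload here is made up, the real suite seeds its own data:

package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	// Hypothetical 64-byte payload standing in for the block the test writes to
	// /mnt/volume2; the real suite generates its own content.
	payload := make([]byte, 64)
	for i := range payload {
		payload[i] = byte(i)
	}
	sum := sha256.Sum256(payload)
	// This hex digest is what the in-pod `sha256sum | grep -Fq <digest>` check compares against.
	fmt.Printf("%x\n", sum)
}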
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the single read-only volume from pods on the same node [90mtest/e2e/storage/testsuites/multivolume.go:423[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node","total":34,"completed":4,"skipped":186,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] provisioning[0m [1mshould provision storage with snapshot data source [Feature:VolumeSnapshotDataSource][0m [37mtest/e2e/storage/testsuites/provisioning.go:208[0m ... skipping 130 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] provisioning [90mtest/e2e/storage/framework/testsuite.go:50[0m should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource] [90mtest/e2e/storage/testsuites/provisioning.go:208[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]","total":34,"completed":2,"skipped":255,"failed":1,"failures":["External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node"]} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy[0m [1m(Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents[0m [37mtest/e2e/storage/testsuites/fsgroupchangepolicy.go:216[0m ... skipping 119 lines ... 
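The "should provision storage with snapshot data source" spec above restores a new claim from a VolumeSnapshot by pointing the claim's dataSource at it. A rough sketch of such a claim using the core/v1 types; names, namespace, and size are placeholders:

package e2esketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// restoredClaim builds a PVC whose contents are restored from an existing
// VolumeSnapshot, the pattern exercised by the snapshot data source spec above.
func restoredClaim(ns, name, snapshotName, storageClass string) *v1.PersistentVolumeClaim {
	apiGroup := "snapshot.storage.k8s.io"
	return &v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: ns},
		Spec: v1.PersistentVolumeClaimSpec{
			AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			StorageClassName: &storageClass,
			DataSource: &v1.TypedLocalObjectReference{
				APIGroup: &apiGroup,
				Kind:     "VolumeSnapshot",
				Name:     snapshotName,
			},
			Resources: v1.ResourceRequirements{
				Requests: v1.ResourceList{v1.ResourceStorage: resource.MustParse("5Gi")},
			},
		},
	}
}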
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy [90mtest/e2e/storage/framework/testsuite.go:50[0m (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents [90mtest/e2e/storage/testsuites/fsgroupchangepolicy.go:216[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents","total":31,"completed":5,"skipped":370,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 Jan 14 00:34:32.724: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping ... skipping 45 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Inline-volume (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:242[0m [36mDriver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping[0m test/e2e/storage/external/external.go:262 [90m------------------------------[0m ... skipping 5 lines ... test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jan 14 00:34:32.748: INFO: >>> kubeConfig: /root/tmp3639031375/kubeconfig/kubeconfig.westeurope.json [1mSTEP[0m: Building a namespace api object, basename topology [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies test/e2e/storage/testsuites/topology.go:194 Jan 14 00:34:33.495: INFO: Driver didn't provide topology keys -- skipping [AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology test/e2e/framework/framework.go:188 Jan 14 00:34:33.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "topology-6361" for this suite. [36m[1mS [SKIPPING] [0.964 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (immediate binding)] topology [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [Measurement][0m [90mtest/e2e/storage/testsuites/topology.go:194[0m [36mDriver didn't provide topology keys -- skipping[0m test/e2e/storage/testsuites/topology.go:126 [90m------------------------------[0m ... skipping 85 lines ... Jan 14 00:32:25.251: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comn9p4v] to have phase Bound Jan 14 00:32:25.358: INFO: PersistentVolumeClaim test.csi.azure.comn9p4v found but phase is Pending instead of Bound. 
Jan 14 00:32:27.466: INFO: PersistentVolumeClaim test.csi.azure.comn9p4v found but phase is Pending instead of Bound. Jan 14 00:32:29.576: INFO: PersistentVolumeClaim test.csi.azure.comn9p4v found and phase=Bound (4.324256106s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-z85c [1mSTEP[0m: Creating a pod to test subpath Jan 14 00:32:29.903: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-z85c" in namespace "provisioning-7929" to be "Succeeded or Failed" Jan 14 00:32:30.011: INFO: Pod "pod-subpath-test-dynamicpv-z85c": Phase="Pending", Reason="", readiness=false. Elapsed: 108.416824ms Jan 14 00:32:32.119: INFO: Pod "pod-subpath-test-dynamicpv-z85c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216121645s Jan 14 00:32:34.231: INFO: Pod "pod-subpath-test-dynamicpv-z85c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327625647s Jan 14 00:32:36.339: INFO: Pod "pod-subpath-test-dynamicpv-z85c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.436165079s Jan 14 00:32:38.448: INFO: Pod "pod-subpath-test-dynamicpv-z85c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.544600166s Jan 14 00:32:40.556: INFO: Pod "pod-subpath-test-dynamicpv-z85c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.652935688s ... skipping 12 lines ... Jan 14 00:33:07.961: INFO: Pod "pod-subpath-test-dynamicpv-z85c": Phase="Pending", Reason="", readiness=false. Elapsed: 38.057607842s Jan 14 00:33:10.069: INFO: Pod "pod-subpath-test-dynamicpv-z85c": Phase="Pending", Reason="", readiness=false. Elapsed: 40.166383994s Jan 14 00:33:12.181: INFO: Pod "pod-subpath-test-dynamicpv-z85c": Phase="Pending", Reason="", readiness=false. Elapsed: 42.277742371s Jan 14 00:33:14.290: INFO: Pod "pod-subpath-test-dynamicpv-z85c": Phase="Pending", Reason="", readiness=false. Elapsed: 44.386731341s Jan 14 00:33:16.397: INFO: Pod "pod-subpath-test-dynamicpv-z85c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 46.49412652s [1mSTEP[0m: Saw pod success Jan 14 00:33:16.397: INFO: Pod "pod-subpath-test-dynamicpv-z85c" satisfied condition "Succeeded or Failed" Jan 14 00:33:16.504: INFO: Trying to get logs from node k8s-agentpool1-35908214-vmss000002 pod pod-subpath-test-dynamicpv-z85c container test-container-subpath-dynamicpv-z85c: <nil> [1mSTEP[0m: delete the pod Jan 14 00:33:16.727: INFO: Waiting for pod pod-subpath-test-dynamicpv-z85c to disappear Jan 14 00:33:16.834: INFO: Pod pod-subpath-test-dynamicpv-z85c no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-z85c Jan 14 00:33:16.834: INFO: Deleting pod "pod-subpath-test-dynamicpv-z85c" in namespace "provisioning-7929" [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-z85c [1mSTEP[0m: Creating a pod to test subpath Jan 14 00:33:17.052: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-z85c" in namespace "provisioning-7929" to be "Succeeded or Failed" Jan 14 00:33:17.159: INFO: Pod "pod-subpath-test-dynamicpv-z85c": Phase="Pending", Reason="", readiness=false. Elapsed: 106.952213ms Jan 14 00:33:19.268: INFO: Pod "pod-subpath-test-dynamicpv-z85c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216585686s Jan 14 00:33:21.376: INFO: Pod "pod-subpath-test-dynamicpv-z85c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324678365s Jan 14 00:33:23.484: INFO: Pod "pod-subpath-test-dynamicpv-z85c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.432584972s Jan 14 00:33:25.592: INFO: Pod "pod-subpath-test-dynamicpv-z85c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.540443614s Jan 14 00:33:27.702: INFO: Pod "pod-subpath-test-dynamicpv-z85c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.650449373s ... skipping 6 lines ... Jan 14 00:33:42.462: INFO: Pod "pod-subpath-test-dynamicpv-z85c": Phase="Pending", Reason="", readiness=false. Elapsed: 25.410065101s Jan 14 00:33:44.571: INFO: Pod "pod-subpath-test-dynamicpv-z85c": Phase="Pending", Reason="", readiness=false. Elapsed: 27.518908948s Jan 14 00:33:46.678: INFO: Pod "pod-subpath-test-dynamicpv-z85c": Phase="Pending", Reason="", readiness=false. Elapsed: 29.626795348s Jan 14 00:33:48.788: INFO: Pod "pod-subpath-test-dynamicpv-z85c": Phase="Pending", Reason="", readiness=false. Elapsed: 31.736338761s Jan 14 00:33:50.896: INFO: Pod "pod-subpath-test-dynamicpv-z85c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.844051345s [1mSTEP[0m: Saw pod success Jan 14 00:33:50.896: INFO: Pod "pod-subpath-test-dynamicpv-z85c" satisfied condition "Succeeded or Failed" Jan 14 00:33:51.003: INFO: Trying to get logs from node k8s-agentpool1-35908214-vmss000002 pod pod-subpath-test-dynamicpv-z85c container test-container-subpath-dynamicpv-z85c: <nil> [1mSTEP[0m: delete the pod Jan 14 00:33:51.230: INFO: Waiting for pod pod-subpath-test-dynamicpv-z85c to disappear Jan 14 00:33:51.337: INFO: Pod pod-subpath-test-dynamicpv-z85c no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-z85c Jan 14 00:33:51.337: INFO: Deleting pod "pod-subpath-test-dynamicpv-z85c" in namespace "provisioning-7929" ... skipping 29 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should support existing directories when readOnly specified in the volumeSource [90mtest/e2e/storage/testsuites/subpath.go:397[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":35,"completed":6,"skipped":584,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 Jan 14 00:35:03.856: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping ... skipping 122 lines ... 
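The subPath specs above, such as "should support existing directories when readOnly specified in the volumeSource", mount the provisioned claim read-only and aim the container at a sub-directory via subPath. A minimal sketch of the relevant volume and mount; the claim name and paths are placeholders:

package e2esketch

import v1 "k8s.io/api/core/v1"

// readOnlySubPathVolume returns the volume and mount used to expose one
// directory of a PVC read-only inside a test container, as in the subPath specs above.
func readOnlySubPathVolume(claimName string) (v1.Volume, v1.VolumeMount) {
	vol := v1.Volume{
		Name: "test-volume",
		VolumeSource: v1.VolumeSource{
			PersistentVolumeClaim: &v1.PersistentVolumeClaimVolumeSource{
				ClaimName: claimName,
				ReadOnly:  true, // readOnly specified in the volumeSource
			},
		},
	}
	mount := v1.VolumeMount{
		Name:      "test-volume",
		MountPath: "/test-volume",
		SubPath:   "provisioning", // hypothetical existing sub-directory
	}
	return vol, mount
}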
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS] [90mtest/e2e/storage/testsuites/multivolume.go:378[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]","total":28,"completed":7,"skipped":728,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 Jan 14 00:35:41.400: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 463 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] provisioning [90mtest/e2e/storage/framework/testsuite.go:50[0m should provision storage with pvc data source in parallel [Slow] [90mtest/e2e/storage/testsuites/provisioning.go:459[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]","total":33,"completed":4,"skipped":453,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand test/e2e/storage/framework/testsuite.go:51 Jan 14 00:36:34.652: INFO: Driver "test.csi.azure.com" does not support volume expansion - skipping ... skipping 140 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy [90mtest/e2e/storage/framework/testsuite.go:50[0m (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents [90mtest/e2e/storage/testsuites/fsgroupchangepolicy.go:216[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents","total":31,"completed":6,"skipped":474,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral[0m [1mshould support two pods which have the same volume definition[0m [37mtest/e2e/storage/testsuites/ephemeral.go:216[0m ... skipping 83 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral [90mtest/e2e/storage/framework/testsuite.go:50[0m should support two pods which have the same volume definition [90mtest/e2e/storage/testsuites/ephemeral.go:216[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition","total":35,"completed":4,"skipped":336,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 Jan 14 00:37:55.642: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 90 lines ... 
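The "Generic Ephemeral-volume ... should support two pods which have the same volume definition" specs above and below rely on each pod getting its own PVC stamped out from an inline claim template, so two pods can share one volume definition without colliding. A sketch of such a volume; the storage class and size are placeholders:

package e2esketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// genericEphemeralVolume returns an inline ephemeral volume definition; each pod
// that uses it gets its own PVC named <pod-name>-<volume-name>.
func genericEphemeralVolume(storageClass string) v1.Volume {
	return v1.Volume{
		Name: "my-volume",
		VolumeSource: v1.VolumeSource{
			Ephemeral: &v1.EphemeralVolumeSource{
				VolumeClaimTemplate: &v1.PersistentVolumeClaimTemplate{
					ObjectMeta: metav1.ObjectMeta{},
					Spec: v1.PersistentVolumeClaimSpec{
						AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
						StorageClassName: &storageClass,
						Resources: v1.ResourceRequirements{
							Requests: v1.ResourceList{v1.ResourceStorage: resource.MustParse("1Gi")},
						},
					},
				},
			},
		},
	}
}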
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral [90mtest/e2e/storage/framework/testsuite.go:50[0m should support two pods which have the same volume definition [90mtest/e2e/storage/testsuites/ephemeral.go:216[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition","total":34,"completed":5,"skipped":215,"failed":0} [36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (ext3)] volumes[0m [1mshould store data[0m [37mtest/e2e/storage/testsuites/volumes.go:161[0m ... skipping 114 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext3)] volumes [90mtest/e2e/storage/framework/testsuite.go:50[0m should store data [90mtest/e2e/storage/testsuites/volumes.go:161[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext3)] volumes should store data","total":28,"completed":8,"skipped":819,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand test/e2e/storage/framework/testsuite.go:51 Jan 14 00:38:06.891: INFO: Distro debian doesn't support ntfs -- skipping ... skipping 333 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] provisioning [90mtest/e2e/storage/framework/testsuite.go:50[0m should provision storage with pvc data source in parallel [Slow] [90mtest/e2e/storage/testsuites/provisioning.go:459[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]","total":35,"completed":7,"skipped":619,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 Jan 14 00:38:28.761: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping ... skipping 37 lines ... 
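The "[Testpattern: Dynamic PV (ext3)] volumes should store data" spec above provisions its claim from a per-test StorageClass whose filesystem type comes from the test pattern. For a CSI driver this is commonly requested through the csi.storage.k8s.io/fstype parameter; a sketch under that assumption (the class name is a placeholder, and the exact parameter the external suite sets may differ):

package e2esketch

import (
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ext3StorageClass sketches a StorageClass for the test driver that requests an
// ext3 filesystem via the conventional CSI fstype parameter (assumption, see above).
func ext3StorageClass() *storagev1.StorageClass {
	return &storagev1.StorageClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "volume-ext3-example"},
		Provisioner: "test.csi.azure.com",
		Parameters:  map[string]string{"csi.storage.k8s.io/fstype": "ext3"},
	}
}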
[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (block volmode)] volumes[0m [1mshould store data[0m [37mtest/e2e/storage/testsuites/volumes.go:161[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":34,"completed":3,"skipped":315,"failed":1,"failures":["External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node"]} [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jan 14 00:36:17.023: INFO: >>> kubeConfig: /root/tmp3639031375/kubeconfig/kubeconfig.westeurope.json ... skipping 94 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] volumes [90mtest/e2e/storage/framework/testsuite.go:50[0m should store data [90mtest/e2e/storage/testsuites/volumes.go:161[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":34,"completed":4,"skipped":315,"failed":1,"failures":["External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node"]} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath test/e2e/storage/framework/testsuite.go:51 Jan 14 00:38:44.601: INFO: Distro debian doesn't support ntfs -- skipping ... skipping 175 lines ... Jan 14 00:37:52.712: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comrrgr8] to have phase Bound Jan 14 00:37:52.819: INFO: PersistentVolumeClaim test.csi.azure.comrrgr8 found but phase is Pending instead of Bound. Jan 14 00:37:54.927: INFO: PersistentVolumeClaim test.csi.azure.comrrgr8 found but phase is Pending instead of Bound. Jan 14 00:37:57.035: INFO: PersistentVolumeClaim test.csi.azure.comrrgr8 found and phase=Bound (4.323464s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-7l9k [1mSTEP[0m: Creating a pod to test subpath Jan 14 00:37:57.358: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-7l9k" in namespace "provisioning-8094" to be "Succeeded or Failed" Jan 14 00:37:57.465: INFO: Pod "pod-subpath-test-dynamicpv-7l9k": Phase="Pending", Reason="", readiness=false. 
Elapsed: 106.874941ms Jan 14 00:37:59.575: INFO: Pod "pod-subpath-test-dynamicpv-7l9k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216318769s Jan 14 00:38:01.682: INFO: Pod "pod-subpath-test-dynamicpv-7l9k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323480122s Jan 14 00:38:03.790: INFO: Pod "pod-subpath-test-dynamicpv-7l9k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.431874909s Jan 14 00:38:05.898: INFO: Pod "pod-subpath-test-dynamicpv-7l9k": Phase="Pending", Reason="", readiness=false. Elapsed: 8.53956474s Jan 14 00:38:08.008: INFO: Pod "pod-subpath-test-dynamicpv-7l9k": Phase="Pending", Reason="", readiness=false. Elapsed: 10.648998463s ... skipping 5 lines ... Jan 14 00:38:20.660: INFO: Pod "pod-subpath-test-dynamicpv-7l9k": Phase="Pending", Reason="", readiness=false. Elapsed: 23.301311338s Jan 14 00:38:22.767: INFO: Pod "pod-subpath-test-dynamicpv-7l9k": Phase="Pending", Reason="", readiness=false. Elapsed: 25.408925341s Jan 14 00:38:24.876: INFO: Pod "pod-subpath-test-dynamicpv-7l9k": Phase="Pending", Reason="", readiness=false. Elapsed: 27.517307979s Jan 14 00:38:26.985: INFO: Pod "pod-subpath-test-dynamicpv-7l9k": Phase="Pending", Reason="", readiness=false. Elapsed: 29.626255049s Jan 14 00:38:29.093: INFO: Pod "pod-subpath-test-dynamicpv-7l9k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.734624748s [1mSTEP[0m: Saw pod success Jan 14 00:38:29.093: INFO: Pod "pod-subpath-test-dynamicpv-7l9k" satisfied condition "Succeeded or Failed" Jan 14 00:38:29.200: INFO: Trying to get logs from node k8s-agentpool1-35908214-vmss000001 pod pod-subpath-test-dynamicpv-7l9k container test-container-subpath-dynamicpv-7l9k: <nil> [1mSTEP[0m: delete the pod Jan 14 00:38:29.450: INFO: Waiting for pod pod-subpath-test-dynamicpv-7l9k to disappear Jan 14 00:38:29.558: INFO: Pod pod-subpath-test-dynamicpv-7l9k no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-7l9k Jan 14 00:38:29.558: INFO: Deleting pod "pod-subpath-test-dynamicpv-7l9k" in namespace "provisioning-8094" ... skipping 23 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should support readOnly file specified in the volumeMount [LinuxOnly] [90mtest/e2e/storage/testsuites/subpath.go:382[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":31,"completed":7,"skipped":526,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (block volmode)] provisioning[0m [1mshould provision storage with snapshot data source [Feature:VolumeSnapshotDataSource][0m [37mtest/e2e/storage/testsuites/provisioning.go:208[0m ... skipping 122 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] provisioning [90mtest/e2e/storage/framework/testsuite.go:50[0m should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource] [90mtest/e2e/storage/testsuites/provisioning.go:208[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]","total":33,"completed":5,"skipped":547,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 Jan 14 00:39:34.059: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 181 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy [90mtest/e2e/storage/framework/testsuite.go:50[0m (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents [90mtest/e2e/storage/testsuites/fsgroupchangepolicy.go:216[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents","total":28,"completed":9,"skipped":942,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 Jan 14 00:40:47.691: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping ... skipping 45 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to use a volume in a pod with mismatched mode [Slow] [BeforeEach][0m [90mtest/e2e/storage/testsuites/volumemode.go:299[0m [36mDriver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping[0m test/e2e/storage/external/external.go:262 [90m------------------------------[0m ... skipping 32 lines ... Jan 14 00:38:00.973: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comq2xw4] to have phase Bound Jan 14 00:38:01.080: INFO: PersistentVolumeClaim test.csi.azure.comq2xw4 found but phase is Pending instead of Bound. Jan 14 00:38:03.190: INFO: PersistentVolumeClaim test.csi.azure.comq2xw4 found but phase is Pending instead of Bound. 
Jan 14 00:38:05.298: INFO: PersistentVolumeClaim test.csi.azure.comq2xw4 found and phase=Bound (4.324790214s) [1mSTEP[0m: [init] starting a pod to use the claim [1mSTEP[0m: [init] check pod success Jan 14 00:38:05.730: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-mtq5b" in namespace "snapshotting-4892" to be "Succeeded or Failed" Jan 14 00:38:05.837: INFO: Pod "pvc-snapshottable-tester-mtq5b": Phase="Pending", Reason="", readiness=false. Elapsed: 106.640077ms Jan 14 00:38:07.964: INFO: Pod "pvc-snapshottable-tester-mtq5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.234408032s Jan 14 00:38:10.073: INFO: Pod "pvc-snapshottable-tester-mtq5b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.342884232s Jan 14 00:38:12.182: INFO: Pod "pvc-snapshottable-tester-mtq5b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.45199437s Jan 14 00:38:14.291: INFO: Pod "pvc-snapshottable-tester-mtq5b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.561241947s Jan 14 00:38:16.399: INFO: Pod "pvc-snapshottable-tester-mtq5b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.669481775s Jan 14 00:38:18.508: INFO: Pod "pvc-snapshottable-tester-mtq5b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.778018581s Jan 14 00:38:20.616: INFO: Pod "pvc-snapshottable-tester-mtq5b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.886524998s Jan 14 00:38:22.725: INFO: Pod "pvc-snapshottable-tester-mtq5b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.995520231s Jan 14 00:38:24.835: INFO: Pod "pvc-snapshottable-tester-mtq5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.10508617s [1mSTEP[0m: Saw pod success Jan 14 00:38:24.835: INFO: Pod "pvc-snapshottable-tester-mtq5b" satisfied condition "Succeeded or Failed" [1mSTEP[0m: [init] checking the claim Jan 14 00:38:24.943: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comq2xw4] to have phase Bound Jan 14 00:38:25.050: INFO: PersistentVolumeClaim test.csi.azure.comq2xw4 found and phase=Bound (106.969235ms) [1mSTEP[0m: [init] checking the PV [1mSTEP[0m: [init] deleting the pod Jan 14 00:38:25.395: INFO: Pod pvc-snapshottable-tester-mtq5b has the following logs: ... skipping 35 lines ... Jan 14 00:38:38.231: INFO: WaitUntil finished successfully after 109.909651ms [1mSTEP[0m: getting the snapshot and snapshot content [1mSTEP[0m: checking the snapshot [1mSTEP[0m: checking the SnapshotContent [1mSTEP[0m: Modifying source data test [1mSTEP[0m: modifying the data in the source PVC Jan 14 00:38:38.773: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-6f2vq" in namespace "snapshotting-4892" to be "Succeeded or Failed" Jan 14 00:38:38.881: INFO: Pod "pvc-snapshottable-data-tester-6f2vq": Phase="Pending", Reason="", readiness=false. Elapsed: 107.143195ms Jan 14 00:38:40.989: INFO: Pod "pvc-snapshottable-data-tester-6f2vq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216019773s Jan 14 00:38:43.100: INFO: Pod "pvc-snapshottable-data-tester-6f2vq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327011151s Jan 14 00:38:45.208: INFO: Pod "pvc-snapshottable-data-tester-6f2vq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.434951287s Jan 14 00:38:47.317: INFO: Pod "pvc-snapshottable-data-tester-6f2vq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.543828117s Jan 14 00:38:49.424: INFO: Pod "pvc-snapshottable-data-tester-6f2vq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.650915755s ... 
skipping 5 lines ... Jan 14 00:39:02.081: INFO: Pod "pvc-snapshottable-data-tester-6f2vq": Phase="Pending", Reason="", readiness=false. Elapsed: 23.307208755s Jan 14 00:39:04.188: INFO: Pod "pvc-snapshottable-data-tester-6f2vq": Phase="Pending", Reason="", readiness=false. Elapsed: 25.414941757s Jan 14 00:39:06.298: INFO: Pod "pvc-snapshottable-data-tester-6f2vq": Phase="Pending", Reason="", readiness=false. Elapsed: 27.524225987s Jan 14 00:39:08.408: INFO: Pod "pvc-snapshottable-data-tester-6f2vq": Phase="Pending", Reason="", readiness=false. Elapsed: 29.634084034s Jan 14 00:39:10.517: INFO: Pod "pvc-snapshottable-data-tester-6f2vq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.743221392s [1mSTEP[0m: Saw pod success Jan 14 00:39:10.517: INFO: Pod "pvc-snapshottable-data-tester-6f2vq" satisfied condition "Succeeded or Failed" Jan 14 00:39:10.735: INFO: Pod pvc-snapshottable-data-tester-6f2vq has the following logs: Jan 14 00:39:10.735: INFO: Deleting pod "pvc-snapshottable-data-tester-6f2vq" in namespace "snapshotting-4892" Jan 14 00:39:10.847: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-6f2vq" to be fully deleted [1mSTEP[0m: creating a pvc from the snapshot [1mSTEP[0m: starting a pod to use the snapshot Jan 14 00:40:05.415: INFO: Running '/usr/local/bin/kubectl --server=https://kubetest-rpwnaldb.westeurope.cloudapp.azure.com --kubeconfig=/root/tmp3639031375/kubeconfig/kubeconfig.westeurope.json --namespace=snapshotting-4892 exec restored-pvc-tester-ww2b8 --namespace=snapshotting-4892 -- cat /mnt/test/data' ... skipping 47 lines ... [90mtest/e2e/storage/testsuites/snapshottable.go:113[0m [90mtest/e2e/storage/testsuites/snapshottable.go:176[0m should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent) [90mtest/e2e/storage/testsuites/snapshottable.go:278[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)","total":34,"completed":6,"skipped":217,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] subPath[0m [1mshould fail if subpath with backstepping is outside the volume [Slow][LinuxOnly][0m [37mtest/e2e/storage/testsuites/subpath.go:280[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jan 14 00:39:34.172: INFO: >>> kubeConfig: /root/tmp3639031375/kubeconfig/kubeconfig.westeurope.json [1mSTEP[0m: Building a namespace api object, basename provisioning [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] test/e2e/storage/testsuites/subpath.go:280 Jan 14 00:39:34.920: INFO: Creating resource for dynamic PV 
Jan 14 00:39:34.920: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(test.csi.azure.com) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass provisioning-5872-e2e-scpnp6w [1mSTEP[0m: creating a claim Jan 14 00:39:35.028: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 14 00:39:35.137: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comc4pmq] to have phase Bound Jan 14 00:39:35.243: INFO: PersistentVolumeClaim test.csi.azure.comc4pmq found but phase is Pending instead of Bound. Jan 14 00:39:37.352: INFO: PersistentVolumeClaim test.csi.azure.comc4pmq found but phase is Pending instead of Bound. Jan 14 00:39:39.461: INFO: PersistentVolumeClaim test.csi.azure.comc4pmq found and phase=Bound (4.32380508s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-gvgp [1mSTEP[0m: Checking for subpath error in container status Jan 14 00:40:40.000: INFO: Deleting pod "pod-subpath-test-dynamicpv-gvgp" in namespace "provisioning-5872" Jan 14 00:40:40.111: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-gvgp" to be fully deleted [1mSTEP[0m: Deleting pod Jan 14 00:40:42.327: INFO: Deleting pod "pod-subpath-test-dynamicpv-gvgp" in namespace "provisioning-5872" [1mSTEP[0m: Deleting pvc Jan 14 00:40:42.437: INFO: Deleting PersistentVolumeClaim "test.csi.azure.comc4pmq" ... skipping 16 lines ... [32m• [SLOW TEST:109.887 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [90mtest/e2e/storage/testsuites/subpath.go:280[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]","total":33,"completed":6,"skipped":658,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes test/e2e/storage/framework/testsuite.go:51 Jan 14 00:41:24.098: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 243 lines ... Jan 14 00:39:12.422: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comdvjml] to have phase Bound Jan 14 00:39:12.529: INFO: PersistentVolumeClaim test.csi.azure.comdvjml found but phase is Pending instead of Bound. Jan 14 00:39:14.639: INFO: PersistentVolumeClaim test.csi.azure.comdvjml found but phase is Pending instead of Bound. Jan 14 00:39:16.750: INFO: PersistentVolumeClaim test.csi.azure.comdvjml found and phase=Bound (4.327634027s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-lql9 [1mSTEP[0m: Creating a pod to test subpath Jan 14 00:39:17.073: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-lql9" in namespace "provisioning-9367" to be "Succeeded or Failed" Jan 14 00:39:17.184: INFO: Pod "pod-subpath-test-dynamicpv-lql9": Phase="Pending", Reason="", readiness=false. Elapsed: 110.285114ms Jan 14 00:39:19.293: INFO: Pod "pod-subpath-test-dynamicpv-lql9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.219687467s Jan 14 00:39:21.402: INFO: Pod "pod-subpath-test-dynamicpv-lql9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32891784s Jan 14 00:39:23.510: INFO: Pod "pod-subpath-test-dynamicpv-lql9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.436890974s Jan 14 00:39:25.618: INFO: Pod "pod-subpath-test-dynamicpv-lql9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.544954501s Jan 14 00:39:27.731: INFO: Pod "pod-subpath-test-dynamicpv-lql9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.657805732s ... skipping 34 lines ... Jan 14 00:40:41.540: INFO: Pod "pod-subpath-test-dynamicpv-lql9": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.466677339s Jan 14 00:40:43.650: INFO: Pod "pod-subpath-test-dynamicpv-lql9": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.576424319s Jan 14 00:40:45.759: INFO: Pod "pod-subpath-test-dynamicpv-lql9": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.68519901s Jan 14 00:40:47.868: INFO: Pod "pod-subpath-test-dynamicpv-lql9": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.794239425s Jan 14 00:40:49.977: INFO: Pod "pod-subpath-test-dynamicpv-lql9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m32.903724261s [1mSTEP[0m: Saw pod success Jan 14 00:40:49.977: INFO: Pod "pod-subpath-test-dynamicpv-lql9" satisfied condition "Succeeded or Failed" Jan 14 00:40:50.086: INFO: Trying to get logs from node k8s-agentpool1-35908214-vmss000000 pod pod-subpath-test-dynamicpv-lql9 container test-container-volume-dynamicpv-lql9: <nil> [1mSTEP[0m: delete the pod Jan 14 00:40:50.329: INFO: Waiting for pod pod-subpath-test-dynamicpv-lql9 to disappear Jan 14 00:40:50.437: INFO: Pod pod-subpath-test-dynamicpv-lql9 no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-lql9 Jan 14 00:40:50.437: INFO: Deleting pod "pod-subpath-test-dynamicpv-lql9" in namespace "provisioning-9367" ... skipping 23 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should support existing directory [90mtest/e2e/storage/testsuites/subpath.go:207[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":31,"completed":8,"skipped":545,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (ext4)] volumes test/e2e/storage/framework/testsuite.go:51 Jan 14 00:41:32.322: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping ... skipping 45 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:280[0m [36mDistro debian doesn't support ntfs -- skipping[0m test/e2e/storage/framework/testsuite.go:127 [90m------------------------------[0m ... skipping 194 lines ... 
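The subPath specs above mount a sub-directory of the freshly provisioned volume into the test container. A rough sketch of that container shape, assuming a hypothetical busybox image, command, and paths (none of these values are taken from the framework):

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// subPathContainer mounts a sub-directory of a dynamically provisioned
// volume, mirroring what the subPath specs exercise. Image, command, and
// paths are placeholders, not the framework's values.
func subPathContainer(volumeName string) corev1.Container {
	return corev1.Container{
		Name:    "test-container-subpath",
		Image:   "registry.k8s.io/e2e-test-images/busybox:1.29", // placeholder image
		Command: []string{"sh", "-c", "ls /test-volume"},
		VolumeMounts: []corev1.VolumeMount{{
			Name:      volumeName,
			MountPath: "/test-volume",
			SubPath:   "existing-directory", // relative path inside the volume
		}},
	}
}
```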
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with different volume mode and retain data across pod recreation on different node [90mtest/e2e/storage/testsuites/multivolume.go:248[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node","total":35,"completed":8,"skipped":683,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 Jan 14 00:41:44.896: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 232 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with the same volume mode and retain data across pod recreation on the same node [90mtest/e2e/storage/testsuites/multivolume.go:138[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node","total":34,"completed":7,"skipped":248,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes test/e2e/storage/framework/testsuite.go:51 Jan 14 00:42:55.531: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes test/e2e/framework/framework.go:188 ... skipping 18 lines ... test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jan 14 00:42:55.572: INFO: >>> kubeConfig: /root/tmp3639031375/kubeconfig/kubeconfig.westeurope.json [1mSTEP[0m: Building a namespace api object, basename topology [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies test/e2e/storage/testsuites/topology.go:194 Jan 14 00:42:56.327: INFO: Driver didn't provide topology keys -- skipping [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology test/e2e/framework/framework.go:188 Jan 14 00:42:56.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "topology-1688" for this suite. [36m[1mS [SKIPPING] [0.975 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (delayed binding)] topology [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail to schedule a pod which has topologies that conflict with AllowedTopologies [Measurement][0m [90mtest/e2e/storage/testsuites/topology.go:194[0m [36mDriver didn't provide topology keys -- skipping[0m test/e2e/storage/testsuites/topology.go:126 [90m------------------------------[0m ... skipping 214 lines ... 
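The topology spec above is skipped because the external driver manifest does not advertise topology keys. For context only, this is roughly the StorageClass shape such a spec would exercise; the zone key and value are assumptions, not values taken from this run.

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// topologyRestrictedClass sketches a StorageClass limited by
// AllowedTopologies. The key and zone value below are placeholders.
func topologyRestrictedClass() *storagev1.StorageClass {
	binding := storagev1.VolumeBindingWaitForFirstConsumer
	return &storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: "topology-restricted"},
		Provisioner:       "test.csi.azure.com",
		VolumeBindingMode: &binding,
		AllowedTopologies: []corev1.TopologySelectorTerm{{
			MatchLabelExpressions: []corev1.TopologySelectorLabelRequirement{{
				Key:    "topology.kubernetes.io/zone", // assumed key
				Values: []string{"westeurope-1"},       // assumed zone value
			}},
		}},
	}
}
```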
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with the same volume mode and retain data across pod recreation on the same node [90mtest/e2e/storage/testsuites/multivolume.go:138[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node","total":31,"completed":9,"skipped":796,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral[0m [1mshould support two pods which have the same volume definition[0m [37mtest/e2e/storage/testsuites/ephemeral.go:216[0m ... skipping 61 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral [90mtest/e2e/storage/framework/testsuite.go:50[0m should support two pods which have the same volume definition [90mtest/e2e/storage/testsuites/ephemeral.go:216[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition","total":34,"completed":5,"skipped":639,"failed":1,"failures":["External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node"]} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] volumes[0m [1mshould store data[0m [37mtest/e2e/storage/testsuites/volumes.go:161[0m ... skipping 104 lines ... 
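The ephemeral spec above hands the same inline volume definition to two pods; each pod still gets its own PVC stamped out from the template. A hedged sketch of such a definition, with placeholder names and size:

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// genericEphemeralVolume is the kind of inline volume definition the
// "two pods which have the same volume definition" spec gives to both pods.
// The volume name, storage class, and size are placeholders.
func genericEphemeralVolume(scName string) corev1.Volume {
	return corev1.Volume{
		Name: "scratch",
		VolumeSource: corev1.VolumeSource{
			Ephemeral: &corev1.EphemeralVolumeSource{
				VolumeClaimTemplate: &corev1.PersistentVolumeClaimTemplate{
					Spec: corev1.PersistentVolumeClaimSpec{
						StorageClassName: &scName,
						AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
						Resources: corev1.ResourceRequirements{
							Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("5Gi")},
						},
					},
				},
			},
		},
	}
}
```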
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] volumes [90mtest/e2e/storage/framework/testsuite.go:50[0m should store data [90mtest/e2e/storage/testsuites/volumes.go:161[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":35,"completed":9,"skipped":716,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral test/e2e/storage/framework/testsuite.go:51 Jan 14 00:44:06.057: INFO: Driver "test.csi.azure.com" does not support volume type "CSIInlineVolume" - skipping ... skipping 79 lines ... [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] provisioning[0m [1mshould provision storage with pvc data source[0m [37mtest/e2e/storage/testsuites/provisioning.go:421[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]","total":35,"completed":5,"skipped":445,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jan 14 00:41:26.249: INFO: >>> kubeConfig: /root/tmp3639031375/kubeconfig/kubeconfig.westeurope.json ... skipping 98 lines ... 
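The provisioning spec introduced above clones a new claim from an existing PVC. Relative to a plain dynamically provisioned claim, the only addition is the dataSource reference; a small sketch follows (the helper name is illustrative, not the framework's):

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// withPVCDataSource marks an already-built claim as a clone of an existing
// PVC in the same namespace; this is the only change the "pvc data source"
// spec needs relative to a plain dynamically provisioned claim.
func withPVCDataSource(pvc *corev1.PersistentVolumeClaim, sourcePVC string) {
	pvc.Spec.DataSource = &corev1.TypedLocalObjectReference{
		Kind: "PersistentVolumeClaim", // core group, so APIGroup stays nil
		Name: sourcePVC,
	}
}
```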
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] provisioning [90mtest/e2e/storage/framework/testsuite.go:50[0m should provision storage with pvc data source [90mtest/e2e/storage/testsuites/provisioning.go:421[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source","total":35,"completed":6,"skipped":445,"failed":0} [36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 Jan 14 00:44:30.493: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 3 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:242[0m [36mDriver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping[0m test/e2e/storage/external/external.go:262 [90m------------------------------[0m ... skipping 86 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS] [90mtest/e2e/storage/testsuites/multivolume.go:378[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]","total":33,"completed":7,"skipped":907,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath test/e2e/storage/framework/testsuite.go:51 Jan 14 00:44:59.684: INFO: Distro debian doesn't support ntfs -- skipping ... skipping 131 lines ... 
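The repeated "found but phase is Pending instead of Bound" lines throughout this log come from polling claims until they bind. A minimal client-go sketch of that wait, with an illustrative helper name and interval (not the framework's implementation):

```go
package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForClaimBound polls a claim until it reports phase Bound, the same
// condition behind the "found but phase is Pending instead of Bound" lines.
func waitForClaimBound(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pvc.Status.Phase == corev1.ClaimBound, nil
	})
}
```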
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral [90mtest/e2e/storage/framework/testsuite.go:50[0m should support multiple inline ephemeral volumes [90mtest/e2e/storage/testsuites/ephemeral.go:254[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":34,"completed":8,"skipped":340,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] subPath[0m [1mshould support file as subpath [LinuxOnly][0m [37mtest/e2e/storage/testsuites/subpath.go:232[0m ... skipping 17 lines ... Jan 14 00:43:37.316: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.com2xx4f] to have phase Bound Jan 14 00:43:37.424: INFO: PersistentVolumeClaim test.csi.azure.com2xx4f found but phase is Pending instead of Bound. Jan 14 00:43:39.533: INFO: PersistentVolumeClaim test.csi.azure.com2xx4f found but phase is Pending instead of Bound. Jan 14 00:43:41.642: INFO: PersistentVolumeClaim test.csi.azure.com2xx4f found and phase=Bound (4.32546182s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-bwtt [1mSTEP[0m: Creating a pod to test atomic-volume-subpath Jan 14 00:43:41.967: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-bwtt" in namespace "provisioning-9699" to be "Succeeded or Failed" Jan 14 00:43:42.075: INFO: Pod "pod-subpath-test-dynamicpv-bwtt": Phase="Pending", Reason="", readiness=false. Elapsed: 108.398494ms Jan 14 00:43:44.185: INFO: Pod "pod-subpath-test-dynamicpv-bwtt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218662311s Jan 14 00:43:46.294: INFO: Pod "pod-subpath-test-dynamicpv-bwtt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327082438s Jan 14 00:43:48.402: INFO: Pod "pod-subpath-test-dynamicpv-bwtt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.435648431s Jan 14 00:43:50.511: INFO: Pod "pod-subpath-test-dynamicpv-bwtt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.544444104s Jan 14 00:43:52.620: INFO: Pod "pod-subpath-test-dynamicpv-bwtt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.653397055s ... skipping 14 lines ... Jan 14 00:44:24.258: INFO: Pod "pod-subpath-test-dynamicpv-bwtt": Phase="Running", Reason="", readiness=true. Elapsed: 42.29168148s Jan 14 00:44:26.366: INFO: Pod "pod-subpath-test-dynamicpv-bwtt": Phase="Running", Reason="", readiness=true. Elapsed: 44.399726994s Jan 14 00:44:28.475: INFO: Pod "pod-subpath-test-dynamicpv-bwtt": Phase="Running", Reason="", readiness=true. Elapsed: 46.50783201s Jan 14 00:44:30.584: INFO: Pod "pod-subpath-test-dynamicpv-bwtt": Phase="Running", Reason="", readiness=false. Elapsed: 48.617505397s Jan 14 00:44:32.702: INFO: Pod "pod-subpath-test-dynamicpv-bwtt": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 50.735719456s [1mSTEP[0m: Saw pod success Jan 14 00:44:32.702: INFO: Pod "pod-subpath-test-dynamicpv-bwtt" satisfied condition "Succeeded or Failed" Jan 14 00:44:32.810: INFO: Trying to get logs from node k8s-agentpool1-35908214-vmss000000 pod pod-subpath-test-dynamicpv-bwtt container test-container-subpath-dynamicpv-bwtt: <nil> [1mSTEP[0m: delete the pod Jan 14 00:44:33.060: INFO: Waiting for pod pod-subpath-test-dynamicpv-bwtt to disappear Jan 14 00:44:33.168: INFO: Pod pod-subpath-test-dynamicpv-bwtt no longer exists [1mSTEP[0m: Deleting pod pod-subpath-test-dynamicpv-bwtt Jan 14 00:44:33.168: INFO: Deleting pod "pod-subpath-test-dynamicpv-bwtt" in namespace "provisioning-9699" ... skipping 29 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should support file as subpath [LinuxOnly] [90mtest/e2e/storage/testsuites/subpath.go:232[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":31,"completed":10,"skipped":827,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 Jan 14 00:45:45.687: INFO: Distro debian doesn't support ntfs -- skipping ... skipping 175 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the single volume from pods on the same node [90mtest/e2e/storage/testsuites/multivolume.go:298[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node","total":35,"completed":10,"skipped":888,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] subPath[0m [1mshould fail if subpath directory is outside the volume [Slow][LinuxOnly][0m [37mtest/e2e/storage/testsuites/subpath.go:242[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jan 14 00:45:43.789: INFO: >>> kubeConfig: /root/tmp3639031375/kubeconfig/kubeconfig.westeurope.json [1mSTEP[0m: Building a namespace api object, basename provisioning [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail if subpath directory is outside the volume [Slow][LinuxOnly] test/e2e/storage/testsuites/subpath.go:242 Jan 14 00:45:44.543: INFO: Creating resource for dynamic PV Jan 14 00:45:44.543: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, 
driver(test.csi.azure.com) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass provisioning-668-e2e-scsmm6f [1mSTEP[0m: creating a claim Jan 14 00:45:44.651: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 14 00:45:44.760: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.com8wght] to have phase Bound Jan 14 00:45:44.868: INFO: PersistentVolumeClaim test.csi.azure.com8wght found but phase is Pending instead of Bound. Jan 14 00:45:46.976: INFO: PersistentVolumeClaim test.csi.azure.com8wght found but phase is Pending instead of Bound. Jan 14 00:45:49.083: INFO: PersistentVolumeClaim test.csi.azure.com8wght found and phase=Bound (4.323540582s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-5zqp [1mSTEP[0m: Checking for subpath error in container status Jan 14 00:46:07.627: INFO: Deleting pod "pod-subpath-test-dynamicpv-5zqp" in namespace "provisioning-668" Jan 14 00:46:07.736: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-5zqp" to be fully deleted [1mSTEP[0m: Deleting pod Jan 14 00:46:09.952: INFO: Deleting pod "pod-subpath-test-dynamicpv-5zqp" in namespace "provisioning-668" [1mSTEP[0m: Deleting pvc Jan 14 00:46:10.060: INFO: Deleting PersistentVolumeClaim "test.csi.azure.com8wght" ... skipping 22 lines ... [32m• [SLOW TEST:98.524 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should fail if subpath directory is outside the volume [Slow][LinuxOnly] [90mtest/e2e/storage/testsuites/subpath.go:242[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]","total":34,"completed":9,"skipped":367,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral test/e2e/storage/framework/testsuite.go:51 Jan 14 00:47:22.361: INFO: Driver "test.csi.azure.com" does not support volume type "CSIInlineVolume" - skipping ... skipping 24 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:258[0m [36mDistro debian doesn't support ntfs -- skipping[0m test/e2e/storage/framework/testsuite.go:127 [90m------------------------------[0m ... skipping 8 lines ... 
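The "Checking for subpath error in container status" step above expects the kubelet to refuse the subPath that escapes the volume and to surface that on the container's status rather than starting the pod. A rough sketch of such a check; the matched substring is only an assumption, and the framework's own matcher may look at different fields.

```go
package sketch

import (
	"strings"

	corev1 "k8s.io/api/core/v1"
)

// hasSubPathError reports whether any container in the pod is stuck in a
// waiting state whose message mentions the subpath. The substring matched
// here is an assumption for illustration only.
func hasSubPathError(pod *corev1.Pod) bool {
	for _, cs := range pod.Status.ContainerStatuses {
		if w := cs.State.Waiting; w != nil && strings.Contains(strings.ToLower(w.Message), "subpath") {
			return true
		}
	}
	return false
}
```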
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] External Storage [Driver: test.csi.azure.com] test/e2e/storage/external/external.go:174 [Testpattern: Pre-provisioned PV (block volmode)] volumeMode test/e2e/storage/framework/testsuite.go:50 should fail to use a volume in a pod with mismatched mode [Slow] [BeforeEach] test/e2e/storage/testsuites/volumemode.go:299 Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping test/e2e/storage/external/external.go:262 ------------------------------ ... skipping 8 lines ... S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] External Storage [Driver: test.csi.azure.com] test/e2e/storage/external/external.go:174 [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath test/e2e/storage/framework/testsuite.go:50 should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach] test/e2e/storage/testsuites/subpath.go:269 Distro debian doesn't support ntfs -- skipping test/e2e/storage/framework/testsuite.go:127 ------------------------------ ... skipping 91 lines ... test/e2e/storage/external/external.go:174 [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:50 should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS] test/e2e/storage/testsuites/multivolume.go:378 ------------------------------ {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]","total":34,"completed":6,"skipped":662,"failed":1,"failures":["External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node"]} SSSS ------------------------------ External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node test/e2e/storage/testsuites/multivolume.go:423 ... skipping 88 lines ... 
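The next spec shares a single volume read-only between several pods on one node. This is roughly how a pod references an existing claim in read-only mode; the claim and volume names are placeholders, not values from this run.

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// readOnlyClaimVolume references an already-bound claim in read-only mode,
// the shape used when several pods share one volume without writing to it.
func readOnlyClaimVolume(claimName string) corev1.Volume {
	return corev1.Volume{
		Name: "shared-data",
		VolumeSource: corev1.VolumeSource{
			PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
				ClaimName: claimName,
				ReadOnly:  true,
			},
		},
	}
}
```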
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the single read-only volume from pods on the same node [90mtest/e2e/storage/testsuites/multivolume.go:423[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node","total":31,"completed":11,"skipped":837,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource][0m [0mvolume snapshot controller[0m [90m[0m [1mshould check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)[0m [37mtest/e2e/storage/testsuites/snapshottable.go:278[0m ... skipping 17 lines ... Jan 14 00:44:31.551: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comq4w78] to have phase Bound Jan 14 00:44:31.658: INFO: PersistentVolumeClaim test.csi.azure.comq4w78 found but phase is Pending instead of Bound. Jan 14 00:44:33.766: INFO: PersistentVolumeClaim test.csi.azure.comq4w78 found but phase is Pending instead of Bound. Jan 14 00:44:35.873: INFO: PersistentVolumeClaim test.csi.azure.comq4w78 found and phase=Bound (4.322401334s) [1mSTEP[0m: [init] starting a pod to use the claim [1mSTEP[0m: [init] check pod success Jan 14 00:44:36.302: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-wwzf8" in namespace "snapshotting-441" to be "Succeeded or Failed" Jan 14 00:44:36.409: INFO: Pod "pvc-snapshottable-tester-wwzf8": Phase="Pending", Reason="", readiness=false. Elapsed: 107.121228ms Jan 14 00:44:38.517: INFO: Pod "pvc-snapshottable-tester-wwzf8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215325799s Jan 14 00:44:40.625: INFO: Pod "pvc-snapshottable-tester-wwzf8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323489069s Jan 14 00:44:42.733: INFO: Pod "pvc-snapshottable-tester-wwzf8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.430932041s Jan 14 00:44:44.840: INFO: Pod "pvc-snapshottable-tester-wwzf8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.538591948s Jan 14 00:44:46.948: INFO: Pod "pvc-snapshottable-tester-wwzf8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.646046358s Jan 14 00:44:49.056: INFO: Pod "pvc-snapshottable-tester-wwzf8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.754500915s Jan 14 00:44:51.165: INFO: Pod "pvc-snapshottable-tester-wwzf8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.863183708s Jan 14 00:44:53.274: INFO: Pod "pvc-snapshottable-tester-wwzf8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.971958885s Jan 14 00:44:55.381: INFO: Pod "pvc-snapshottable-tester-wwzf8": Phase="Pending", Reason="", readiness=false. Elapsed: 19.079441056s Jan 14 00:44:57.489: INFO: Pod "pvc-snapshottable-tester-wwzf8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 21.187168168s [1mSTEP[0m: Saw pod success Jan 14 00:44:57.489: INFO: Pod "pvc-snapshottable-tester-wwzf8" satisfied condition "Succeeded or Failed" [1mSTEP[0m: [init] checking the claim Jan 14 00:44:57.597: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comq4w78] to have phase Bound Jan 14 00:44:57.703: INFO: PersistentVolumeClaim test.csi.azure.comq4w78 found and phase=Bound (106.770612ms) [1mSTEP[0m: [init] checking the PV [1mSTEP[0m: [init] deleting the pod Jan 14 00:44:58.027: INFO: Pod pvc-snapshottable-tester-wwzf8 has the following logs: ... skipping 13 lines ... Jan 14 00:45:05.431: INFO: received snapshotStatus map[boundVolumeSnapshotContentName:snapcontent-1032ad41-c280-4b6a-90c7-6a0b27f5aad3 creationTime:2023-01-14T00:45:01Z readyToUse:true restoreSize:5Gi] Jan 14 00:45:05.432: INFO: snapshotContentName snapcontent-1032ad41-c280-4b6a-90c7-6a0b27f5aad3 [1mSTEP[0m: checking the snapshot [1mSTEP[0m: checking the SnapshotContent [1mSTEP[0m: Modifying source data test [1mSTEP[0m: modifying the data in the source PVC Jan 14 00:45:05.860: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-kzgjr" in namespace "snapshotting-441" to be "Succeeded or Failed" Jan 14 00:45:05.967: INFO: Pod "pvc-snapshottable-data-tester-kzgjr": Phase="Pending", Reason="", readiness=false. Elapsed: 106.777409ms Jan 14 00:45:08.074: INFO: Pod "pvc-snapshottable-data-tester-kzgjr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214109451s Jan 14 00:45:10.181: INFO: Pod "pvc-snapshottable-data-tester-kzgjr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321032556s Jan 14 00:45:12.289: INFO: Pod "pvc-snapshottable-data-tester-kzgjr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.428950193s Jan 14 00:45:14.398: INFO: Pod "pvc-snapshottable-data-tester-kzgjr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.537525735s Jan 14 00:45:16.506: INFO: Pod "pvc-snapshottable-data-tester-kzgjr": Phase="Pending", Reason="", readiness=false. Elapsed: 10.645433338s ... skipping 13 lines ... Jan 14 00:45:46.021: INFO: Pod "pvc-snapshottable-data-tester-kzgjr": Phase="Pending", Reason="", readiness=false. Elapsed: 40.160783322s Jan 14 00:45:48.140: INFO: Pod "pvc-snapshottable-data-tester-kzgjr": Phase="Pending", Reason="", readiness=false. Elapsed: 42.279384924s Jan 14 00:45:50.248: INFO: Pod "pvc-snapshottable-data-tester-kzgjr": Phase="Pending", Reason="", readiness=false. Elapsed: 44.388220191s Jan 14 00:45:52.356: INFO: Pod "pvc-snapshottable-data-tester-kzgjr": Phase="Pending", Reason="", readiness=false. Elapsed: 46.495692142s Jan 14 00:45:54.464: INFO: Pod "pvc-snapshottable-data-tester-kzgjr": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 48.6038307s [1mSTEP[0m: Saw pod success Jan 14 00:45:54.464: INFO: Pod "pvc-snapshottable-data-tester-kzgjr" satisfied condition "Succeeded or Failed" Jan 14 00:45:54.680: INFO: Pod pvc-snapshottable-data-tester-kzgjr has the following logs: Jan 14 00:45:54.680: INFO: Deleting pod "pvc-snapshottable-data-tester-kzgjr" in namespace "snapshotting-441" Jan 14 00:45:54.794: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-kzgjr" to be fully deleted [1mSTEP[0m: creating a pvc from the snapshot [1mSTEP[0m: starting a pod to use the snapshot Jan 14 00:47:11.334: INFO: Running '/usr/local/bin/kubectl --server=https://kubetest-rpwnaldb.westeurope.cloudapp.azure.com --kubeconfig=/root/tmp3639031375/kubeconfig/kubeconfig.westeurope.json --namespace=snapshotting-441 exec restored-pvc-tester-zmft4 --namespace=snapshotting-441 -- cat /mnt/test/data' ... skipping 33 lines ... Jan 14 00:47:37.568: INFO: volumesnapshotcontents snapcontent-1032ad41-c280-4b6a-90c7-6a0b27f5aad3 has been found and is not deleted Jan 14 00:47:38.676: INFO: volumesnapshotcontents snapcontent-1032ad41-c280-4b6a-90c7-6a0b27f5aad3 has been found and is not deleted Jan 14 00:47:39.784: INFO: volumesnapshotcontents snapcontent-1032ad41-c280-4b6a-90c7-6a0b27f5aad3 has been found and is not deleted Jan 14 00:47:40.892: INFO: volumesnapshotcontents snapcontent-1032ad41-c280-4b6a-90c7-6a0b27f5aad3 has been found and is not deleted Jan 14 00:47:41.999: INFO: volumesnapshotcontents snapcontent-1032ad41-c280-4b6a-90c7-6a0b27f5aad3 has been found and is not deleted Jan 14 00:47:43.108: INFO: volumesnapshotcontents snapcontent-1032ad41-c280-4b6a-90c7-6a0b27f5aad3 has been found and is not deleted Jan 14 00:47:44.108: INFO: WaitUntil failed after reaching the timeout 30s [AfterEach] volume snapshot controller test/e2e/storage/testsuites/snapshottable.go:172 Jan 14 00:47:44.238: INFO: Pod restored-pvc-tester-zmft4 has the following logs: Jan 14 00:47:44.238: INFO: Deleting pod "restored-pvc-tester-zmft4" in namespace "snapshotting-441" Jan 14 00:47:44.345: INFO: Wait up to 5m0s for pod "restored-pvc-tester-zmft4" to be fully deleted Jan 14 00:47:46.560: INFO: deleting claim "snapshotting-441"/"pvc-s9w9g" ... skipping 31 lines ... [90mtest/e2e/storage/testsuites/snapshottable.go:113[0m [90mtest/e2e/storage/testsuites/snapshottable.go:176[0m should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent) [90mtest/e2e/storage/testsuites/snapshottable.go:278[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)","total":35,"completed":7,"skipped":495,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 Jan 14 00:48:04.745: INFO: Distro debian doesn't support ntfs -- skipping ... skipping 31 lines ... 
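VolumeSnapshot is a CRD, so code outside the e2e framework typically drives it through the dynamic client. A hedged sketch of creating a snapshot from an existing claim, with placeholder names; this is not the framework's implementation.

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

var volumeSnapshotGVR = schema.GroupVersionResource{
	Group:    "snapshot.storage.k8s.io",
	Version:  "v1",
	Resource: "volumesnapshots",
}

// createSnapshot creates a VolumeSnapshot of an existing claim through the
// dynamic client. The snapshot prefix and class name are placeholders.
func createSnapshot(dc dynamic.Interface, ns, className, pvcName string) error {
	snap := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "snapshot.storage.k8s.io/v1",
		"kind":       "VolumeSnapshot",
		"metadata":   map[string]interface{}{"generateName": "snapshot-", "namespace": ns},
		"spec": map[string]interface{}{
			"volumeSnapshotClassName": className,
			"source": map[string]interface{}{
				"persistentVolumeClaimName": pvcName,
			},
		},
	}}
	_, err := dc.Resource(volumeSnapshotGVR).Namespace(ns).Create(context.TODO(), snap, metav1.CreateOptions{})
	return err
}
```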
[It] should check snapshot fields, check restore correctly works, check deletion (ephemeral) test/e2e/storage/testsuites/snapshottable.go:177 Jan 14 00:46:05.609: INFO: Creating resource for dynamic PV Jan 14 00:46:05.609: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(test.csi.azure.com) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass snapshotting-2467-e2e-scqv8jn [1mSTEP[0m: [init] starting a pod to use the claim Jan 14 00:46:05.828: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-l2dwn" in namespace "snapshotting-2467" to be "Succeeded or Failed" Jan 14 00:46:05.935: INFO: Pod "pvc-snapshottable-tester-l2dwn": Phase="Pending", Reason="", readiness=false. Elapsed: 106.59261ms Jan 14 00:46:08.042: INFO: Pod "pvc-snapshottable-tester-l2dwn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214425088s Jan 14 00:46:10.150: INFO: Pod "pvc-snapshottable-tester-l2dwn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321919104s Jan 14 00:46:12.261: INFO: Pod "pvc-snapshottable-tester-l2dwn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.432546579s Jan 14 00:46:14.368: INFO: Pod "pvc-snapshottable-tester-l2dwn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.540091322s Jan 14 00:46:16.477: INFO: Pod "pvc-snapshottable-tester-l2dwn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.648917117s Jan 14 00:46:18.586: INFO: Pod "pvc-snapshottable-tester-l2dwn": Phase="Pending", Reason="", readiness=false. Elapsed: 12.757757141s Jan 14 00:46:20.694: INFO: Pod "pvc-snapshottable-tester-l2dwn": Phase="Pending", Reason="", readiness=false. Elapsed: 14.866204816s Jan 14 00:46:22.802: INFO: Pod "pvc-snapshottable-tester-l2dwn": Phase="Pending", Reason="", readiness=false. Elapsed: 16.973795161s Jan 14 00:46:24.909: INFO: Pod "pvc-snapshottable-tester-l2dwn": Phase="Pending", Reason="", readiness=false. Elapsed: 19.081413108s Jan 14 00:46:27.017: INFO: Pod "pvc-snapshottable-tester-l2dwn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.188771898s [1mSTEP[0m: Saw pod success Jan 14 00:46:27.017: INFO: Pod "pvc-snapshottable-tester-l2dwn" satisfied condition "Succeeded or Failed" [1mSTEP[0m: [init] checking the claim [1mSTEP[0m: creating a SnapshotClass [1mSTEP[0m: creating a dynamic VolumeSnapshot Jan 14 00:46:27.449: INFO: Waiting up to 5m0s for VolumeSnapshot snapshot-84vtv to become ready Jan 14 00:46:27.558: INFO: VolumeSnapshot snapshot-84vtv found but is not ready. Jan 14 00:46:29.666: INFO: VolumeSnapshot snapshot-84vtv found but is not ready. ... skipping 49 lines ... [90mtest/e2e/storage/testsuites/snapshottable.go:113[0m [90mtest/e2e/storage/testsuites/snapshottable.go:176[0m should check snapshot fields, check restore correctly works, check deletion (ephemeral) [90mtest/e2e/storage/testsuites/snapshottable.go:177[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)","total":35,"completed":11,"skipped":933,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes test/e2e/storage/framework/testsuite.go:51 Jan 14 00:48:05.170: INFO: Distro debian doesn't support ntfs -- skipping [AfterEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes test/e2e/framework/framework.go:188 ... skipping 64 lines ... 
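The "VolumeSnapshot snapshot-84vtv found but is not ready" lines above poll the snapshot's status.readyToUse field. A sketch of that wait via the dynamic client, with an illustrative helper name and interval (not the framework's code):

```go
package sketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/dynamic"
)

var snapshotGVR = schema.GroupVersionResource{
	Group: "snapshot.storage.k8s.io", Version: "v1", Resource: "volumesnapshots",
}

// waitForSnapshotReady polls status.readyToUse on a VolumeSnapshot, the
// condition behind the "found but is not ready" lines.
func waitForSnapshotReady(dc dynamic.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		snap, err := dc.Resource(snapshotGVR).Namespace(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		ready, found, err := unstructured.NestedBool(snap.Object, "status", "readyToUse")
		if err != nil || !found {
			return false, err
		}
		return ready, nil
	})
}
```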
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] volumeIO [90mtest/e2e/storage/framework/testsuite.go:50[0m should write files of various sizes, verify size, validate content [Slow] [90mtest/e2e/storage/testsuites/volume_io.go:149[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]","total":34,"completed":10,"skipped":611,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes test/e2e/storage/framework/testsuite.go:51 Jan 14 00:48:30.539: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 45 lines ... test/e2e/storage/testsuites/multivolume.go:250 [90m------------------------------[0m [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] subPath[0m [1mshould fail if subpath file is outside the volume [Slow][LinuxOnly][0m [37mtest/e2e/storage/testsuites/subpath.go:258[0m [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jan 14 00:47:22.929: INFO: >>> kubeConfig: /root/tmp3639031375/kubeconfig/kubeconfig.westeurope.json [1mSTEP[0m: Building a namespace api object, basename provisioning [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail if subpath file is outside the volume [Slow][LinuxOnly] test/e2e/storage/testsuites/subpath.go:258 Jan 14 00:47:23.674: INFO: Creating resource for dynamic PV Jan 14 00:47:23.674: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(test.csi.azure.com) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass provisioning-6858-e2e-sc4wrss [1mSTEP[0m: creating a claim Jan 14 00:47:23.781: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 14 00:47:23.889: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comvwcdm] to have phase Bound Jan 14 00:47:23.996: INFO: PersistentVolumeClaim test.csi.azure.comvwcdm found but phase is Pending instead of Bound. Jan 14 00:47:26.104: INFO: PersistentVolumeClaim test.csi.azure.comvwcdm found but phase is Pending instead of Bound. 
Jan 14 00:47:28.211: INFO: PersistentVolumeClaim test.csi.azure.comvwcdm found and phase=Bound (4.321609239s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-h982 [1mSTEP[0m: Checking for subpath error in container status Jan 14 00:47:50.746: INFO: Deleting pod "pod-subpath-test-dynamicpv-h982" in namespace "provisioning-6858" Jan 14 00:47:50.855: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-h982" to be fully deleted [1mSTEP[0m: Deleting pod Jan 14 00:47:53.069: INFO: Deleting pod "pod-subpath-test-dynamicpv-h982" in namespace "provisioning-6858" [1mSTEP[0m: Deleting pvc Jan 14 00:47:53.175: INFO: Deleting PersistentVolumeClaim "test.csi.azure.comvwcdm" ... skipping 16 lines ... [32m• [SLOW TEST:71.868 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should fail if subpath file is outside the volume [Slow][LinuxOnly] [90mtest/e2e/storage/testsuites/subpath.go:258[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]","total":34,"completed":7,"skipped":666,"failed":1,"failures":["External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node"]} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (ext4)] multiVolume [Slow][0m [1mshould access to two volumes with different volume mode and retain data across pod recreation on the same node[0m [37mtest/e2e/storage/testsuites/multivolume.go:209[0m ... skipping 207 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with different volume mode and retain data across pod recreation on the same node [90mtest/e2e/storage/testsuites/multivolume.go:209[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node","total":33,"completed":8,"skipped":997,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m Jan 14 00:48:54.516: INFO: Running AfterSuite actions on all nodes Jan 14 00:48:54.516: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 Jan 14 00:48:54.516: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 ... skipping 205 lines ... 
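The multiVolume specs that mix volume modes attach two claims to one pod: a Filesystem-mode claim through volumeMounts and a Block-mode claim through volumeDevices. A sketch of such a container, with placeholder image, paths, and volume names (not the framework's values):

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// mixedModeContainer consumes two claims from one pod: a Filesystem-mode
// claim via volumeMounts and a Block-mode claim via volumeDevices.
func mixedModeContainer() corev1.Container {
	return corev1.Container{
		Name:  "mixed-mode",
		Image: "registry.k8s.io/e2e-test-images/busybox:1.29", // placeholder image
		VolumeMounts: []corev1.VolumeMount{{
			Name:      "fs-vol", // backed by a PVC with volumeMode: Filesystem
			MountPath: "/mnt/volume1",
		}},
		VolumeDevices: []corev1.VolumeDevice{{
			Name:       "block-vol", // backed by a PVC with volumeMode: Block
			DevicePath: "/dev/e2e-block",
		}},
	}
}
```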
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with different volume mode and retain data across pod recreation on the same node [90mtest/e2e/storage/testsuites/multivolume.go:209[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node","total":31,"completed":12,"skipped":851,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow][0m [1mshould concurrently access the single read-only volume from pods on the same node[0m [37mtest/e2e/storage/testsuites/multivolume.go:423[0m ... skipping 88 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the single read-only volume from pods on the same node [90mtest/e2e/storage/testsuites/multivolume.go:423[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node","total":34,"completed":11,"skipped":669,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes test/e2e/storage/framework/testsuite.go:51 Jan 14 00:50:42.465: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 38 lines ... Jan 14 00:48:06.176: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.com79z9g] to have phase Bound Jan 14 00:48:06.283: INFO: PersistentVolumeClaim test.csi.azure.com79z9g found but phase is Pending instead of Bound. Jan 14 00:48:08.391: INFO: PersistentVolumeClaim test.csi.azure.com79z9g found but phase is Pending instead of Bound. Jan 14 00:48:10.498: INFO: PersistentVolumeClaim test.csi.azure.com79z9g found and phase=Bound (4.322255122s) [1mSTEP[0m: [init] starting a pod to use the claim [1mSTEP[0m: [init] check pod success Jan 14 00:48:10.928: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-mlsg2" in namespace "snapshotting-3754" to be "Succeeded or Failed" Jan 14 00:48:11.035: INFO: Pod "pvc-snapshottable-tester-mlsg2": Phase="Pending", Reason="", readiness=false. Elapsed: 106.964364ms Jan 14 00:48:13.142: INFO: Pod "pvc-snapshottable-tester-mlsg2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214164518s Jan 14 00:48:15.249: INFO: Pod "pvc-snapshottable-tester-mlsg2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321360129s Jan 14 00:48:17.357: INFO: Pod "pvc-snapshottable-tester-mlsg2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.42961545s Jan 14 00:48:19.465: INFO: Pod "pvc-snapshottable-tester-mlsg2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.537090966s Jan 14 00:48:21.572: INFO: Pod "pvc-snapshottable-tester-mlsg2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.644466805s ... skipping 4 lines ... Jan 14 00:48:32.114: INFO: Pod "pvc-snapshottable-tester-mlsg2": Phase="Pending", Reason="", readiness=false. Elapsed: 21.185873486s Jan 14 00:48:34.226: INFO: Pod "pvc-snapshottable-tester-mlsg2": Phase="Pending", Reason="", readiness=false. Elapsed: 23.298069276s Jan 14 00:48:36.334: INFO: Pod "pvc-snapshottable-tester-mlsg2": Phase="Pending", Reason="", readiness=false. Elapsed: 25.405986989s Jan 14 00:48:38.442: INFO: Pod "pvc-snapshottable-tester-mlsg2": Phase="Pending", Reason="", readiness=false. Elapsed: 27.514246886s Jan 14 00:48:40.560: INFO: Pod "pvc-snapshottable-tester-mlsg2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.632619478s [1mSTEP[0m: Saw pod success Jan 14 00:48:40.561: INFO: Pod "pvc-snapshottable-tester-mlsg2" satisfied condition "Succeeded or Failed" [1mSTEP[0m: [init] checking the claim Jan 14 00:48:40.668: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.com79z9g] to have phase Bound Jan 14 00:48:40.776: INFO: PersistentVolumeClaim test.csi.azure.com79z9g found and phase=Bound (107.491041ms) [1mSTEP[0m: [init] checking the PV [1mSTEP[0m: [init] deleting the pod Jan 14 00:48:41.100: INFO: Pod pvc-snapshottable-tester-mlsg2 has the following logs: ... skipping 33 lines ... Jan 14 00:48:49.495: INFO: WaitUntil finished successfully after 107.505999ms [1mSTEP[0m: getting the snapshot and snapshot content [1mSTEP[0m: checking the snapshot [1mSTEP[0m: checking the SnapshotContent [1mSTEP[0m: Modifying source data test [1mSTEP[0m: modifying the data in the source PVC Jan 14 00:48:50.035: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-data-tester-xzsf5" in namespace "snapshotting-3754" to be "Succeeded or Failed" Jan 14 00:48:50.142: INFO: Pod "pvc-snapshottable-data-tester-xzsf5": Phase="Pending", Reason="", readiness=false. Elapsed: 107.098373ms Jan 14 00:48:52.250: INFO: Pod "pvc-snapshottable-data-tester-xzsf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214829499s Jan 14 00:48:54.358: INFO: Pod "pvc-snapshottable-data-tester-xzsf5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323062417s Jan 14 00:48:56.466: INFO: Pod "pvc-snapshottable-data-tester-xzsf5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.430759103s Jan 14 00:48:58.574: INFO: Pod "pvc-snapshottable-data-tester-xzsf5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.539029996s Jan 14 00:49:00.683: INFO: Pod "pvc-snapshottable-data-tester-xzsf5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.647869863s ... skipping 6 lines ... Jan 14 00:49:15.443: INFO: Pod "pvc-snapshottable-data-tester-xzsf5": Phase="Pending", Reason="", readiness=false. Elapsed: 25.40829664s Jan 14 00:49:17.550: INFO: Pod "pvc-snapshottable-data-tester-xzsf5": Phase="Pending", Reason="", readiness=false. Elapsed: 27.515611271s Jan 14 00:49:19.659: INFO: Pod "pvc-snapshottable-data-tester-xzsf5": Phase="Pending", Reason="", readiness=false. Elapsed: 29.624221962s Jan 14 00:49:21.767: INFO: Pod "pvc-snapshottable-data-tester-xzsf5": Phase="Pending", Reason="", readiness=false. Elapsed: 31.731740848s Jan 14 00:49:23.875: INFO: Pod "pvc-snapshottable-data-tester-xzsf5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 33.840525407s [1mSTEP[0m: Saw pod success Jan 14 00:49:23.875: INFO: Pod "pvc-snapshottable-data-tester-xzsf5" satisfied condition "Succeeded or Failed" Jan 14 00:49:24.114: INFO: Pod pvc-snapshottable-data-tester-xzsf5 has the following logs: Jan 14 00:49:24.114: INFO: Deleting pod "pvc-snapshottable-data-tester-xzsf5" in namespace "snapshotting-3754" Jan 14 00:49:24.224: INFO: Wait up to 5m0s for pod "pvc-snapshottable-data-tester-xzsf5" to be fully deleted [1mSTEP[0m: creating a pvc from the snapshot [1mSTEP[0m: starting a pod to use the snapshot Jan 14 00:50:14.767: INFO: Running '/usr/local/bin/kubectl --server=https://kubetest-rpwnaldb.westeurope.cloudapp.azure.com --kubeconfig=/root/tmp3639031375/kubeconfig/kubeconfig.westeurope.json --namespace=snapshotting-3754 exec restored-pvc-tester-nz7vv --namespace=snapshotting-3754 -- cat /mnt/test/data' ... skipping 33 lines ... Jan 14 00:50:40.979: INFO: volumesnapshotcontents pre-provisioned-snapcontent-57020885-0872-470b-865a-aed9535cb799 has been found and is not deleted Jan 14 00:50:42.087: INFO: volumesnapshotcontents pre-provisioned-snapcontent-57020885-0872-470b-865a-aed9535cb799 has been found and is not deleted Jan 14 00:50:43.195: INFO: volumesnapshotcontents pre-provisioned-snapcontent-57020885-0872-470b-865a-aed9535cb799 has been found and is not deleted Jan 14 00:50:44.303: INFO: volumesnapshotcontents pre-provisioned-snapcontent-57020885-0872-470b-865a-aed9535cb799 has been found and is not deleted Jan 14 00:50:45.412: INFO: volumesnapshotcontents pre-provisioned-snapcontent-57020885-0872-470b-865a-aed9535cb799 has been found and is not deleted Jan 14 00:50:46.520: INFO: volumesnapshotcontents pre-provisioned-snapcontent-57020885-0872-470b-865a-aed9535cb799 has been found and is not deleted Jan 14 00:50:47.520: INFO: WaitUntil failed after reaching the timeout 30s [AfterEach] volume snapshot controller test/e2e/storage/testsuites/snapshottable.go:172 Jan 14 00:50:47.660: INFO: Pod restored-pvc-tester-nz7vv has the following logs: Jan 14 00:50:47.660: INFO: Deleting pod "restored-pvc-tester-nz7vv" in namespace "snapshotting-3754" Jan 14 00:50:47.768: INFO: deleting claim "snapshotting-3754"/"pvc-l8qp5" Jan 14 00:50:47.883: INFO: deleting snapshot "snapshotting-3754"/"pre-provisioned-snapshot-57020885-0872-470b-865a-aed9535cb799" ... skipping 30 lines ... 
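In the deletion check above, the pre-provisioned VolumeSnapshotContent is still found after the VolumeSnapshot is deleted and the 30s WaitUntil runs out, which is consistent with a Retain deletion policy: the content object and the backing snapshot are kept. A minimal sketch of a pre-provisioned snapshot pair with that policy (all names and the snapshot handle are illustrative, not taken from this run):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: example-snapcontent            # illustrative
spec:
  deletionPolicy: Retain               # keep this content (and the backing snapshot) when the VolumeSnapshot is deleted
  driver: test.csi.azure.com
  source:
    snapshotHandle: azure-snapshot-resource-id-goes-here   # illustrative placeholder for an existing snapshot's ID
  volumeSnapshotRef:
    name: example-snapshot
    namespace: snapshotting-example    # illustrative namespace
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: example-snapshot
  namespace: snapshotting-example
spec:
  source:
    volumeSnapshotContentName: example-snapcontent   # bind to the pre-provisioned content above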
[90mtest/e2e/storage/testsuites/snapshottable.go:113[0m [90mtest/e2e/storage/testsuites/snapshottable.go:176[0m should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent) [90mtest/e2e/storage/testsuites/snapshottable.go:278[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)","total":35,"completed":12,"skipped":965,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m Jan 14 00:51:05.998: INFO: Running AfterSuite actions on all nodes Jan 14 00:51:05.998: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 Jan 14 00:51:05.998: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 ... skipping 213 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with the same volume mode and retain data across pod recreation on the same node [90mtest/e2e/storage/testsuites/multivolume.go:138[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node","total":35,"completed":8,"skipped":574,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] ... skipping 55 lines ... Jan 14 00:50:38.226: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comxkv49] to have phase Bound Jan 14 00:50:38.333: INFO: PersistentVolumeClaim test.csi.azure.comxkv49 found but phase is Pending instead of Bound. Jan 14 00:50:40.443: INFO: PersistentVolumeClaim test.csi.azure.comxkv49 found but phase is Pending instead of Bound. Jan 14 00:50:42.550: INFO: PersistentVolumeClaim test.csi.azure.comxkv49 found and phase=Bound (4.324331337s) [1mSTEP[0m: Creating pod pod-subpath-test-dynamicpv-q9zv [1mSTEP[0m: Creating a pod to test multi_subpath Jan 14 00:50:42.874: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-q9zv" in namespace "provisioning-5244" to be "Succeeded or Failed" Jan 14 00:50:42.982: INFO: Pod "pod-subpath-test-dynamicpv-q9zv": Phase="Pending", Reason="", readiness=false. Elapsed: 107.042647ms Jan 14 00:50:45.091: INFO: Pod "pod-subpath-test-dynamicpv-q9zv": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.216306894s Jan 14 00:50:47.199: INFO: Pod "pod-subpath-test-dynamicpv-q9zv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32427799s Jan 14 00:50:49.310: INFO: Pod "pod-subpath-test-dynamicpv-q9zv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.435318088s Jan 14 00:50:51.418: INFO: Pod "pod-subpath-test-dynamicpv-q9zv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.543598096s Jan 14 00:50:53.526: INFO: Pod "pod-subpath-test-dynamicpv-q9zv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.650986262s Jan 14 00:50:55.636: INFO: Pod "pod-subpath-test-dynamicpv-q9zv": Phase="Pending", Reason="", readiness=false. Elapsed: 12.761420735s Jan 14 00:50:57.744: INFO: Pod "pod-subpath-test-dynamicpv-q9zv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.869592288s [1mSTEP[0m: Saw pod success Jan 14 00:50:57.744: INFO: Pod "pod-subpath-test-dynamicpv-q9zv" satisfied condition "Succeeded or Failed" Jan 14 00:50:57.851: INFO: Trying to get logs from node k8s-agentpool1-35908214-vmss000000 pod pod-subpath-test-dynamicpv-q9zv container test-container-subpath-dynamicpv-q9zv: <nil> [1mSTEP[0m: delete the pod Jan 14 00:50:58.098: INFO: Waiting for pod pod-subpath-test-dynamicpv-q9zv to disappear Jan 14 00:50:58.204: INFO: Pod pod-subpath-test-dynamicpv-q9zv no longer exists [1mSTEP[0m: Deleting pod Jan 14 00:50:58.204: INFO: Deleting pod "pod-subpath-test-dynamicpv-q9zv" in namespace "provisioning-5244" ... skipping 25 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should support creating multiple subpath from same volumes [Slow] [90mtest/e2e/storage/testsuites/subpath.go:296[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]","total":31,"completed":13,"skipped":868,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow][0m [1mshould access to two volumes with the same volume mode and retain data across pod recreation on different node[0m [37mtest/e2e/storage/testsuites/multivolume.go:168[0m ... skipping 192 lines ... 
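The multi_subpath case above mounts a single claim at several container paths through volumeMounts[].subPath. A rough sketch of that shape, with illustrative names rather than the suite's generated ones:

apiVersion: v1
kind: Pod
metadata:
  name: example-subpath-pod            # illustrative
spec:
  containers:
  - name: test-container
    image: busybox                     # illustrative image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data                       # same volume mounted twice ...
      mountPath: /mnt/path1
      subPath: dir1                    # ... at different subPaths
    - name: data
      mountPath: /mnt/path2
      subPath: dir2
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: example-claim         # illustrative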
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with the same volume mode and retain data across pod recreation on different node [90mtest/e2e/storage/testsuites/multivolume.go:168[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node","total":34,"completed":8,"skipped":676,"failed":1,"failures":["External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node"]} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral[0m [1mshould support multiple inline ephemeral volumes[0m [37mtest/e2e/storage/testsuites/ephemeral.go:254[0m ... skipping 51 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral [90mtest/e2e/storage/framework/testsuite.go:50[0m should support multiple inline ephemeral volumes [90mtest/e2e/storage/testsuites/ephemeral.go:254[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":35,"completed":9,"skipped":645,"failed":0} [36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow][0m [1mshould concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS][0m [37mtest/e2e/storage/testsuites/multivolume.go:323[0m ... skipping 102 lines ... 
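The "Generic Ephemeral-volume ... inline ephemeral volumes" cases above declare per-pod claims directly in the pod spec; the cluster creates a PVC from the embedded template and deletes it with the pod. A minimal sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: example-ephemeral-pod          # illustrative
spec:
  containers:
  - name: app
    image: busybox                     # illustrative image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/scratch
  volumes:
  - name: scratch
    ephemeral:
      volumeClaimTemplate:             # a PVC is created for this pod and garbage-collected with it
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: example-e2e-sc   # illustrative
          resources:
            requests:
              storage: 5Gi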
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS] [90mtest/e2e/storage/testsuites/multivolume.go:323[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]","total":34,"completed":12,"skipped":695,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 Jan 14 00:53:56.472: INFO: Distro debian doesn't support ntfs -- skipping ... skipping 13 lines ... 
test/e2e/storage/framework/testsuite.go:127 [90m------------------------------[0m [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (filesystem volmode)] volumeMode[0m [1mshould fail to use a volume in a pod with mismatched mode [Slow][0m [37mtest/e2e/storage/testsuites/volumemode.go:299[0m [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jan 14 00:53:35.147: INFO: >>> kubeConfig: /root/tmp3639031375/kubeconfig/kubeconfig.westeurope.json [1mSTEP[0m: Building a namespace api object, basename volumemode [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail to use a volume in a pod with mismatched mode [Slow] test/e2e/storage/testsuites/volumemode.go:299 Jan 14 00:53:35.900: INFO: Creating resource for dynamic PV Jan 14 00:53:35.901: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(test.csi.azure.com) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass volumemode-9058-e2e-scpzkhx [1mSTEP[0m: creating a claim Jan 14 00:53:36.118: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comnbwcj] to have phase Bound Jan 14 00:53:36.226: INFO: PersistentVolumeClaim test.csi.azure.comnbwcj found but phase is Pending instead of Bound. Jan 14 00:53:38.334: INFO: PersistentVolumeClaim test.csi.azure.comnbwcj found but phase is Pending instead of Bound. Jan 14 00:53:40.443: INFO: PersistentVolumeClaim test.csi.azure.comnbwcj found and phase=Bound (4.324009839s) [1mSTEP[0m: Creating pod [1mSTEP[0m: Waiting for the pod to fail Jan 14 00:53:43.098: INFO: Deleting pod "pod-c4e8fde3-c894-41d0-b30e-3b0c735e4631" in namespace "volumemode-9058" Jan 14 00:53:43.211: INFO: Wait up to 5m0s for pod "pod-c4e8fde3-c894-41d0-b30e-3b0c735e4631" to be fully deleted [1mSTEP[0m: Deleting pvc Jan 14 00:53:45.428: INFO: Deleting PersistentVolumeClaim "test.csi.azure.comnbwcj" Jan 14 00:53:45.539: INFO: Waiting up to 5m0s for PersistentVolume pvc-6eacb47c-c2a0-4ce1-908f-229b685d388f to get deleted Jan 14 00:53:45.647: INFO: PersistentVolume pvc-6eacb47c-c2a0-4ce1-908f-229b685d388f found and phase=Released (107.284889ms) ... skipping 20 lines ... 
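In the mismatched-mode case above, the claim is provisioned with the filesystem volume mode while the pod asks for it as a raw block device, so the kubelet refuses the mismatch and the suite only waits for the pod to fail. Roughly, with illustrative names:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-fs-claim               # illustrative
spec:
  accessModes: ["ReadWriteOnce"]
  volumeMode: Filesystem               # the claim is a filesystem volume
  storageClassName: example-e2e-sc
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: example-mismatch-pod           # illustrative
spec:
  containers:
  - name: app
    image: busybox                     # illustrative image
    command: ["sh", "-c", "sleep 3600"]
    volumeDevices:                     # block-style use of a Filesystem claim: the mode mismatch makes the pod fail
    - name: data
      devicePath: /dev/example-block
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: example-fs-claim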
[32m• [SLOW TEST:82.563 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (filesystem volmode)] volumeMode [90mtest/e2e/storage/framework/testsuite.go:50[0m should fail to use a volume in a pod with mismatched mode [Slow] [90mtest/e2e/storage/testsuites/volumemode.go:299[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]","total":35,"completed":10,"skipped":646,"failed":0} [36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (block volmode)] multiVolume [Slow][0m [1mshould access to two volumes with different volume mode and retain data across pod recreation on the same node[0m [37mtest/e2e/storage/testsuites/multivolume.go:209[0m ... skipping 188 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with different volume mode and retain data across pod recreation on the same node [90mtest/e2e/storage/testsuites/multivolume.go:209[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node","total":31,"completed":14,"skipped":947,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (ext4)] multiVolume [Slow][0m [1mshould concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS][0m [37mtest/e2e/storage/testsuites/multivolume.go:323[0m ... skipping 108 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS] [90mtest/e2e/storage/testsuites/multivolume.go:323[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]","total":34,"completed":9,"skipped":727,"failed":1,"failures":["External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node"]} [36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 Jan 14 00:56:21.747: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... 
skipping 3 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:280[0m [36mDriver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping[0m test/e2e/storage/external/external.go:262 [90m------------------------------[0m ... skipping 8 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:242[0m [36mDistro debian doesn't support ntfs -- skipping[0m test/e2e/storage/framework/testsuite.go:127 [90m------------------------------[0m ... skipping 280 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy [90mtest/e2e/storage/framework/testsuite.go:50[0m (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents [90mtest/e2e/storage/testsuites/fsgroupchangepolicy.go:216[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents","total":35,"completed":11,"skipped":649,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 Jan 14 00:57:22.949: INFO: Driver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping ... skipping 3 lines ... [36m[1mS [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds][0m External Storage [Driver: test.csi.azure.com] [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Pre-provisioned PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m [36m[1mshould fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach][0m [90mtest/e2e/storage/testsuites/subpath.go:269[0m [36mDriver "test.csi.azure.com" does not support volume type "PreprovisionedPV" - skipping[0m test/e2e/storage/external/external.go:262 [90m------------------------------[0m ... skipping 106 lines ... 
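The fsgroupchangepolicy cases above drive ownership of the volume contents through the pod-level fsGroup, with fsGroupChangePolicy: OnRootMismatch so the recursive ownership change is skipped when the volume root already has the expected group. A sketch with illustrative values:

apiVersion: v1
kind: Pod
metadata:
  name: example-fsgroup-pod            # illustrative
spec:
  securityContext:
    fsGroup: 1000                      # group applied to the volume's contents at mount time
    fsGroupChangePolicy: "OnRootMismatch"   # only re-apply ownership if the volume root does not already match
  containers:
  - name: app
    image: busybox                     # illustrative image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /mnt/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: example-claim         # illustrative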
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] provisioning [90mtest/e2e/storage/framework/testsuite.go:50[0m should provision storage with pvc data source [90mtest/e2e/storage/testsuites/provisioning.go:421[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source","total":31,"completed":15,"skipped":978,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (xfs)][Slow] volumes[0m [1mshould allow exec of files on the volume[0m [37mtest/e2e/storage/testsuites/volumes.go:198[0m ... skipping 17 lines ... Jan 14 00:56:22.782: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.comrnhmp] to have phase Bound Jan 14 00:56:22.889: INFO: PersistentVolumeClaim test.csi.azure.comrnhmp found but phase is Pending instead of Bound. Jan 14 00:56:24.998: INFO: PersistentVolumeClaim test.csi.azure.comrnhmp found but phase is Pending instead of Bound. Jan 14 00:56:27.106: INFO: PersistentVolumeClaim test.csi.azure.comrnhmp found and phase=Bound (4.324770455s) [1mSTEP[0m: Creating pod exec-volume-test-dynamicpv-pppf [1mSTEP[0m: Creating a pod to test exec-volume-test Jan 14 00:56:27.430: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-pppf" in namespace "volume-4789" to be "Succeeded or Failed" Jan 14 00:56:27.537: INFO: Pod "exec-volume-test-dynamicpv-pppf": Phase="Pending", Reason="", readiness=false. Elapsed: 107.424796ms Jan 14 00:56:29.646: INFO: Pod "exec-volume-test-dynamicpv-pppf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216013067s Jan 14 00:56:31.754: INFO: Pod "exec-volume-test-dynamicpv-pppf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324310787s Jan 14 00:56:33.862: INFO: Pod "exec-volume-test-dynamicpv-pppf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.432478256s Jan 14 00:56:35.971: INFO: Pod "exec-volume-test-dynamicpv-pppf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.540738468s Jan 14 00:56:38.080: INFO: Pod "exec-volume-test-dynamicpv-pppf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.649700175s ... skipping 2 lines ... Jan 14 00:56:44.410: INFO: Pod "exec-volume-test-dynamicpv-pppf": Phase="Pending", Reason="", readiness=false. Elapsed: 16.979907358s Jan 14 00:56:46.518: INFO: Pod "exec-volume-test-dynamicpv-pppf": Phase="Pending", Reason="", readiness=false. Elapsed: 19.087959145s Jan 14 00:56:48.627: INFO: Pod "exec-volume-test-dynamicpv-pppf": Phase="Pending", Reason="", readiness=false. Elapsed: 21.197157372s Jan 14 00:56:50.736: INFO: Pod "exec-volume-test-dynamicpv-pppf": Phase="Pending", Reason="", readiness=false. Elapsed: 23.305948611s Jan 14 00:56:52.845: INFO: Pod "exec-volume-test-dynamicpv-pppf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 25.415497916s [1mSTEP[0m: Saw pod success Jan 14 00:56:52.846: INFO: Pod "exec-volume-test-dynamicpv-pppf" satisfied condition "Succeeded or Failed" Jan 14 00:56:52.954: INFO: Trying to get logs from node k8s-agentpool1-35908214-vmss000001 pod exec-volume-test-dynamicpv-pppf container exec-container-dynamicpv-pppf: <nil> [1mSTEP[0m: delete the pod Jan 14 00:56:53.216: INFO: Waiting for pod exec-volume-test-dynamicpv-pppf to disappear Jan 14 00:56:53.324: INFO: Pod exec-volume-test-dynamicpv-pppf no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-dynamicpv-pppf Jan 14 00:56:53.324: INFO: Deleting pod "exec-volume-test-dynamicpv-pppf" in namespace "volume-4789" ... skipping 27 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (xfs)][Slow] volumes [90mtest/e2e/storage/framework/testsuite.go:50[0m should allow exec of files on the volume [90mtest/e2e/storage/testsuites/volumes.go:198[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume","total":34,"completed":10,"skipped":783,"failed":1,"failures":["External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node"]} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 Jan 14 00:58:05.800: INFO: Driver "test.csi.azure.com" does not support volume type "InlineVolume" - skipping ... skipping 110 lines ... [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (block volmode)] multiVolume [Slow][0m [1mshould concurrently access the single volume from pods on the same node[0m [37mtest/e2e/storage/testsuites/multivolume.go:298[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node","total":34,"completed":13,"skipped":839,"failed":0} [BeforeEach] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] test/e2e/framework/framework.go:187 [1mSTEP[0m: Creating a kubernetes client Jan 14 00:56:33.663: INFO: >>> kubeConfig: /root/tmp3639031375/kubeconfig/kubeconfig.westeurope.json ... skipping 150 lines ... 
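The "should provision storage with pvc data source" case logged above clones an existing claim by pointing a new PVC's dataSource at it. A minimal sketch of that request shape, with illustrative names:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-cloned-claim           # illustrative
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: example-e2e-sc     # illustrative
  dataSource:
    kind: PersistentVolumeClaim
    name: example-claim                # existing PVC in the same namespace to clone from
  resources:
    requests:
      storage: 5Gi                     # at least as large as the source claim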
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should concurrently access the single volume from pods on the same node [90mtest/e2e/storage/testsuites/multivolume.go:298[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node","total":34,"completed":14,"skipped":839,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] ... skipping 69 lines ... test/e2e/framework/framework.go:188 Jan 14 00:58:45.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "volumelimits-9811" for this suite. [32m•[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits","total":34,"completed":15,"skipped":882,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral[0m [1mshould create read-only inline ephemeral volume[0m [37mtest/e2e/storage/testsuites/ephemeral.go:175[0m ... skipping 46 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral [90mtest/e2e/storage/framework/testsuite.go:50[0m should create read-only inline ephemeral volume [90mtest/e2e/storage/testsuites/ephemeral.go:175[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume","total":31,"completed":16,"skipped":989,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m Jan 14 00:59:36.920: INFO: Running AfterSuite actions on all nodes Jan 14 00:59:36.920: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 Jan 14 00:59:36.920: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 ... skipping 27 lines ... Jan 14 00:58:46.580: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [test.csi.azure.com5k5nl] to have phase Bound Jan 14 00:58:46.688: INFO: PersistentVolumeClaim test.csi.azure.com5k5nl found but phase is Pending instead of Bound. Jan 14 00:58:48.796: INFO: PersistentVolumeClaim test.csi.azure.com5k5nl found but phase is Pending instead of Bound. 
Jan 14 00:58:50.904: INFO: PersistentVolumeClaim test.csi.azure.com5k5nl found and phase=Bound (4.324312096s) [1mSTEP[0m: Creating pod exec-volume-test-dynamicpv-pvt5 [1mSTEP[0m: Creating a pod to test exec-volume-test Jan 14 00:58:51.228: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-pvt5" in namespace "volume-3341" to be "Succeeded or Failed" Jan 14 00:58:51.336: INFO: Pod "exec-volume-test-dynamicpv-pvt5": Phase="Pending", Reason="", readiness=false. Elapsed: 107.705348ms Jan 14 00:58:53.444: INFO: Pod "exec-volume-test-dynamicpv-pvt5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216139077s Jan 14 00:58:55.557: INFO: Pod "exec-volume-test-dynamicpv-pvt5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328632828s Jan 14 00:58:57.665: INFO: Pod "exec-volume-test-dynamicpv-pvt5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.437201924s Jan 14 00:58:59.775: INFO: Pod "exec-volume-test-dynamicpv-pvt5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.547165383s Jan 14 00:59:01.883: INFO: Pod "exec-volume-test-dynamicpv-pvt5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.655184846s Jan 14 00:59:03.992: INFO: Pod "exec-volume-test-dynamicpv-pvt5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.763483516s Jan 14 00:59:06.100: INFO: Pod "exec-volume-test-dynamicpv-pvt5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.87186544s Jan 14 00:59:08.210: INFO: Pod "exec-volume-test-dynamicpv-pvt5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.981396852s Jan 14 00:59:10.318: INFO: Pod "exec-volume-test-dynamicpv-pvt5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.089628708s [1mSTEP[0m: Saw pod success Jan 14 00:59:10.318: INFO: Pod "exec-volume-test-dynamicpv-pvt5" satisfied condition "Succeeded or Failed" Jan 14 00:59:10.426: INFO: Trying to get logs from node k8s-agentpool1-35908214-vmss000001 pod exec-volume-test-dynamicpv-pvt5 container exec-container-dynamicpv-pvt5: <nil> [1mSTEP[0m: delete the pod Jan 14 00:59:10.653: INFO: Waiting for pod exec-volume-test-dynamicpv-pvt5 to disappear Jan 14 00:59:10.760: INFO: Pod exec-volume-test-dynamicpv-pvt5 no longer exists [1mSTEP[0m: Deleting pod exec-volume-test-dynamicpv-pvt5 Jan 14 00:59:10.760: INFO: Deleting pod "exec-volume-test-dynamicpv-pvt5" in namespace "volume-3341" ... skipping 21 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext4)] volumes [90mtest/e2e/storage/framework/testsuite.go:50[0m should allow exec of files on the volume [90mtest/e2e/storage/testsuites/volumes.go:198[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume","total":34,"completed":16,"skipped":887,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (ext4)] multiVolume [Slow][0m [1mshould access to two volumes with the same volume mode and retain data across pod recreation on different node[0m [37mtest/e2e/storage/testsuites/multivolume.go:168[0m ... skipping 190 lines ... 
[90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with the same volume mode and retain data across pod recreation on different node [90mtest/e2e/storage/testsuites/multivolume.go:168[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node","total":35,"completed":12,"skipped":690,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy[0m [1m(OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents[0m [37mtest/e2e/storage/testsuites/fsgroupchangepolicy.go:216[0m ... skipping 113 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy [90mtest/e2e/storage/framework/testsuite.go:50[0m (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents [90mtest/e2e/storage/testsuites/fsgroupchangepolicy.go:216[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents","total":34,"completed":17,"skipped":911,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow][0m [1mshould access to two volumes with the same 
volume mode and retain data across pod recreation on different node[0m [37mtest/e2e/storage/testsuites/multivolume.go:168[0m ... skipping 190 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] [90mtest/e2e/storage/framework/testsuite.go:50[0m should access to two volumes with the same volume mode and retain data across pod recreation on different node [90mtest/e2e/storage/testsuites/multivolume.go:168[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node","total":35,"completed":13,"skipped":766,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0mExternal Storage [Driver: test.csi.azure.com][0m [90m[Testpattern: Dynamic PV (default fs)] subPath[0m [1mshould support restarting containers using directory as subpath [Slow][0m [37mtest/e2e/storage/testsuites/subpath.go:322[0m ... skipping 65 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Dynamic PV (default fs)] subPath [90mtest/e2e/storage/framework/testsuite.go:50[0m should support restarting containers using directory as subpath [Slow] [90mtest/e2e/storage/testsuites/subpath.go:322[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]","total":35,"completed":14,"skipped":770,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] ... skipping 126 lines ... [It] should check snapshot fields, check restore correctly works, check deletion (ephemeral) test/e2e/storage/testsuites/snapshottable.go:177 Jan 14 01:05:23.960: INFO: Creating resource for dynamic PV Jan 14 01:05:23.960: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(test.csi.azure.com) supported size:{ 1Mi} [1mSTEP[0m: creating a StorageClass snapshotting-64-e2e-sc8tmnl [1mSTEP[0m: [init] starting a pod to use the claim Jan 14 01:05:24.179: INFO: Waiting up to 15m0s for pod "pvc-snapshottable-tester-m48vx" in namespace "snapshotting-64" to be "Succeeded or Failed" Jan 14 01:05:24.286: INFO: Pod "pvc-snapshottable-tester-m48vx": Phase="Pending", Reason="", readiness=false. Elapsed: 107.730128ms Jan 14 01:05:26.395: INFO: Pod "pvc-snapshottable-tester-m48vx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216702655s Jan 14 01:05:28.505: INFO: Pod "pvc-snapshottable-tester-m48vx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326084074s Jan 14 01:05:30.615: INFO: Pod "pvc-snapshottable-tester-m48vx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.436099419s Jan 14 01:05:32.723: INFO: Pod "pvc-snapshottable-tester-m48vx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.544634147s Jan 14 01:05:34.832: INFO: Pod "pvc-snapshottable-tester-m48vx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.653689265s ... skipping 2 lines ... Jan 14 01:05:41.160: INFO: Pod "pvc-snapshottable-tester-m48vx": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.981628403s Jan 14 01:05:43.273: INFO: Pod "pvc-snapshottable-tester-m48vx": Phase="Pending", Reason="", readiness=false. Elapsed: 19.093799601s Jan 14 01:05:45.381: INFO: Pod "pvc-snapshottable-tester-m48vx": Phase="Pending", Reason="", readiness=false. Elapsed: 21.202650692s Jan 14 01:05:47.490: INFO: Pod "pvc-snapshottable-tester-m48vx": Phase="Pending", Reason="", readiness=false. Elapsed: 23.311092168s Jan 14 01:05:49.599: INFO: Pod "pvc-snapshottable-tester-m48vx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.419803496s [1mSTEP[0m: Saw pod success Jan 14 01:05:49.599: INFO: Pod "pvc-snapshottable-tester-m48vx" satisfied condition "Succeeded or Failed" [1mSTEP[0m: [init] checking the claim [1mSTEP[0m: creating a SnapshotClass [1mSTEP[0m: creating a dynamic VolumeSnapshot Jan 14 01:05:50.035: INFO: Waiting up to 5m0s for VolumeSnapshot snapshot-l6gfj to become ready Jan 14 01:05:50.143: INFO: VolumeSnapshot snapshot-l6gfj found but is not ready. Jan 14 01:05:52.252: INFO: VolumeSnapshot snapshot-l6gfj found but is not ready. ... skipping 40 lines ... Jan 14 01:06:54.173: INFO: volumesnapshotcontents snapcontent-5a462d6f-ec96-4dd9-ad6a-8d0baba0808e has been found and is not deleted Jan 14 01:06:55.282: INFO: volumesnapshotcontents snapcontent-5a462d6f-ec96-4dd9-ad6a-8d0baba0808e has been found and is not deleted Jan 14 01:06:56.391: INFO: volumesnapshotcontents snapcontent-5a462d6f-ec96-4dd9-ad6a-8d0baba0808e has been found and is not deleted Jan 14 01:06:57.500: INFO: volumesnapshotcontents snapcontent-5a462d6f-ec96-4dd9-ad6a-8d0baba0808e has been found and is not deleted Jan 14 01:06:58.609: INFO: volumesnapshotcontents snapcontent-5a462d6f-ec96-4dd9-ad6a-8d0baba0808e has been found and is not deleted Jan 14 01:06:59.718: INFO: volumesnapshotcontents snapcontent-5a462d6f-ec96-4dd9-ad6a-8d0baba0808e has been found and is not deleted Jan 14 01:07:00.718: INFO: WaitUntil failed after reaching the timeout 30s [AfterEach] volume snapshot controller test/e2e/storage/testsuites/snapshottable.go:172 Jan 14 01:07:00.858: INFO: Pod restored-pvc-tester-qppln has the following logs: Jan 14 01:07:00.858: INFO: Deleting pod "restored-pvc-tester-qppln" in namespace "snapshotting-64" Jan 14 01:07:00.969: INFO: Wait up to 5m0s for pod "restored-pvc-tester-qppln" to be fully deleted Jan 14 01:07:33.185: INFO: deleting snapshot "snapshotting-64"/"snapshot-l6gfj" ... skipping 26 lines ... 
[90mtest/e2e/storage/testsuites/snapshottable.go:113[0m [90mtest/e2e/storage/testsuites/snapshottable.go:176[0m should check snapshot fields, check restore correctly works, check deletion (ephemeral) [90mtest/e2e/storage/testsuites/snapshottable.go:177[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)","total":35,"completed":15,"skipped":902,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] test/e2e/storage/framework/testsuite.go:51 Jan 14 01:07:41.105: INFO: Distro debian doesn't support ntfs -- skipping ... skipping 125 lines ... [90mtest/e2e/storage/external/external.go:174[0m [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral [90mtest/e2e/storage/framework/testsuite.go:50[0m should create read/write inline ephemeral volume [90mtest/e2e/storage/testsuites/ephemeral.go:196[0m [90m------------------------------[0m {"msg":"PASSED External Storage [Driver: test.csi.azure.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume","total":35,"completed":16,"skipped":979,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m Jan 14 01:09:21.973: INFO: Running AfterSuite actions on all nodes Jan 14 01:09:21.973: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2 Jan 14 01:09:21.973: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 ... skipping 15 lines ... Jan 14 01:09:22.005: INFO: Running AfterSuite actions on node 1 [91m[1mSummarizing 1 Failure:[0m [91m[1m[Fail] [0m[90mExternal Storage [Driver: test.csi.azure.com] [0m[0m[Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] [0m[91m[1m[Measurement] should access to two volumes with the same volume mode and retain data across pod recreation on different node [0m [37mtest/e2e/storage/testsuites/multivolume.go:497[0m [1m[91mRan 89 of 7227 Specs in 2883.535 seconds[0m [1m[91mFAIL![0m -- [32m[1m88 Passed[0m | [91m[1m1 Failed[0m | [33m[1m0 Pending[0m | [36m[1m7138 Skipped[0m Ginkgo ran 1 suite in 48m7.118128939s Test Suite Failed + print_logs + sed -i s/disk.csi.azure.com/test.csi.azure.com/g deploy/example/storageclass-azuredisk-csi.yaml + bash ./hack/verify-examples.sh linux azurepubliccloud ephemeral test begin to create deployment examples ... storageclass.storage.k8s.io/managed-csi created Applying config "deploy/example/deployment.yaml" ... skipping 101 lines ... 
I0114 00:21:04.003468 1 reflector.go:257] Listing and watching *v1beta2.AzVolumeAttachment from sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132 I0114 00:21:04.002920 1 reflector.go:221] Starting reflector *v1beta2.AzVolume (30s) from sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132 I0114 00:21:04.003694 1 reflector.go:257] Listing and watching *v1beta2.AzVolume from sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132 I0114 00:21:04.103231 1 shared_informer.go:303] caches populated I0114 00:21:04.103262 1 azuredisk_v2.go:225] driver userAgent: test.csi.azure.com/latest-v2-9ef068a8cb36a997d4ea04b90c05c6f92a488a19 e2e-test I0114 00:21:04.103270 1 azure_disk_utils.go:564] reading cloud config from secret kube-system/azure-cloud-provider I0114 00:21:04.105340 1 azure_disk_utils.go:571] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0114 00:21:04.105355 1 azure_disk_utils.go:576] could not read cloud config from secret kube-system/azure-cloud-provider I0114 00:21:04.105360 1 azure_disk_utils.go:586] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0114 00:21:04.105379 1 azure_disk_utils.go:594] read cloud config from file: /etc/kubernetes/azure.json successfully I0114 00:21:04.105925 1 azure_auth.go:253] Using AzurePublicCloud environment I0114 00:21:04.105938 1 azure_auth.go:104] azure: using managed identity extension to retrieve access token I0114 00:21:04.105942 1 azure_auth.go:110] azure: using User Assigned MSI ID to retrieve access token ... skipping 29 lines ... I0114 00:21:04.106296 1 azure_vmasclient.go:73] Azure AvailabilitySetsClient (write ops) using rate limit config: QPS=100, bucket=1000 I0114 00:21:04.106335 1 azure.go:1024] attach/detach disk operation rate limiter configuration - Enabled: true QPS: 1.333000, Bucket: 240 I0114 00:21:04.106361 1 azuredisk_v2.go:248] disable UseInstanceMetadata for controller I0114 00:21:04.106370 1 azuredisk_v2.go:258] DisableAvailabilitySetNodes for controller since current VMType is vmss I0114 00:21:04.106378 1 azuredisk_v2.go:264] cloud: AzurePublicCloud, location: westeurope, rg: kubetest-rpwnaldb, VMType: vmss, PrimaryScaleSetName: k8s-agentpool1-35908214-vmss, PrimaryAvailabilitySetName: , DisableAvailabilitySetNodes: true I0114 00:21:04.106382 1 skus.go:123] NewNodeInfo: Starting to populate node and disk sku information. E0114 00:21:04.106397 1 azuredisk_v2.go:272] Failed to get node info. Error: NewNodeInfo: Failed to get instance type from Azure cloud provider, nodeName: , error: not a vmss instance I0114 00:21:04.106468 1 mount_linux.go:208] Detected OS without systemd I0114 00:21:04.106478 1 driver.go:81] Enabling controller service capability: CREATE_DELETE_VOLUME I0114 00:21:04.106483 1 driver.go:81] Enabling controller service capability: PUBLISH_UNPUBLISH_VOLUME I0114 00:21:04.106486 1 driver.go:81] Enabling controller service capability: CREATE_DELETE_SNAPSHOT I0114 00:21:04.106490 1 driver.go:81] Enabling controller service capability: CLONE_VOLUME I0114 00:21:04.106493 1 driver.go:81] Enabling controller service capability: EXPAND_VOLUME ... skipping 144 lines ... I0114 00:21:04.304728 1 shared_state.go:490] "msg"="Storing pod csi-test-node-bpnc8 and claim [] to podToClaimsMap map." 
"disk.csi.azure.com/request-id"="54defdf6-93a1-11ed-8c24-6045bd9ae695" I0114 00:21:04.304747 1 pod.go:91] "msg"="Creating replicas for pod kube-system/csi-test-node-bpnc8." "disk.csi.azure.com/request-id"="54defdf6-93a1-11ed-8c24-6045bd9ae695" "disk.csi/azure.com/pod-name"="kube-system/csi-test-node-bpnc8" I0114 00:21:04.304762 1 shared_state.go:314] "msg"="Getting requested volumes for pod (kube-system/csi-test-node-bpnc8)." "disk.csi.azure.com/request-id"="54defdf6-93a1-11ed-8c24-6045bd9ae695" "disk.csi/azure.com/pod-name"="kube-system/csi-test-node-bpnc8" I0114 00:21:04.304784 1 pod.go:99] "msg"="Pod kube-system/csi-test-node-bpnc8 has 0 volumes. Volumes: []" "disk.csi.azure.com/request-id"="54defdf6-93a1-11ed-8c24-6045bd9ae695" "disk.csi/azure.com/pod-name"="kube-system/csi-test-node-bpnc8" I0114 00:21:04.304807 1 pod.go:89] "msg"="Workflow completed with success." "disk.csi.azure.com/request-id"="54defdf6-93a1-11ed-8c24-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcilePod).createReplicas" "disk.csi/azure.com/pod-name"="kube-system/csi-test-node-bpnc8" "latency"=55100 I0114 00:21:04.304827 1 shared_state.go:420] "msg"="Adding pod csi-test-controller-d569d59d4-rg7xg to shared map with keyName kube-system/csi-test-controller-d569d59d4-rg7xg." "disk.csi.azure.com/request-id"="54defdf6-93a1-11ed-8c24-6045bd9ae695" I0114 00:21:04.305282 1 shared_state.go:426] "msg"="Pod spec of pod csi-test-controller-d569d59d4-rg7xg is: {Volumes:[{Name:socket-dir VolumeSource:{HostPath:nil EmptyDir:&EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,} GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:azure-cred VolumeSource:{HostPath:&HostPathVolumeSource{Path:/etc/kubernetes/,Type:*DirectoryOrCreate,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:csi-test-controller-config VolumeSource:{HostPath:nil EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:csi-test-controller-config,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,} VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:kube-api-access-rwjd8 VolumeSource:{HostPath:nil EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil 
Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,} PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}}] InitContainers:[] Containers:[{Name:csi-provisioner Image:mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.2.0 Command:[] Args:[--feature-gates=Topology=true --csi-address=$(ADDRESS) --v=2 --timeout=30s --leader-election --leader-election-namespace=kube-system --worker-threads=100 --extra-create-metadata=true --strict-topology=true --kube-api-qps=50 --kube-api-burst=100] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ADDRESS Value:/csi/csi.sock ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}] Claims:[]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-rwjd8 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:csi-attacher Image:mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0 Command:[] Args:[-v=2 -csi-address=$(ADDRESS) -timeout=600s -leader-election --leader-election-namespace=kube-system -worker-threads=500 -kube-api-qps=50 -kube-api-burst=100] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ADDRESS Value:/csi/csi.sock ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}] Claims:[]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-rwjd8 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:csi-snapshotter Image:mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1 Command:[] Args:[-csi-address=$(ADDRESS) -leader-election --leader-election-namespace=kube-system -v=2] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ADDRESS Value:/csi/csi.sock ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:104857600 scale:0} d:{Dec:<nil>} s:100Mi Format:BinarySI}] 
Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}] Claims:[]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-rwjd8 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:csi-resizer Image:mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0 Command:[] Args:[-csi-address=$(ADDRESS) -v=2 -leader-election --leader-election-namespace=kube-system -handle-volume-inuse-error=false -feature-gates=RecoverVolumeExpansionFailure=true -timeout=240s] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ADDRESS Value:/csi/csi.sock ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}] Claims:[]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-rwjd8 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:liveness-probe Image:mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0 Command:[] Args:[--csi-address=/csi/csi.sock --probe-timeout=3s --health-port=29602 --v=2] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[memory:{i:{value:104857600 scale:0} d:{Dec:<nil>} s:100Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}] Claims:[]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-rwjd8 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:azuredisk Image:k8sprow.azurecr.io/azuredisk-csi:latest-v2-9ef068a8cb36a997d4ea04b90c05c6f92a488a19 Command:[] Args:[--v=5 --config=/etc/csi-test-controller/config.yaml] WorkingDir: Ports:[{Name:healthz HostPort:29602 ContainerPort:29602 Protocol:TCP HostIP:} {Name:metrics HostPort:29604 ContainerPort:29604 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:AZURE_CREDENTIAL_FILE Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:azure-cred-file,},Key:path,Optional:*true,},SecretKeyRef:nil,}} {Name:CSI_ENDPOINT Value:unix:///csi/csi.sock ValueFrom:nil} {Name:AZURE_GO_SDK_LOG_LEVEL Value: ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] 
Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}] Claims:[]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:azure-cred ReadOnly:false MountPath:/etc/kubernetes/ SubPath: MountPropagation:<nil> SubPathExpr:} {Name:csi-test-controller-config ReadOnly:false MountPath:/etc/csi-test-controller SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-rwjd8 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{1 0 healthz},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,} ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false}] EphemeralContainers:[] RestartPolicy:Always TerminationGracePeriodSeconds:0xc000715100 ActiveDeadlineSeconds:<nil> DNSPolicy:ClusterFirst NodeSelector:map[kubernetes.io/os:linux] ServiceAccountName:csi-azuredisk-controller-sa DeprecatedServiceAccount:csi-azuredisk-controller-sa AutomountServiceAccountToken:<nil> NodeName:k8s-agentpool1-35908214-vmss000001 HostNetwork:true HostPID:false HostIPC:false ShareProcessNamespace:<nil> SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} ImagePullSecrets:[] Hostname: Subdomain: Affinity:nil SchedulerName:default-scheduler Tolerations:[{Key:node-role.kubernetes.io/master Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:node-role.kubernetes.io/controlplane Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:node-role.kubernetes.io/control-plane Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:node.kubernetes.io/not-ready Operator:Exists Value: Effect:NoExecute TolerationSeconds:0xc000715108} {Key:node.kubernetes.io/unreachable Operator:Exists Value: Effect:NoExecute TolerationSeconds:0xc000715110}] HostAliases:[] PriorityClassName:system-cluster-critical Priority:0xc000715118 DNSConfig:nil ReadinessGates:[] RuntimeClassName:<nil> EnableServiceLinks:0xc00071511c PreemptionPolicy:0xc00099b350 Overhead:map[] TopologySpreadConstraints:[] SetHostnameAsFQDN:<nil> OS:nil HostUsers:<nil> SchedulingGates:[] ResourceClaims:[]}. 
With volumes: [{Name:socket-dir VolumeSource:{HostPath:nil EmptyDir:&EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,} GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:azure-cred VolumeSource:{HostPath:&HostPathVolumeSource{Path:/etc/kubernetes/,Type:*DirectoryOrCreate,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:csi-test-controller-config VolumeSource:{HostPath:nil EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:csi-test-controller-config,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,} VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:kube-api-access-rwjd8 VolumeSource:{HostPath:nil EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,} PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}}]" "disk.csi.azure.com/request-id"="54defdf6-93a1-11ed-8c24-6045bd9ae695" I0114 00:21:04.305321 1 shared_state.go:464] "msg"="Pod csi-test-controller-d569d59d4-rg7xg: Skipping Volume {socket-dir {nil &EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}. No persistent volume exists." 
"disk.csi.azure.com/request-id"="54defdf6-93a1-11ed-8c24-6045bd9ae695" I0114 00:21:04.305350 1 shared_state.go:464] "msg"="Pod csi-test-controller-d569d59d4-rg7xg: Skipping Volume {azure-cred {&HostPathVolumeSource{Path:/etc/kubernetes/,Type:*DirectoryOrCreate,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}. No persistent volume exists." "disk.csi.azure.com/request-id"="54defdf6-93a1-11ed-8c24-6045bd9ae695" I0114 00:21:04.305378 1 shared_state.go:464] "msg"="Pod csi-test-controller-d569d59d4-rg7xg: Skipping Volume {csi-test-controller-config {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil &ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:csi-test-controller-config,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil}}. No persistent volume exists." "disk.csi.azure.com/request-id"="54defdf6-93a1-11ed-8c24-6045bd9ae695" I0114 00:21:04.305417 1 shared_state.go:464] "msg"="Pod csi-test-controller-d569d59d4-rg7xg: Skipping Volume {kube-api-access-rwjd8 {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil &ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,} nil nil nil nil nil}}. No persistent volume exists." "disk.csi.azure.com/request-id"="54defdf6-93a1-11ed-8c24-6045bd9ae695" I0114 00:21:04.305431 1 shared_state.go:490] "msg"="Storing pod csi-test-controller-d569d59d4-rg7xg and claim [] to podToClaimsMap map." "disk.csi.azure.com/request-id"="54defdf6-93a1-11ed-8c24-6045bd9ae695" I0114 00:21:04.305453 1 pod.go:91] "msg"="Creating replicas for pod kube-system/csi-test-controller-d569d59d4-rg7xg." "disk.csi.azure.com/request-id"="54defdf6-93a1-11ed-8c24-6045bd9ae695" "disk.csi/azure.com/pod-name"="kube-system/csi-test-controller-d569d59d4-rg7xg" ... skipping 280 lines ... I0114 00:21:04.316780 1 pod.go:91] "msg"="Creating replicas for pod kube-system/cloud-node-manager-lshxw." "disk.csi.azure.com/request-id"="54defdf6-93a1-11ed-8c24-6045bd9ae695" "disk.csi/azure.com/pod-name"="kube-system/cloud-node-manager-lshxw" I0114 00:21:04.316800 1 shared_state.go:314] "msg"="Getting requested volumes for pod (kube-system/cloud-node-manager-lshxw)." "disk.csi.azure.com/request-id"="54defdf6-93a1-11ed-8c24-6045bd9ae695" "disk.csi/azure.com/pod-name"="kube-system/cloud-node-manager-lshxw" I0114 00:21:04.316816 1 pod.go:99] "msg"="Pod kube-system/cloud-node-manager-lshxw has 0 volumes. Volumes: []" "disk.csi.azure.com/request-id"="54defdf6-93a1-11ed-8c24-6045bd9ae695" "disk.csi/azure.com/pod-name"="kube-system/cloud-node-manager-lshxw" I0114 00:21:04.316838 1 pod.go:89] "msg"="Workflow completed with success." 
"disk.csi.azure.com/request-id"="54defdf6-93a1-11ed-8c24-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcilePod).createReplicas" "disk.csi/azure.com/pod-name"="kube-system/cloud-node-manager-lshxw" "latency"=53300 I0114 00:21:04.316865 1 pod.go:150] "msg"="Workflow completed with success." "disk.csi.azure.com/request-id"="54defdf6-93a1-11ed-8c24-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcilePod).Recover" "latency"=13320011 I0114 00:21:04.344736 1 azuredisk_v2.go:435] "msg"="Workflow completed with success." "disk.csi.azure.com/request-id"="54defdf6-93a1-11ed-8c24-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).StartControllersAndDieOnExit.func1" "latency"=217586185 I0114 00:21:04.385618 1 node.go:90] AzDiskControllerManager "msg"="Node is now available. Will requeue failed replica creation requests." "controller"="node" "disk.csi.azure.com/node-name"="k8s-master-35908214-0" "namespace"="azure-disk-csi" "partition"="csi-azuredisk-controller" I0114 00:21:04.385637 1 node.go:90] AzDiskControllerManager "msg"="Node is now available. Will requeue failed replica creation requests." "controller"="node" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000002" "namespace"="azure-disk-csi" "partition"="csi-azuredisk-controller" I0114 00:21:04.385624 1 node.go:90] AzDiskControllerManager "msg"="Node is now available. Will requeue failed replica creation requests." "controller"="node" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "namespace"="azure-disk-csi" "partition"="csi-azuredisk-controller" I0114 00:21:04.385656 1 node.go:90] AzDiskControllerManager "msg"="Node is now available. Will requeue failed replica creation requests." "controller"="node" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "namespace"="azure-disk-csi" "partition"="csi-azuredisk-controller" I0114 00:21:04.385684 1 shared_state.go:1205] "msg"="Workflow completed with success." "disk.csi.azure.com/request-id"="55067077-93a1-11ed-8c24-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*SharedState).tryCreateFailedReplicas" "latency"=9000 I0114 00:21:04.385688 1 shared_state.go:1205] "msg"="Workflow completed with success." "disk.csi.azure.com/request-id"="550670fc-93a1-11ed-8c24-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*SharedState).tryCreateFailedReplicas" "latency"=9000 I0114 00:21:04.428864 1 shared_state.go:420] "msg"="Adding pod csi-test-node-bpnc8 to shared map with keyName kube-system/csi-test-node-bpnc8." 
I0114 00:21:04.429325 1 shared_state.go:426] "msg"="Pod spec of pod csi-test-node-bpnc8 is: {Volumes:[{Name:socket-dir VolumeSource:{HostPath:&HostPathVolumeSource{Path:/var/lib/kubelet/plugins/test.csi.azure.com,Type:*DirectoryOrCreate,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:mountpoint-dir VolumeSource:{HostPath:&HostPathVolumeSource{Path:/var/lib/kubelet/,Type:*DirectoryOrCreate,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:registration-dir VolumeSource:{HostPath:&HostPathVolumeSource{Path:/var/lib/kubelet/plugins_registry/,Type:*DirectoryOrCreate,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:azure-cred VolumeSource:{HostPath:&HostPathVolumeSource{Path:/etc/kubernetes/,Type:*DirectoryOrCreate,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:device-dir VolumeSource:{HostPath:&HostPathVolumeSource{Path:/dev,Type:*Directory,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:sys-devices-dir VolumeSource:{HostPath:&HostPathVolumeSource{Path:/sys/bus/scsi/devices,Type:*Directory,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:sys-class VolumeSource:{HostPath:&HostPathVolumeSource{Path:/sys/class/,Type:*Directory,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil 
VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:csi-test-node-config VolumeSource:{HostPath:nil EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:csi-test-node-config,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,} VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:kube-api-access-zgbvl VolumeSource:{HostPath:nil EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,} PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}}] InitContainers:[] Containers:[{Name:liveness-probe Image:mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0 Command:[] Args:[--csi-address=/csi/csi.sock --probe-timeout=3s --health-port=29603 --v=2] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[memory:{i:{value:104857600 scale:0} d:{Dec:<nil>} s:100Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}] Claims:[]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-zgbvl ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:node-driver-registrar Image:mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.5.1 Command:[] Args:[--csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH) --v=2] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ADDRESS Value:/csi/csi.sock ValueFrom:nil} {Name:DRIVER_REG_SOCK_PATH Value:/var/lib/kubelet/plugins/test.csi.azure.com/csi.sock ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:104857600 scale:0} d:{Dec:<nil>} s:100Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 
scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}] Claims:[]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:registration-dir ReadOnly:false MountPath:/registration SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-zgbvl ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/csi-node-driver-registrar --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH) --mode=kubelet-registration-probe],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,} ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:azuredisk Image:k8sprow.azurecr.io/azuredisk-csi:latest-v2-9ef068a8cb36a997d4ea04b90c05c6f92a488a19 Command:[] Args:[--v=6 --nodeid=$(KUBE_NODE_NAME) --config=/etc/csi-test-node/config.yaml] WorkingDir: Ports:[{Name:healthz HostPort:29603 ContainerPort:29603 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:AZURE_CREDENTIAL_FILE Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:azure-cred-file,},Key:path,Optional:*true,},SecretKeyRef:nil,}} {Name:CSI_ENDPOINT Value:unix:///csi/csi.sock ValueFrom:nil} {Name:KUBE_NODE_NAME Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {Name:AZURE_GO_SDK_LOG_LEVEL Value: ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:209715200 scale:0} d:{Dec:<nil>} s: Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}] Claims:[]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:mountpoint-dir ReadOnly:false MountPath:/var/lib/kubelet/ SubPath: MountPropagation:0xc0011020a0 SubPathExpr:} {Name:azure-cred ReadOnly:false MountPath:/etc/kubernetes/ SubPath: MountPropagation:<nil> SubPathExpr:} {Name:device-dir ReadOnly:false MountPath:/dev SubPath: MountPropagation:<nil> SubPathExpr:} {Name:sys-devices-dir ReadOnly:false MountPath:/sys/bus/scsi/devices SubPath: MountPropagation:<nil> SubPathExpr:} {Name:sys-class ReadOnly:false MountPath:/sys/class/ SubPath: MountPropagation:<nil> SubPathExpr:} {Name:csi-test-node-config ReadOnly:false MountPath:/etc/csi-test-node SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-zgbvl ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{1 0 healthz},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,} ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always 
SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} Stdin:false StdinOnce:false TTY:false}] EphemeralContainers:[] RestartPolicy:Always TerminationGracePeriodSeconds:0xc001074668 ActiveDeadlineSeconds:<nil> DNSPolicy:Default NodeSelector:map[kubernetes.io/os:linux] ServiceAccountName:csi-azuredisk-node-sa DeprecatedServiceAccount:csi-azuredisk-node-sa AutomountServiceAccountToken:<nil> NodeName:k8s-master-35908214-0 HostNetwork:true HostPID:false HostIPC:false ShareProcessNamespace:<nil> SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} ImagePullSecrets:[] Hostname: Subdomain: Affinity:&Affinity{NodeAffinity:&NodeAffinity{RequiredDuringSchedulingIgnoredDuringExecution:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{},MatchFields:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:metadata.name,Operator:In,Values:[k8s-master-35908214-0],},},},},},PreferredDuringSchedulingIgnoredDuringExecution:[]PreferredSchedulingTerm{},},PodAffinity:nil,PodAntiAffinity:nil,} SchedulerName:default-scheduler Tolerations:[{Key: Operator:Exists Value: Effect: TolerationSeconds:<nil>} {Key:node.kubernetes.io/not-ready Operator:Exists Value: Effect:NoExecute TolerationSeconds:<nil>} {Key:node.kubernetes.io/unreachable Operator:Exists Value: Effect:NoExecute TolerationSeconds:<nil>} {Key:node.kubernetes.io/disk-pressure Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:node.kubernetes.io/memory-pressure Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:node.kubernetes.io/pid-pressure Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:node.kubernetes.io/unschedulable Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:node.kubernetes.io/network-unavailable Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>}] HostAliases:[] PriorityClassName:system-node-critical Priority:0xc001074670 DNSConfig:nil ReadinessGates:[] RuntimeClassName:<nil> EnableServiceLinks:0xc001074674 PreemptionPolicy:0xc0011020c0 Overhead:map[] TopologySpreadConstraints:[] SetHostnameAsFQDN:<nil> OS:nil HostUsers:<nil> SchedulingGates:[] ResourceClaims:[]}. 
With volumes: [{Name:socket-dir VolumeSource:{HostPath:&HostPathVolumeSource{Path:/var/lib/kubelet/plugins/test.csi.azure.com,Type:*DirectoryOrCreate,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:mountpoint-dir VolumeSource:{HostPath:&HostPathVolumeSource{Path:/var/lib/kubelet/,Type:*DirectoryOrCreate,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:registration-dir VolumeSource:{HostPath:&HostPathVolumeSource{Path:/var/lib/kubelet/plugins_registry/,Type:*DirectoryOrCreate,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:azure-cred VolumeSource:{HostPath:&HostPathVolumeSource{Path:/etc/kubernetes/,Type:*DirectoryOrCreate,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:device-dir VolumeSource:{HostPath:&HostPathVolumeSource{Path:/dev,Type:*Directory,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:sys-devices-dir VolumeSource:{HostPath:&HostPathVolumeSource{Path:/sys/bus/scsi/devices,Type:*Directory,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:sys-class VolumeSource:{HostPath:&HostPathVolumeSource{Path:/sys/class/,Type:*Directory,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil 
PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:csi-test-node-config VolumeSource:{HostPath:nil EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:csi-test-node-config,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,} VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:kube-api-access-zgbvl VolumeSource:{HostPath:nil EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,} PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}}]" I0114 00:21:04.429374 1 shared_state.go:464] "msg"="Pod csi-test-node-bpnc8: Skipping Volume {socket-dir {&HostPathVolumeSource{Path:/var/lib/kubelet/plugins/test.csi.azure.com,Type:*DirectoryOrCreate,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}. No persistent volume exists." I0114 00:21:04.429400 1 shared_state.go:464] "msg"="Pod csi-test-node-bpnc8: Skipping Volume {mountpoint-dir {&HostPathVolumeSource{Path:/var/lib/kubelet/,Type:*DirectoryOrCreate,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}. No persistent volume exists." ... skipping 7 lines ... I0114 00:21:04.429572 1 shared_state.go:490] "msg"="Storing pod csi-test-node-bpnc8 and claim [] to podToClaimsMap map." I0114 00:21:04.429599 1 pod.go:91] "msg"="Creating replicas for pod kube-system/csi-test-node-bpnc8." "disk.csi.azure.com/request-id"="550d249e-93a1-11ed-8c24-6045bd9ae695" "disk.csi/azure.com/pod-name"="kube-system/csi-test-node-bpnc8" I0114 00:21:04.429614 1 shared_state.go:314] "msg"="Getting requested volumes for pod (kube-system/csi-test-node-bpnc8)." "disk.csi.azure.com/request-id"="550d249e-93a1-11ed-8c24-6045bd9ae695" "disk.csi/azure.com/pod-name"="kube-system/csi-test-node-bpnc8" I0114 00:21:04.429624 1 pod.go:99] "msg"="Pod kube-system/csi-test-node-bpnc8 has 0 volumes. Volumes: []" "disk.csi.azure.com/request-id"="550d249e-93a1-11ed-8c24-6045bd9ae695" "disk.csi/azure.com/pod-name"="kube-system/csi-test-node-bpnc8" I0114 00:21:04.429647 1 pod.go:89] "msg"="Workflow completed with success." 
"disk.csi.azure.com/request-id"="550d249e-93a1-11ed-8c24-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcilePod).createReplicas" "disk.csi/azure.com/pod-name"="kube-system/csi-test-node-bpnc8" "latency"=52001 I0114 00:21:04.529187 1 shared_state.go:420] "msg"="Adding pod csi-test-controller-d569d59d4-rg7xg to shared map with keyName kube-system/csi-test-controller-d569d59d4-rg7xg." I0114 00:21:04.529557 1 shared_state.go:426] "msg"="Pod spec of pod csi-test-controller-d569d59d4-rg7xg is: {Volumes:[{Name:socket-dir VolumeSource:{HostPath:nil EmptyDir:&EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,} GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:azure-cred VolumeSource:{HostPath:&HostPathVolumeSource{Path:/etc/kubernetes/,Type:*DirectoryOrCreate,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:csi-test-controller-config VolumeSource:{HostPath:nil EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:csi-test-controller-config,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,} VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:kube-api-access-rwjd8 VolumeSource:{HostPath:nil EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,} PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}}] InitContainers:[] Containers:[{Name:csi-provisioner Image:mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.2.0 Command:[] Args:[--feature-gates=Topology=true 
--csi-address=$(ADDRESS) --v=2 --timeout=30s --leader-election --leader-election-namespace=kube-system --worker-threads=100 --extra-create-metadata=true --strict-topology=true --kube-api-qps=50 --kube-api-burst=100] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ADDRESS Value:/csi/csi.sock ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}] Claims:[]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-rwjd8 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:csi-attacher Image:mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0 Command:[] Args:[-v=2 -csi-address=$(ADDRESS) -timeout=600s -leader-election --leader-election-namespace=kube-system -worker-threads=500 -kube-api-qps=50 -kube-api-burst=100] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ADDRESS Value:/csi/csi.sock ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}] Claims:[]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-rwjd8 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:csi-snapshotter Image:mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1 Command:[] Args:[-csi-address=$(ADDRESS) -leader-election --leader-election-namespace=kube-system -v=2] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ADDRESS Value:/csi/csi.sock ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:104857600 scale:0} d:{Dec:<nil>} s:100Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}] Claims:[]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-rwjd8 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:csi-resizer Image:mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0 Command:[] Args:[-csi-address=$(ADDRESS) -v=2 -leader-election --leader-election-namespace=kube-system -handle-volume-inuse-error=false -feature-gates=RecoverVolumeExpansionFailure=true -timeout=240s] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ADDRESS Value:/csi/csi.sock 
ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}] Claims:[]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-rwjd8 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:liveness-probe Image:mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0 Command:[] Args:[--csi-address=/csi/csi.sock --probe-timeout=3s --health-port=29602 --v=2] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[memory:{i:{value:104857600 scale:0} d:{Dec:<nil>} s:100Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}] Claims:[]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-rwjd8 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:azuredisk Image:k8sprow.azurecr.io/azuredisk-csi:latest-v2-9ef068a8cb36a997d4ea04b90c05c6f92a488a19 Command:[] Args:[--v=5 --config=/etc/csi-test-controller/config.yaml] WorkingDir: Ports:[{Name:healthz HostPort:29602 ContainerPort:29602 Protocol:TCP HostIP:} {Name:metrics HostPort:29604 ContainerPort:29604 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:AZURE_CREDENTIAL_FILE Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:azure-cred-file,},Key:path,Optional:*true,},SecretKeyRef:nil,}} {Name:CSI_ENDPOINT Value:unix:///csi/csi.sock ValueFrom:nil} {Name:AZURE_GO_SDK_LOG_LEVEL Value: ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}] Claims:[]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:azure-cred ReadOnly:false MountPath:/etc/kubernetes/ SubPath: MountPropagation:<nil> SubPathExpr:} {Name:csi-test-controller-config ReadOnly:false MountPath:/etc/csi-test-controller SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-rwjd8 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{1 0 healthz},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,} ReadinessProbe:nil 
StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false}] EphemeralContainers:[] RestartPolicy:Always TerminationGracePeriodSeconds:0xc0002978c8 ActiveDeadlineSeconds:<nil> DNSPolicy:ClusterFirst NodeSelector:map[kubernetes.io/os:linux] ServiceAccountName:csi-azuredisk-controller-sa DeprecatedServiceAccount:csi-azuredisk-controller-sa AutomountServiceAccountToken:<nil> NodeName:k8s-agentpool1-35908214-vmss000001 HostNetwork:true HostPID:false HostIPC:false ShareProcessNamespace:<nil> SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} ImagePullSecrets:[] Hostname: Subdomain: Affinity:nil SchedulerName:default-scheduler Tolerations:[{Key:node-role.kubernetes.io/master Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:node-role.kubernetes.io/controlplane Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:node-role.kubernetes.io/control-plane Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:node.kubernetes.io/not-ready Operator:Exists Value: Effect:NoExecute TolerationSeconds:0xc0002978e0} {Key:node.kubernetes.io/unreachable Operator:Exists Value: Effect:NoExecute TolerationSeconds:0xc0002978e8}] HostAliases:[] PriorityClassName:system-cluster-critical Priority:0xc0002979f0 DNSConfig:nil ReadinessGates:[] RuntimeClassName:<nil> EnableServiceLinks:0xc0002979f4 PreemptionPolicy:0xc000e0e300 Overhead:map[] TopologySpreadConstraints:[] SetHostnameAsFQDN:<nil> OS:nil HostUsers:<nil> SchedulingGates:[] ResourceClaims:[]}. 
With volumes: [{Name:socket-dir VolumeSource:{HostPath:nil EmptyDir:&EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,} GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:azure-cred VolumeSource:{HostPath:&HostPathVolumeSource{Path:/etc/kubernetes/,Type:*DirectoryOrCreate,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:csi-test-controller-config VolumeSource:{HostPath:nil EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:csi-test-controller-config,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,} VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:kube-api-access-rwjd8 VolumeSource:{HostPath:nil EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,} PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}}]" I0114 00:21:04.529598 1 shared_state.go:464] "msg"="Pod csi-test-controller-d569d59d4-rg7xg: Skipping Volume {socket-dir {nil &EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}. No persistent volume exists." I0114 00:21:04.529617 1 shared_state.go:464] "msg"="Pod csi-test-controller-d569d59d4-rg7xg: Skipping Volume {azure-cred {&HostPathVolumeSource{Path:/etc/kubernetes/,Type:*DirectoryOrCreate,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}. No persistent volume exists." 
I0114 00:21:04.529639 1 shared_state.go:464] "msg"="Pod csi-test-controller-d569d59d4-rg7xg: Skipping Volume {csi-test-controller-config {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil &ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:csi-test-controller-config,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil}}. No persistent volume exists." I0114 00:21:04.529665 1 shared_state.go:464] "msg"="Pod csi-test-controller-d569d59d4-rg7xg: Skipping Volume {kube-api-access-rwjd8 {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil &ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,} nil nil nil nil nil}}. No persistent volume exists." I0114 00:21:04.529678 1 shared_state.go:490] "msg"="Storing pod csi-test-controller-d569d59d4-rg7xg and claim [] to podToClaimsMap map." I0114 00:21:04.628659 1 shared_state.go:420] "msg"="Adding pod kube-proxy-pdrw8 to shared map with keyName kube-system/kube-proxy-pdrw8." ... skipping 75 lines ... I0114 00:21:05.029974 1 shared_state.go:490] "msg"="Storing pod azure-ip-masq-agent-t8g8m and claim [] to podToClaimsMap map." I0114 00:21:05.030004 1 pod.go:91] "msg"="Creating replicas for pod kube-system/azure-ip-masq-agent-t8g8m." "disk.csi.azure.com/request-id"="5568c1db-93a1-11ed-8c24-6045bd9ae695" "disk.csi/azure.com/pod-name"="kube-system/azure-ip-masq-agent-t8g8m" I0114 00:21:05.030019 1 shared_state.go:314] "msg"="Getting requested volumes for pod (kube-system/azure-ip-masq-agent-t8g8m)." "disk.csi.azure.com/request-id"="5568c1db-93a1-11ed-8c24-6045bd9ae695" "disk.csi/azure.com/pod-name"="kube-system/azure-ip-masq-agent-t8g8m" I0114 00:21:05.030034 1 pod.go:99] "msg"="Pod kube-system/azure-ip-masq-agent-t8g8m has 0 volumes. Volumes: []" "disk.csi.azure.com/request-id"="5568c1db-93a1-11ed-8c24-6045bd9ae695" "disk.csi/azure.com/pod-name"="kube-system/azure-ip-masq-agent-t8g8m" I0114 00:21:05.030067 1 pod.go:89] "msg"="Workflow completed with success." "disk.csi.azure.com/request-id"="5568c1db-93a1-11ed-8c24-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/controller.(*ReconcilePod).createReplicas" "disk.csi/azure.com/pod-name"="kube-system/azure-ip-masq-agent-t8g8m" "latency"=67100 I0114 00:21:05.045914 1 shared_state.go:420] "msg"="Adding pod csi-test-controller-d569d59d4-rg7xg to shared map with keyName kube-system/csi-test-controller-d569d59d4-rg7xg." 
I0114 00:21:05.046691 1 shared_state.go:426] "msg"="Pod spec of pod csi-test-controller-d569d59d4-rg7xg is: {Volumes:[{Name:socket-dir VolumeSource:{HostPath:nil EmptyDir:&EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,} GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:azure-cred VolumeSource:{HostPath:&HostPathVolumeSource{Path:/etc/kubernetes/,Type:*DirectoryOrCreate,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:csi-test-controller-config VolumeSource:{HostPath:nil EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:csi-test-controller-config,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,} VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:kube-api-access-rwjd8 VolumeSource:{HostPath:nil EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,} PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}}] InitContainers:[] Containers:[{Name:csi-provisioner Image:mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.2.0 Command:[] Args:[--feature-gates=Topology=true --csi-address=$(ADDRESS) --v=2 --timeout=30s --leader-election --leader-election-namespace=kube-system --worker-threads=100 --extra-create-metadata=true --strict-topology=true --kube-api-qps=50 --kube-api-burst=100] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ADDRESS Value:/csi/csi.sock ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} 
s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}] Claims:[]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-rwjd8 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:csi-attacher Image:mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0 Command:[] Args:[-v=2 -csi-address=$(ADDRESS) -timeout=600s -leader-election --leader-election-namespace=kube-system -worker-threads=500 -kube-api-qps=50 -kube-api-burst=100] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ADDRESS Value:/csi/csi.sock ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}] Claims:[]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-rwjd8 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:csi-snapshotter Image:mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1 Command:[] Args:[-csi-address=$(ADDRESS) -leader-election --leader-election-namespace=kube-system -v=2] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ADDRESS Value:/csi/csi.sock ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:104857600 scale:0} d:{Dec:<nil>} s:100Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}] Claims:[]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-rwjd8 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:csi-resizer Image:mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0 Command:[] Args:[-csi-address=$(ADDRESS) -v=2 -leader-election --leader-election-namespace=kube-system -handle-volume-inuse-error=false -feature-gates=RecoverVolumeExpansionFailure=true -timeout=240s] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ADDRESS Value:/csi/csi.sock ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}] Claims:[]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-rwjd8 ReadOnly:true 
MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:liveness-probe Image:mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0 Command:[] Args:[--csi-address=/csi/csi.sock --probe-timeout=3s --health-port=29602 --v=2] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[memory:{i:{value:104857600 scale:0} d:{Dec:<nil>} s:100Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}] Claims:[]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-rwjd8 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} {Name:azuredisk Image:k8sprow.azurecr.io/azuredisk-csi:latest-v2-9ef068a8cb36a997d4ea04b90c05c6f92a488a19 Command:[] Args:[--v=5 --config=/etc/csi-test-controller/config.yaml] WorkingDir: Ports:[{Name:healthz HostPort:29602 ContainerPort:29602 Protocol:TCP HostIP:} {Name:metrics HostPort:29604 ContainerPort:29604 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:AZURE_CREDENTIAL_FILE Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:azure-cred-file,},Key:path,Optional:*true,},SecretKeyRef:nil,}} {Name:CSI_ENDPOINT Value:unix:///csi/csi.sock ValueFrom:nil} {Name:AZURE_GO_SDK_LOG_LEVEL Value: ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI} memory:{i:{value:20971520 scale:0} d:{Dec:<nil>} s:20Mi Format:BinarySI}] Claims:[]} VolumeMounts:[{Name:socket-dir ReadOnly:false MountPath:/csi SubPath: MountPropagation:<nil> SubPathExpr:} {Name:azure-cred ReadOnly:false MountPath:/etc/kubernetes/ SubPath: MountPropagation:<nil> SubPathExpr:} {Name:csi-test-controller-config ReadOnly:false MountPath:/etc/csi-test-controller SubPath: MountPropagation:<nil> SubPathExpr:} {Name:kube-api-access-rwjd8 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil> SubPathExpr:}] VolumeDevices:[] LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{1 0 healthz},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,} ReadinessProbe:nil StartupProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false}] EphemeralContainers:[] RestartPolicy:Always TerminationGracePeriodSeconds:0xc000b384b8 ActiveDeadlineSeconds:<nil> DNSPolicy:ClusterFirst NodeSelector:map[kubernetes.io/os:linux] ServiceAccountName:csi-azuredisk-controller-sa 
DeprecatedServiceAccount:csi-azuredisk-controller-sa AutomountServiceAccountToken:<nil> NodeName:k8s-agentpool1-35908214-vmss000001 HostNetwork:true HostPID:false HostIPC:false ShareProcessNamespace:<nil> SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} ImagePullSecrets:[] Hostname: Subdomain: Affinity:nil SchedulerName:default-scheduler Tolerations:[{Key:node-role.kubernetes.io/master Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:node-role.kubernetes.io/controlplane Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:node-role.kubernetes.io/control-plane Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>} {Key:node.kubernetes.io/not-ready Operator:Exists Value: Effect:NoExecute TolerationSeconds:0xc000b384c0} {Key:node.kubernetes.io/unreachable Operator:Exists Value: Effect:NoExecute TolerationSeconds:0xc000b384c8}] HostAliases:[] PriorityClassName:system-cluster-critical Priority:0xc000b384d0 DNSConfig:nil ReadinessGates:[] RuntimeClassName:<nil> EnableServiceLinks:0xc000b384d4 PreemptionPolicy:0xc001090d00 Overhead:map[] TopologySpreadConstraints:[] SetHostnameAsFQDN:<nil> OS:nil HostUsers:<nil> SchedulingGates:[] ResourceClaims:[]}. With volumes: [{Name:socket-dir VolumeSource:{HostPath:nil EmptyDir:&EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,} GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:azure-cred VolumeSource:{HostPath:&HostPathVolumeSource{Path:/etc/kubernetes/,Type:*DirectoryOrCreate,} EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:csi-test-controller-config VolumeSource:{HostPath:nil EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:csi-test-controller-config,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,} VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil Projected:nil PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}} {Name:kube-api-access-rwjd8 VolumeSource:{HostPath:nil EmptyDir:nil GCEPersistentDisk:nil AWSElasticBlockStore:nil GitRepo:nil Secret:nil NFS:nil ISCSI:nil Glusterfs:nil PersistentVolumeClaim:nil RBD:nil FlexVolume:nil Cinder:nil CephFS:nil Flocker:nil DownwardAPI:nil FC:nil AzureFile:nil ConfigMap:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil 
Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,} PortworxVolume:nil ScaleIO:nil StorageOS:nil CSI:nil Ephemeral:nil}}]" I0114 00:21:05.046752 1 shared_state.go:464] "msg"="Pod csi-test-controller-d569d59d4-rg7xg: Skipping Volume {socket-dir {nil &EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}. No persistent volume exists." I0114 00:21:05.046788 1 shared_state.go:464] "msg"="Pod csi-test-controller-d569d59d4-rg7xg: Skipping Volume {azure-cred {&HostPathVolumeSource{Path:/etc/kubernetes/,Type:*DirectoryOrCreate,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}. No persistent volume exists." I0114 00:21:05.046823 1 shared_state.go:464] "msg"="Pod csi-test-controller-d569d59d4-rg7xg: Skipping Volume {csi-test-controller-config {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil &ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:csi-test-controller-config,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil}}. No persistent volume exists." I0114 00:21:05.046879 1 shared_state.go:464] "msg"="Pod csi-test-controller-d569d59d4-rg7xg: Skipping Volume {kube-api-access-rwjd8 {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil &ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,} nil nil nil nil nil}}. No persistent volume exists." I0114 00:21:05.046894 1 shared_state.go:490] "msg"="Storing pod csi-test-controller-d569d59d4-rg7xg and claim [] to podToClaimsMap map." I0114 00:21:05.046928 1 pod.go:91] "msg"="Creating replicas for pod kube-system/csi-test-controller-d569d59d4-rg7xg." "disk.csi.azure.com/request-id"="556b56fd-93a1-11ed-8c24-6045bd9ae695" "disk.csi/azure.com/pod-name"="kube-system/csi-test-controller-d569d59d4-rg7xg" ... skipping 31966 lines ... 
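Note on the volume-filtering lines above: every emptyDir/hostPath/configMap/projected source is skipped with "No persistent volume exists", and only PVC-backed volumes end up in podToClaimsMap. A minimal stand-alone sketch of that filtering, using the public k8s.io/api types, is below; claimsForPod is a made-up helper name for illustration and is not the driver's shared_state.go code.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// claimsForPod returns the PersistentVolumeClaim names referenced by the pod.
// Volumes backed by emptyDir, hostPath, configMap, projected sources, etc.
// carry no claim and are skipped, mirroring the "Skipping Volume ... No
// persistent volume exists." log lines above.
func claimsForPod(pod *v1.Pod) []string {
	var claims []string
	for _, vol := range pod.Spec.Volumes {
		if vol.PersistentVolumeClaim == nil {
			continue // no persistent volume exists for this source
		}
		claims = append(claims, vol.PersistentVolumeClaim.ClaimName)
	}
	return claims
}

func main() {
	pod := &v1.Pod{
		Spec: v1.PodSpec{
			Volumes: []v1.Volume{
				{Name: "socket-dir", VolumeSource: v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{}}},
				{Name: "data", VolumeSource: v1.VolumeSource{
					PersistentVolumeClaim: &v1.PersistentVolumeClaimVolumeSource{ClaimName: "pvc-data"},
				}},
			},
		},
	}
	fmt.Println(claimsForPod(pod)) // prints: [pvc-data]
}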
id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-b9233c7c-56a8-45a2-90be-0266b0c4a196","volume_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/pvc-b9233c7c-56a8-45a2-90be-0266b0c4a196/dev/1478dd54-b400-40a3-96a8-7e20b63cd1ec"} I0114 00:30:46.179847 1 utils.go:85] GRPC response: {"usage":[{"total":5368709120,"unit":1}]} I0114 00:30:47.000839 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:30:47.000903 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="7bef0195-93a2-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-624e8acc-83de-43c4-8991-47cd3ee66633" "latency"=87840855990 I0114 00:30:47.000926 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="7bef0195-93a2-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=87840964690 I0114 00:30:47.467710 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities I0114 00:30:47.467726 1 utils.go:79] GRPC request: {} I0114 00:30:47.467757 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} I0114 00:30:47.470324 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities ... skipping 35 lines ... I0114 00:30:57.223847 1 utils.go:78] GRPC call: /csi.v1.Node/NodeUnstageVolume I0114 00:30:57.223860 1 utils.go:79] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-624e8acc-83de-43c4-8991-47cd3ee66633","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-624e8acc-83de-43c4-8991-47cd3ee66633"} I0114 00:30:57.223895 1 nodeserver_v2.go:257] NodeUnstageVolume: unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-624e8acc-83de-43c4-8991-47cd3ee66633 W0114 00:30:57.224182 1 mount_helper_common.go:133] Warning: "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-624e8acc-83de-43c4-8991-47cd3ee66633" is not a mountpoint, deleting I0114 00:30:57.224228 1 nodeserver_v2.go:262] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-624e8acc-83de-43c4-8991-47cd3ee66633 successfully I0114 00:30:57.224238 1 utils.go:85] GRPC response: {} I0114 00:30:57.287885 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:30:57.287957 1 conditionwaiter.go:50] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="b09254b4-93a2-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-78de2432-e1e8-4d3a-b540-3d2095ff6079" "latency"=9816025080 I0114 00:30:57.287989 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="b09254b4-93a2-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=9816130680 I0114 00:30:57.881361 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdd under /dev/disk/azure/scsi1/ I0114 00:30:57.881397 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType I0114 00:30:57.881423 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-78de2432-e1e8-4d3a-b540-3d2095ff6079/globalmount with mount options([]) I0114 00:30:57.881435 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) ... skipping 119 lines ... I0114 00:31:40.964281 1 utils.go:79] GRPC request: {} I0114 00:31:40.964312 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:31:59.186386 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:31:59.186407 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:31:59.186506 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:31:59.186533 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:31:59.186553 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:31:59.254328 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:31:59.286596 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:31:59Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="db6062f8-93a2-11ed-88b1-6045bd9ae695" I0114 00:31:59.291168 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000001/status 200 OK in 4 milliseconds I0114 00:31:59.291376 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="db6062f8-93a2-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=4789904 I0114 00:32:10.963561 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:32:10.963580 1 utils.go:79] GRPC request: {} I0114 00:32:10.963625 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:32:13.363365 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:32:13.363422 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="cb24a12e-93a2-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-e85aa226-23fa-46c0-9dd5-fd7740b40d7a" "latency"=41311932726 I0114 00:32:13.363448 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="cb24a12e-93a2-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=41312048526 I0114 00:32:15.158850 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun1 by sdc under /dev/disk/azure/scsi1/ I0114 00:32:15.158899 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun1. perfProfile none accountType I0114 00:32:15.158932 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun1 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e85aa226-23fa-46c0-9dd5-fd7740b40d7a/globalmount with mount options([nouuid]) I0114 00:32:15.158946 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun1" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun1]) ... skipping 106 lines ... I0114 00:32:35.455523 1 mount_linux.go:183] Mounting cmd (mount) with arguments ( -o bind,remount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e85aa226-23fa-46c0-9dd5-fd7740b40d7a/globalmount /var/lib/kubelet/pods/edd9e1a7-2f65-4481-9f1c-21482eb153ca/volumes/kubernetes.io~csi/pvc-e85aa226-23fa-46c0-9dd5-fd7740b40d7a/mount) I0114 00:32:35.456300 1 nodeserver_v2.go:353] NodePublishVolume: mount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e85aa226-23fa-46c0-9dd5-fd7740b40d7a/globalmount at /var/lib/kubelet/pods/edd9e1a7-2f65-4481-9f1c-21482eb153ca/volumes/kubernetes.io~csi/pvc-e85aa226-23fa-46c0-9dd5-fd7740b40d7a/mount successfully I0114 00:32:35.456316 1 utils.go:85] GRPC response: {} I0114 00:32:40.964070 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:32:40.964089 1 utils.go:79] GRPC request: {} I0114 00:32:40.964128 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:32:42.553048 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:32:42.553104 1 conditionwaiter.go:50] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="efc082c9-93a2-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-fe129bf6-ad90-47c6-972a-72be345120bd" "latency"=9082065072 I0114 00:32:42.553137 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="efc082c9-93a2-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=9082174472 I0114 00:32:43.137300 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdd under /dev/disk/azure/scsi1/ I0114 00:32:43.137340 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType I0114 00:32:43.137365 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-fe129bf6-ad90-47c6-972a-72be345120bd/globalmount with mount options([]) I0114 00:32:43.137373 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) ... skipping 81 lines ... I0114 00:32:58.583817 1 utils.go:85] GRPC response: {} I0114 00:32:59.181346 1 reflector.go:559] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: Watch close - *v1beta2.AzVolumeAttachment total 167 items received I0114 00:32:59.187474 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:32:59.187491 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:32:59.187509 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:32:59.187526 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:32:59.187554 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:32:59.187580 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:32:59.189170 1 round_trippers.go:553] GET https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/azvolumeattachments?allowWatchBookmarks=true&resourceVersion=9799&timeout=8m25s&timeoutSeconds=505&watch=true 200 OK in 7 milliseconds I0114 00:32:59.255387 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:32:59.286795 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:32:59Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="ff23afdd-93a2-11ed-88b1-6045bd9ae695" I0114 00:32:59.292649 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000001/status 200 OK in 5 milliseconds I0114 00:32:59.292838 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="ff23afdd-93a2-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=6096905 I0114 00:33:08.022446 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:33:08.022514 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="f5a8326a-93a2-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-4ec4e10d-01dd-40ca-8623-ce63ed3ffe3f" "latency"=24644495952 I0114 00:33:08.022548 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="f5a8326a-93a2-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=24644602052 I0114 00:33:08.025170 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:33:08.025226 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="f5b7da8a-93a2-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-ced8e384-a073-4a77-9a2d-8f6a9ef1856e" "latency"=24544610366 I0114 00:33:08.025258 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="f5b7da8a-93a2-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=24544703766 I0114 00:33:08.627590 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun3 by sde under /dev/disk/azure/scsi1/ I0114 00:33:08.627644 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun3. 
perfProfile none accountType I0114 00:33:08.627680 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun3 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4ec4e10d-01dd-40ca-8623-ce63ed3ffe3f/globalmount with mount options([]) I0114 00:33:08.627699 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun3" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun3]) ... skipping 128 lines ... I0114 00:33:25.952665 1 nodeserver_v2.go:262] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e85aa226-23fa-46c0-9dd5-fd7740b40d7a/globalmount successfully I0114 00:33:25.952678 1 utils.go:85] GRPC response: {} I0114 00:33:29.188127 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:33:29.188151 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:33:29.188168 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:33:29.188201 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:33:29.188273 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:33:29.255466 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:33:29.286713 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:33:29Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="11054fb7-93a3-11ed-88b1-6045bd9ae695" I0114 00:33:29.293251 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000001/status 200 OK in 6 milliseconds I0114 00:33:29.293839 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="11054fb7-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=7173406 I0114 00:33:40.965344 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:33:40.965367 1 utils.go:79] GRPC request: {} I0114 00:33:40.965413 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:33:48.927791 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:33:48.927859 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="0df3d980-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-30fef56c-4f70-4aeb-9157-b8f6bb80ff4d" "latency"=24788717275 I0114 00:33:48.927888 1 crdprovisioner.go:743] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="0df3d980-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=24788824975 I0114 00:33:49.527327 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun4 by sdd under /dev/disk/azure/scsi1/ I0114 00:33:49.527385 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun4. perfProfile none accountType I0114 00:33:49.527416 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun4 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-30fef56c-4f70-4aeb-9157-b8f6bb80ff4d/globalmount with mount options([]) I0114 00:33:49.527427 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun4" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun4]) ... skipping 68 lines ... I0114 00:33:54.973586 1 utils.go:79] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-30fef56c-4f70-4aeb-9157-b8f6bb80ff4d","volume_path":"/var/lib/kubelet/pods/a47b2b2a-57e9-47f0-ad31-d5e2dcf1ea94/volumes/kubernetes.io~csi/pvc-30fef56c-4f70-4aeb-9157-b8f6bb80ff4d/mount"} I0114 00:33:54.973662 1 utils.go:85] GRPC response: {"usage":[{"available":5179580416,"total":5196382208,"unit":1,"used":24576},{"available":327669,"total":327680,"unit":2,"used":11}]} I0114 00:33:59.190348 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:33:59.190394 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:33:59.190439 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:33:59.190480 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:33:59.190553 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:33:59.190576 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:33:59.190591 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:33:59.255909 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:33:59.286152 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:33:59Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="22e6dd27-93a3-11ed-88b1-6045bd9ae695" I0114 00:33:59.291639 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000001/status 200 OK in 5 milliseconds I0114 00:33:59.291843 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="22e6dd27-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=5731801 I0114 00:34:07.169664 1 reflector.go:559] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: Watch close - *v1beta2.AzDriverNode total 60 items received I0114 00:34:07.171932 1 round_trippers.go:553] GET https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/azdrivernodes?allowWatchBookmarks=true&resourceVersion=10529&timeout=9m17s&timeoutSeconds=557&watch=true 200 OK in 2 milliseconds ... skipping 22 lines ... I0114 00:34:27.246444 1 reflector.go:559] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: Watch close - *v1beta2.AzDriverNode total 23 items received I0114 00:34:27.252678 1 round_trippers.go:553] GET https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dk8s-agentpool1-35908214-vmss000001&resourceVersion=10741&timeout=7m56s&timeoutSeconds=476&watch=true 200 OK in 6 milliseconds I0114 00:34:29.190655 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:34:29.190722 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:34:29.190756 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:34:29.190807 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:34:29.190839 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:34:29.190870 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:34:29.190892 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:34:29.256061 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:34:29.286276 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:34:29Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="34c88530-93a3-11ed-88b1-6045bd9ae695" I0114 00:34:29.292059 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000001/status 200 OK in 5 milliseconds I0114 00:34:29.292747 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="34c88530-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=6493905 I0114 00:34:40.963900 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:34:40.963918 1 utils.go:79] GRPC request: {} I0114 00:34:40.963951 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:34:59.190789 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:34:59.190864 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:34:59.190897 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:34:59.190923 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:34:59.190991 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:34:59.191027 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:34:59.191039 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:34:59.256183 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:34:59.286522 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:34:59Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="46aa31c6-93a3-11ed-88b1-6045bd9ae695" I0114 00:34:59.292020 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000001/status 200 OK in 5 milliseconds I0114 00:34:59.292190 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="46aa31c6-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=5681904 I0114 00:35:04.760553 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:35:04.760633 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="1d40b5b2-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-8b103945-ab42-494b-a9d8-ffeb9774f208" "latency"=74951938388 I0114 00:35:04.760672 1 crdprovisioner.go:743] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="1d40b5b2-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=74952065288 I0114 00:35:04.761095 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:35:04.761144 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="1d40c1e4-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-173ad0dc-8bb2-41c7-96bf-09c168a34789" "latency"=74952159987 I0114 00:35:04.761173 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="1d40c1e4-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=74952265788 I0114 00:35:04.763674 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:35:04.763725 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="1d31ba9b-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-3f86869e-98b6-4ea0-8d22-72cc9e65eaa3" "latency"=75053116041 I0114 00:35:04.763760 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="1d31ba9b-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=75053340941 I0114 00:35:05.332676 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities I0114 00:35:05.332698 1 utils.go:79] GRPC request: {} I0114 00:35:05.332733 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} I0114 00:35:05.335318 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities ... skipping 161 lines ... 
I0114 00:35:19.719661 1 nodeserver_v2.go:257] NodeUnstageVolume: unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-173ad0dc-8bb2-41c7-96bf-09c168a34789/globalmount I0114 00:35:19.719686 1 mount_helper_common.go:99] "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-173ad0dc-8bb2-41c7-96bf-09c168a34789/globalmount" is a mountpoint, unmounting I0114 00:35:19.719696 1 mount_linux.go:294] Unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-173ad0dc-8bb2-41c7-96bf-09c168a34789/globalmount W0114 00:35:19.731344 1 mount_helper_common.go:133] Warning: "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-173ad0dc-8bb2-41c7-96bf-09c168a34789/globalmount" is not a mountpoint, deleting I0114 00:35:19.731400 1 nodeserver_v2.go:262] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-173ad0dc-8bb2-41c7-96bf-09c168a34789/globalmount successfully I0114 00:35:19.731414 1 utils.go:85] GRPC response: {} I0114 00:35:25.069867 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:35:25.069940 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="4a456bdc-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-a74a730a-37f1-4fed-998e-12ddcc5ed8d2" "latency"=19732881571 I0114 00:35:25.069981 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="4a456bdc-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=19733030771 I0114 00:35:25.657579 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun1 by sdg under /dev/disk/azure/scsi1/ I0114 00:35:25.657618 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun1. perfProfile none accountType I0114 00:35:25.657641 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun1 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a74a730a-37f1-4fed-998e-12ddcc5ed8d2/globalmount with mount options([]) I0114 00:35:25.657650 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun1" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun1]) ... skipping 65 lines ... 
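The NodeUnstageVolume path above (unmount the staging path only if it is still a mountpoint, then delete the now-empty directory, which is what the "is not a mountpoint, deleting" warning refers to) could look roughly like the sketch below; isMountPoint and unstage are hypothetical names, not the driver's mount helpers.

package main

import (
	"os"
	"os/exec"
)

// isMountPoint uses `mountpoint -q`, which exits 0 only for an active mount.
func isMountPoint(path string) bool {
	return exec.Command("mountpoint", "-q", path).Run() == nil
}

// unstage unmounts the staging path if it is still mounted, then removes the
// empty directory left behind.
func unstage(stagingPath string) error {
	if isMountPoint(stagingPath) {
		if err := exec.Command("umount", stagingPath).Run(); err != nil {
			return err
		}
	}
	return os.Remove(stagingPath)
}

func main() {
	_ = unstage("/var/lib/kubelet/plugins/kubernetes.io/csi/pv/<pv-name>/globalmount")
}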
I0114 00:35:57.295785 1 utils.go:79] GRPC request: {} I0114 00:35:57.295811 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} I0114 00:35:57.296411 1 utils.go:78] GRPC call: /csi.v1.Node/NodeStageVolume I0114 00:35:57.296426 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"2"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-0f742ef5-70f7-426c-be54-bc767d9549de/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext3"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-0f742ef5-70f7-426c-be54-bc767d9549de","csi.storage.k8s.io/pvc/name":"test.csi.azure.comkgdm6","csi.storage.k8s.io/pvc/namespace":"volume-7372","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-0f742ef5-70f7-426c-be54-bc767d9549de"} I0114 00:35:57.296624 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-0f742ef5-70f7-426c-be54-bc767d9549de-k8s-agentpool1-35908214-vmss000001-attachment) I0114 00:35:59.192118 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:35:59.192227 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:35:59.192242 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:35:59.192271 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:35:59.192282 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:35:59.257364 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:35:59.285978 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:35:59Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="6a6d6170-93a3-11ed-88b1-6045bd9ae695" I0114 00:35:59.292618 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000001/status 200 OK in 6 milliseconds I0114 00:35:59.292798 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="6a6d6170-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=6879705 I0114 00:36:10.964037 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:36:10.964056 1 utils.go:79] GRPC request: {} I0114 00:36:10.964097 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:36:21.626114 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:36:21.626180 1 conditionwaiter.go:50] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="693dd648-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-0f742ef5-70f7-426c-be54-bc767d9549de" "latency"=24329477345 I0114 00:36:21.626212 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="693dd648-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=24329596545 I0114 00:36:23.047518 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities I0114 00:36:23.047532 1 utils.go:79] GRPC request: {} I0114 00:36:23.047560 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} I0114 00:36:23.050227 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities ... skipping 38 lines ... I0114 00:36:24.579620 1 nodeserver_v2.go:353] NodePublishVolume: mount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-0f742ef5-70f7-426c-be54-bc767d9549de/globalmount at /var/lib/kubelet/pods/0410e4d4-274e-4566-8010-f7caf90c9a72/volumes/kubernetes.io~csi/pvc-0f742ef5-70f7-426c-be54-bc767d9549de/mount successfully I0114 00:36:24.579634 1 utils.go:85] GRPC response: {} I0114 00:36:29.193044 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:36:29.193071 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:36:29.193094 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:36:29.193125 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:36:29.193204 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:36:29.257497 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:36:29.286754 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:36:29Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="7c4f2412-93a3-11ed-88b1-6045bd9ae695" I0114 00:36:29.292803 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000001/status 200 OK in 5 milliseconds I0114 00:36:29.292992 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="7c4f2412-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=6254205 I0114 00:36:31.183545 1 reflector.go:559] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.Node total 109 items received I0114 00:36:31.187067 1 round_trippers.go:553] GET https://10.0.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=12078&timeout=8m47s&timeoutSeconds=527&watch=true 200 OK in 3 milliseconds I0114 00:36:31.907455 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:36:31.907514 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="7897c286-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-0eed43ec-9c9c-4b6a-b9fe-03593755ff21" "latency"=8855678152 I0114 00:36:31.907544 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="7897c286-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=8855789952 I0114 00:36:32.317775 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities I0114 00:36:32.317802 1 utils.go:79] GRPC request: {} I0114 00:36:32.317835 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} I0114 00:36:32.318506 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities ... skipping 147 lines ... I0114 00:36:53.820754 1 nodeserver_v2.go:262] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-0f742ef5-70f7-426c-be54-bc767d9549de/globalmount successfully I0114 00:36:53.820768 1 utils.go:85] GRPC response: {} I0114 00:36:59.194085 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:36:59.194111 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:36:59.194116 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:36:59.194087 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:36:59.194247 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:36:59.194277 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:36:59.258470 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:36:59.286797 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:36:59Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="8e30c59c-93a3-11ed-88b1-6045bd9ae695" I0114 00:36:59.293706 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000001/status 200 OK in 6 milliseconds I0114 00:36:59.294279 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="8e30c59c-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=7591407 I0114 00:37:02.342217 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:37:02.342274 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="7e1de432-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-2d19ea1d-6cf8-442d-8ce9-4017a1ede336" "latency"=30022775123 I0114 00:37:02.342303 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="7e1de432-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=30022896123 I0114 00:37:02.342997 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:37:02.343035 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="7e3cd43d-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-6d85fab6-ce9c-413a-8b2c-983f5d361196" "latency"=29820820541 I0114 00:37:02.343066 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="7e3cd43d-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=29820900941 I0114 00:37:02.932771 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun1 by sde under /dev/disk/azure/scsi1/ I0114 00:37:02.932810 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun1. perfProfile none accountType I0114 00:37:02.932833 1 utils.go:85] GRPC response: {} I0114 00:37:02.933062 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun3 by sdf under /dev/disk/azure/scsi1/ ... skipping 95 lines ... 
I0114 00:37:19.805552 1 mount_linux.go:183] Mounting cmd (mount) with arguments ( -o bind,remount /dev/disk/azure/scsi1/lun1 /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvc-2d19ea1d-6cf8-442d-8ce9-4017a1ede336/f47708a5-af2c-42d5-8ed9-ac556ddc0763) I0114 00:37:19.806895 1 nodeserver_v2.go:353] NodePublishVolume: mount /dev/disk/azure/scsi1/lun1 at /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvc-2d19ea1d-6cf8-442d-8ce9-4017a1ede336/f47708a5-af2c-42d5-8ed9-ac556ddc0763 successfully I0114 00:37:19.806917 1 utils.go:85] GRPC response: {} I0114 00:37:29.194644 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:37:29.194714 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:37:29.194787 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:37:29.194831 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:37:29.194839 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:37:29.259020 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:37:29.286226 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:37:29Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="a012550a-93a3-11ed-88b1-6045bd9ae695" I0114 00:37:29.294884 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000001/status 200 OK in 8 milliseconds I0114 00:37:29.295060 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="a012550a-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=8868609 I0114 00:37:31.574475 1 utils.go:78] GRPC call: /csi.v1.Node/NodeUnpublishVolume ... skipping 10 lines ... 
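The NodePublishVolume entries above bind-mount the resolved device (or the staged global mount) into the pod's publish path, followed by a `-o bind,remount` pass to apply mount options. A simplified sketch of that two-step bind mount, shelling out to mount(8) as the log shows; the helper names are hypothetical and error handling is trimmed compared with the real driver:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// bindPublish bind-mounts source onto target, then remounts the bind with the
// requested options — the same two mount(8) invocations visible in the log.
// Illustrative sketch only; the real driver goes through shared mount utilities.
func bindPublish(source, target string, readOnly bool) error {
	// For a raw block volume the publish target is a file, not a directory.
	if err := os.MkdirAll(filepath.Dir(target), 0o750); err != nil {
		return err
	}
	f, err := os.OpenFile(target, os.O_CREATE, 0o660)
	if err != nil {
		return err
	}
	f.Close()

	// Step 1: plain bind mount.
	if out, err := exec.Command("mount", "-o", "bind", source, target).CombinedOutput(); err != nil {
		return fmt.Errorf("bind mount failed: %v: %s", err, out)
	}
	// Step 2: remount the bind to apply options such as ro.
	opts := "bind,remount"
	if readOnly {
		opts += ",ro"
	}
	if out, err := exec.Command("mount", "-o", opts, source, target).CombinedOutput(); err != nil {
		return fmt.Errorf("bind remount failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := bindPublish("/dev/disk/azure/scsi1/lun1",
		"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/example-pv/example-pod", false); err != nil {
		fmt.Println(err)
	}
}
```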
I0114 00:37:31.637649 1 utils.go:78] GRPC call: /csi.v1.Node/NodeUnstageVolume I0114 00:37:31.637850 1 utils.go:79] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-2d19ea1d-6cf8-442d-8ce9-4017a1ede336","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-2d19ea1d-6cf8-442d-8ce9-4017a1ede336"} I0114 00:37:31.637909 1 nodeserver_v2.go:257] NodeUnstageVolume: unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-2d19ea1d-6cf8-442d-8ce9-4017a1ede336 W0114 00:37:31.638198 1 mount_helper_common.go:133] Warning: "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-2d19ea1d-6cf8-442d-8ce9-4017a1ede336" is not a mountpoint, deleting I0114 00:37:31.638256 1 nodeserver_v2.go:262] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-2d19ea1d-6cf8-442d-8ce9-4017a1ede336 successfully I0114 00:37:31.638280 1 utils.go:85] GRPC response: {} I0114 00:37:38.039179 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:37:38.039242 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="90a45708-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-13ca5559-964a-471f-990d-4c2a1a1b6f48" "latency"=34639641372 I0114 00:37:38.039275 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="90a45708-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=34639756572 I0114 00:37:38.629000 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun4 by sdd under /dev/disk/azure/scsi1/ I0114 00:37:38.629041 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun4. perfProfile none accountType I0114 00:37:38.629052 1 utils.go:85] GRPC response: {} I0114 00:37:38.641553 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities ... skipping 51 lines ... 
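The NodeUnstageVolume entries and mount_helper_common warnings above follow a common cleanup pattern: unmount the staging path if it is still a mount point, otherwise just warn and remove the leftover path. A compact sketch of that behaviour, assuming a simplified, hypothetical `isMountPoint` check based on /proc/self/mountinfo:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// isMountPoint reports whether path appears as a mount target in
// /proc/self/mountinfo. Simplified: the real helpers also handle bind
// mounts of files and symlinked paths.
func isMountPoint(path string) (bool, error) {
	f, err := os.Open("/proc/self/mountinfo")
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) > 4 && fields[4] == path {
			return true, nil
		}
	}
	return false, sc.Err()
}

// cleanupStagingPath mirrors the log: unmount when mounted, otherwise warn
// that the path is not a mountpoint and delete it. Hypothetical helper.
func cleanupStagingPath(path string) error {
	mounted, err := isMountPoint(path)
	if err != nil {
		return err
	}
	if mounted {
		if out, err := exec.Command("umount", path).CombinedOutput(); err != nil {
			return fmt.Errorf("umount %s: %v: %s", path, err, out)
		}
	} else {
		fmt.Printf("warning: %q is not a mountpoint, deleting\n", path)
	}
	return os.RemoveAll(path)
}

func main() {
	_ = cleanupStagingPath("/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/example-pv")
}
```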
I0114 00:37:59.719367 1 utils.go:78] GRPC call: /csi.v1.Node/NodeStageVolume I0114 00:37:59.719392 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-6f18bc18-2128-4919-9814-db21f964d153/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-6f18bc18-2128-4919-9814-db21f964d153","csi.storage.k8s.io/pvc/name":"test.csi.azure.comrrgr8","csi.storage.k8s.io/pvc/namespace":"provisioning-8094","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-6f18bc18-2128-4919-9814-db21f964d153"} I0114 00:37:59.719637 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-6f18bc18-2128-4919-9814-db21f964d153-k8s-agentpool1-35908214-vmss000001-attachment) I0114 00:38:10.964702 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:38:10.964724 1 utils.go:79] GRPC request: {} I0114 00:38:10.964772 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:38:19.695153 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:38:19.695235 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="b23619f2-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-6f18bc18-2128-4919-9814-db21f964d153" "latency"=19975504883 I0114 00:38:19.695277 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="b23619f2-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=19975683383 I0114 00:38:20.162024 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities I0114 00:38:20.162044 1 utils.go:79] GRPC request: {} I0114 00:38:20.162090 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} I0114 00:38:20.165095 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities ... skipping 66 lines ... 
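The sequence above — "Adding a condition function for azvolumeattachments", repeated "condition result: succeeded: false/true", then the conditionwaiter's "Workflow completed with success" with a latency — is the driver waiting for its attachment CR to report attached before staging. A generic sketch of that wait-for-condition pattern with a context deadline; the real watcher is informer/event driven, and `attachmentReady` here is only a stand-in probe:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// waitForCondition re-evaluates check until it succeeds, the context expires,
// or check returns a hard error. The driver's version is event-driven with
// periodic resyncs, but the contract is the same.
func waitForCondition(ctx context.Context, interval time.Duration, check func() (bool, error)) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for condition: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	// Stand-in for "is the attachment for this volume/node attached yet?".
	attempts := 0
	attachmentReady := func() (bool, error) {
		attempts++
		return attempts >= 3, nil // succeeded: false, false, then true
	}

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	start := time.Now()
	if err := waitForCondition(ctx, 2*time.Second, attachmentReady); err != nil {
		fmt.Println("wait failed:", err)
		return
	}
	fmt.Printf("workflow completed with success, latency=%v\n", time.Since(start))
}
```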
I0114 00:38:27.531560 1 nodeserver_v2.go:262] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-6f18bc18-2128-4919-9814-db21f964d153/globalmount successfully I0114 00:38:27.531573 1 utils.go:85] GRPC response: {} I0114 00:38:29.195770 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:38:29.195794 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:38:29.195812 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:38:29.195835 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:38:29.195874 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:38:29.195888 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:38:29.260196 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:38:29.286404 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:38:29Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="c3d5a1fc-93a3-11ed-88b1-6045bd9ae695" I0114 00:38:29.291406 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000001/status 200 OK in 4 milliseconds I0114 00:38:29.291705 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="c3d5a1fc-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=5323204 I0114 00:38:29.977185 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:38:29.977247 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="bf0eb4e7-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-9cab9f95-2a1e-4053-9b32-94e971f429c5" "latency"=8705366895 I0114 00:38:29.977281 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="bf0eb4e7-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=8705479695 I0114 00:38:29.984671 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:38:29.984730 1 conditionwaiter.go:50] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="be66130e-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-0313d2f1-ffe3-4f75-9aa9-33cecec75a80" "latency"=9818016894 I0114 00:38:29.984756 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="be66130e-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=9818109494 I0114 00:38:30.578033 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun1 by sde under /dev/disk/azure/scsi1/ I0114 00:38:30.578083 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun1. perfProfile none accountType I0114 00:38:30.578098 1 utils.go:85] GRPC response: {} I0114 00:38:30.578397 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun2 by sdd under /dev/disk/azure/scsi1/ ... skipping 87 lines ... I0114 00:38:49.475363 1 utils.go:79] GRPC request: {} I0114 00:38:49.475418 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} I0114 00:38:49.476060 1 utils.go:78] GRPC call: /csi.v1.Node/NodeStageVolume I0114 00:38:49.476071 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"3"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-73d8a511-be27-412a-8dfd-7b6d222fb4ef","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-73d8a511-be27-412a-8dfd-7b6d222fb4ef","csi.storage.k8s.io/pvc/name":"test.csi.azure.com4gsbh","csi.storage.k8s.io/pvc/namespace":"multivolume-7841","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-73d8a511-be27-412a-8dfd-7b6d222fb4ef"} I0114 00:38:49.476248 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-73d8a511-be27-412a-8dfd-7b6d222fb4ef-k8s-agentpool1-35908214-vmss000001-attachment) I0114 00:38:59.199091 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:38:59.199201 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:38:59.199224 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:38:59.199244 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:38:59.199248 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:38:59.260653 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:38:59.286887 1 azuredisk_v2.go:578] "msg"="Updating 
heartbeat" "LastHeartbeatTime"="2023-01-14T00:38:59Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="d5b75770-93a3-11ed-88b1-6045bd9ae695" I0114 00:38:59.299222 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000001/status 200 OK in 12 milliseconds I0114 00:38:59.299470 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="d5b75770-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=12589312 I0114 00:39:10.963779 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:39:10.963796 1 utils.go:79] GRPC request: {} I0114 00:39:10.963829 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:39:27.050826 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:39:27.050880 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="cfde5c7e-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-73d8a511-be27-412a-8dfd-7b6d222fb4ef" "latency"=37574561455 I0114 00:39:27.050907 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="cfde5c7e-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=37574681055 I0114 00:39:27.794106 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities I0114 00:39:27.794131 1 utils.go:79] GRPC request: {} I0114 00:39:27.794167 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} I0114 00:39:27.794897 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities ... skipping 11 lines ... 
I0114 00:39:28.816371 1 utils.go:78] GRPC call: /csi.v1.Node/NodePublishVolume I0114 00:39:28.816384 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"3"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-73d8a511-be27-412a-8dfd-7b6d222fb4ef","target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvc-73d8a511-be27-412a-8dfd-7b6d222fb4ef/fbd68e70-4867-4f34-9099-81e61244d6a8","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-73d8a511-be27-412a-8dfd-7b6d222fb4ef","csi.storage.k8s.io/pvc/name":"test.csi.azure.com4gsbh","csi.storage.k8s.io/pvc/namespace":"multivolume-7841","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-73d8a511-be27-412a-8dfd-7b6d222fb4ef"} I0114 00:39:29.199340 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:39:29.199358 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:39:29.199383 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:39:29.199403 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:39:29.199452 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:39:29.261660 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:39:29.286911 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:39:29Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="e798fc3f-93a3-11ed-88b1-6045bd9ae695" I0114 00:39:29.293626 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000001/status 200 OK in 6 milliseconds I0114 00:39:29.293772 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="e798fc3f-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=6894314 I0114 00:39:30.650005 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun3 by sdc under /dev/disk/azure/scsi1/ I0114 00:39:30.650049 1 nodeserver_v2.go:330] NodePublishVolume [block]: found device path /dev/disk/azure/scsi1/lun3 with LUN 3 ... skipping 7 lines ... 
I0114 00:39:40.963653 1 utils.go:79] GRPC request: {} I0114 00:39:40.963698 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:39:59.199457 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:39:59.199500 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:39:59.199523 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:39:59.199509 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:39:59.199638 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:39:59.261743 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:39:59.286049 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:39:59Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="f97a7d9d-93a3-11ed-88b1-6045bd9ae695" I0114 00:39:59.297877 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000001/status 200 OK in 11 milliseconds I0114 00:39:59.298164 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="f97a7d9d-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=12133210 I0114 00:40:02.589724 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:40:02.589777 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="e6b57275-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-aa7e9178-fa6d-43bc-b484-c2689dd37b80" "latency"=34793998071 I0114 00:40:02.589804 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="e6b57275-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=34794116571 I0114 00:40:03.185087 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdd under /dev/disk/azure/scsi1/ I0114 00:40:03.185129 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType I0114 00:40:03.185142 1 utils.go:85] GRPC response: {} I0114 00:40:03.194712 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities ... skipping 52 lines ... 
I0114 00:40:12.353130 1 utils.go:79] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-aa7e9178-fa6d-43bc-b484-c2689dd37b80","volume_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/pvc-aa7e9178-fa6d-43bc-b484-c2689dd37b80/dev/a03fb5aa-fa54-4a01-bd97-23e3ae245174"} I0114 00:40:12.353725 1 utils.go:85] GRPC response: {"usage":[{"total":5368709120,"unit":1}]} I0114 00:40:29.200181 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:40:29.200214 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:40:29.200247 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:40:29.200267 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:40:29.200274 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:40:29.200291 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:40:29.200333 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:40:29.262660 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:40:29.286902 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:40:29Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="0b5c41d6-93a4-11ed-88b1-6045bd9ae695" I0114 00:40:29.298938 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000001/status 200 OK in 11 milliseconds I0114 00:40:29.299419 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="0b5c41d6-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=12487911 I0114 00:40:33.237165 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:40:33.237227 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="fc49dd57-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-9f868a85-8d57-494b-83c6-fd4f85bda9ae" "latency"=29236632765 I0114 00:40:33.237248 1 crdprovisioner.go:743] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="fc49dd57-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=29236760065 I0114 00:40:33.241152 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:40:33.241192 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="fc1bba38-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-dd2c98f0-5dd0-481d-a406-3063301c425a" "latency"=29542951159 I0114 00:40:33.241210 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="fc1bba38-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=29543071359 I0114 00:40:33.241671 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:40:33.241733 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="fc3a5200-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-521c9e1f-bfc2-43a6-9700-bdcd32978e13" "latency"=29343017567 I0114 00:40:33.241769 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="fc3a5200-93a3-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=29343128167 I0114 00:40:33.832789 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun2 by sde under /dev/disk/azure/scsi1/ I0114 00:40:33.832829 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun2. perfProfile none accountType I0114 00:40:33.832853 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun2 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9f868a85-8d57-494b-83c6-fd4f85bda9ae/globalmount with mount options([nouuid]) I0114 00:40:33.832863 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun2" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun2]) ... skipping 178 lines ... 
I0114 00:41:04.246445 1 utils.go:78] GRPC call: /csi.v1.Node/NodeStageVolume I0114 00:41:04.246457 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"5"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-0078e346-a15a-4d5d-9ec5-1f59442a0582/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-0078e346-a15a-4d5d-9ec5-1f59442a0582","csi.storage.k8s.io/pvc/name":"test.csi.azure.comtnpft","csi.storage.k8s.io/pvc/namespace":"multivolume-7188","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-0078e346-a15a-4d5d-9ec5-1f59442a0582"} I0114 00:41:04.246615 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-0078e346-a15a-4d5d-9ec5-1f59442a0582-k8s-agentpool1-35908214-vmss000001-attachment) I0114 00:41:10.964449 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:41:10.964470 1 utils.go:79] GRPC request: {} I0114 00:41:10.964531 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:41:21.307524 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:41:21.307582 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="2032a573-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-a75014df-ecdd-4aa9-9096-226858f9a338" "latency"=17061165794 I0114 00:41:21.307615 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="2032a573-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=17061290894 I0114 00:41:21.311578 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:41:21.311641 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="2032b055-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-0078e346-a15a-4d5d-9ec5-1f59442a0582" "latency"=17064966190 I0114 00:41:21.311667 1 crdprovisioner.go:743] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="2032b055-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=17065064090 I0114 00:41:21.895512 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun5 by sdc under /dev/disk/azure/scsi1/ I0114 00:41:21.895551 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun5. perfProfile none accountType I0114 00:41:21.895576 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun5 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-0078e346-a15a-4d5d-9ec5-1f59442a0582/globalmount with mount options([]) I0114 00:41:21.895584 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun5" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun5]) ... skipping 201 lines ... I0114 00:41:46.004314 1 mount_linux.go:183] Mounting cmd (mount) with arguments ( -o bind,remount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-0078e346-a15a-4d5d-9ec5-1f59442a0582/globalmount /var/lib/kubelet/pods/7b16c0f6-9434-47d1-9b7d-653ec2d1f7e7/volumes/kubernetes.io~csi/pvc-0078e346-a15a-4d5d-9ec5-1f59442a0582/mount) I0114 00:41:46.004674 1 mount_linux.go:183] Mounting cmd (mount) with arguments ( -o bind,remount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a75014df-ecdd-4aa9-9096-226858f9a338/globalmount /var/lib/kubelet/pods/7b16c0f6-9434-47d1-9b7d-653ec2d1f7e7/volumes/kubernetes.io~csi/pvc-a75014df-ecdd-4aa9-9096-226858f9a338/mount) I0114 00:41:46.009151 1 nodeserver_v2.go:353] NodePublishVolume: mount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-0078e346-a15a-4d5d-9ec5-1f59442a0582/globalmount at /var/lib/kubelet/pods/7b16c0f6-9434-47d1-9b7d-653ec2d1f7e7/volumes/kubernetes.io~csi/pvc-0078e346-a15a-4d5d-9ec5-1f59442a0582/mount successfully I0114 00:41:46.009166 1 utils.go:85] GRPC response: {} I0114 00:41:46.009485 1 nodeserver_v2.go:353] NodePublishVolume: mount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a75014df-ecdd-4aa9-9096-226858f9a338/globalmount at /var/lib/kubelet/pods/7b16c0f6-9434-47d1-9b7d-653ec2d1f7e7/volumes/kubernetes.io~csi/pvc-a75014df-ecdd-4aa9-9096-226858f9a338/mount successfully I0114 00:41:46.009500 1 utils.go:85] GRPC response: {} I0114 00:41:47.822490 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:41:47.822545 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="328819b1-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-06a9efdf-74bb-40b4-8457-5bd3713eeac8" "latency"=12817122053 I0114 00:41:47.822565 1 crdprovisioner.go:743] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="328819b1-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=12817226653 I0114 00:41:48.409131 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun1 by sdf under /dev/disk/azure/scsi1/ I0114 00:41:48.409181 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun1. perfProfile none accountType I0114 00:41:48.409225 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun1 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-06a9efdf-74bb-40b4-8457-5bd3713eeac8/globalmount with mount options([]) I0114 00:41:48.409238 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun1" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun1]) ... skipping 160 lines ... I0114 00:42:44.422104 1 nodeserver_v2.go:262] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-aa7e9178-fa6d-43bc-b484-c2689dd37b80 successfully I0114 00:42:44.422120 1 utils.go:85] GRPC response: {} I0114 00:42:59.206570 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:42:59.206591 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:42:59.206597 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:42:59.206579 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:42:59.206718 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:42:59.265968 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:42:59.286377 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:42:59Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="64c45bd6-93a4-11ed-88b1-6045bd9ae695" I0114 00:42:59.292108 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000001/status 200 OK in 5 milliseconds I0114 00:42:59.292257 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="64c45bd6-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=5916802 I0114 00:43:01.871838 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:43:01.871915 1 conditionwaiter.go:50] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="5a59fb79-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-b18f6830-2625-4a08-94c0-0f16c7005d13" "latency"=20059856310 I0114 00:43:01.871947 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="5a59fb79-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=20059977210 I0114 00:43:02.467353 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun1 by sdc under /dev/disk/azure/scsi1/ I0114 00:43:02.467397 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun1. perfProfile none accountType I0114 00:43:02.467420 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun1 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-b18f6830-2625-4a08-94c0-0f16c7005d13/globalmount with mount options([]) I0114 00:43:02.467431 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun1" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun1]) ... skipping 49 lines ... I0114 00:43:14.177437 1 round_trippers.go:553] GET https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/azvolumes?allowWatchBookmarks=true&resourceVersion=16531&timeout=9m53s&timeoutSeconds=593&watch=true 200 OK in 4 milliseconds I0114 00:43:24.172628 1 reflector.go:559] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: Watch close - *v1beta2.AzDriverNode total 82 items received I0114 00:43:24.177680 1 round_trippers.go:553] GET https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/azdrivernodes?allowWatchBookmarks=true&resourceVersion=16515&timeout=6m38s&timeoutSeconds=398&watch=true 200 OK in 4 milliseconds I0114 00:43:29.207877 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:43:29.207979 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:43:29.207995 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:43:29.208032 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:43:29.208053 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:43:29.208055 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:43:29.266222 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:43:29.286498 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:43:29Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="76a6041b-93a4-11ed-88b1-6045bd9ae695" I0114 00:43:29.297502 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000001/status 200 OK in 10 milliseconds I0114 00:43:29.297942 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="76a6041b-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=11434608 I0114 00:43:40.963585 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:43:40.963609 1 utils.go:79] GRPC request: {} I0114 00:43:40.963659 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:43:52.579265 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:43:52.579329 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="6995ed87-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-01ef8ef6-dbaa-4b8a-ac0a-8ae2d53871a1" "latency"=45208597975 I0114 00:43:52.579365 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="6995ed87-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=45208707275 I0114 00:43:52.585693 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:43:52.585757 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="6995d56b-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-e4261bbb-6b39-482e-9f8c-7dc434047e5e" "latency"=45215632180 I0114 00:43:52.585789 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="6995d56b-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=45215752980 I0114 00:43:53.178947 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun2 by sdd under /dev/disk/azure/scsi1/ I0114 00:43:53.178948 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun3 by sde under /dev/disk/azure/scsi1/ I0114 00:43:53.178987 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun2. 
perfProfile none accountType I0114 00:43:53.179012 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun3. perfProfile none accountType ... skipping 50 lines ... I0114 00:43:54.978770 1 utils.go:79] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-e4261bbb-6b39-482e-9f8c-7dc434047e5e","volume_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/pvc-e4261bbb-6b39-482e-9f8c-7dc434047e5e/dev/ca06dff5-e334-48b6-a970-45339d4ac5f6"} I0114 00:43:54.979266 1 utils.go:85] GRPC response: {"usage":[{"total":5368709120,"unit":1}]} I0114 00:43:59.208586 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:43:59.208611 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:43:59.208657 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:43:59.208663 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:43:59.208769 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:43:59.266979 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:43:59.286188 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:43:59Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="88879aa1-93a4-11ed-88b1-6045bd9ae695" I0114 00:43:59.297261 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000001/status 200 OK in 10 milliseconds I0114 00:43:59.297433 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="88879aa1-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=11272608 I0114 00:44:07.891417 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:44:07.891477 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="858d0ee5-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-ce7de917-418e-4bae-a13e-057f09eda6e9" "latency"=13602649639 I0114 00:44:07.891504 1 crdprovisioner.go:743] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="858d0ee5-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=13602775439 I0114 00:44:08.478331 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdf under /dev/disk/azure/scsi1/ I0114 00:44:08.478380 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType I0114 00:44:08.478418 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ce7de917-418e-4bae-a13e-057f09eda6e9/globalmount with mount options([]) I0114 00:44:08.478435 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) ... skipping 74 lines ... I0114 00:44:19.045627 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities I0114 00:44:19.045645 1 utils.go:79] GRPC request: {} I0114 00:44:19.045688 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} I0114 00:44:19.046309 1 utils.go:78] GRPC call: /csi.v1.Node/NodeStageVolume I0114 00:44:19.046323 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"4"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62603cfb-1cbe-48f0-9a30-83b88d7ae82c/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-62603cfb-1cbe-48f0-9a30-83b88d7ae82c","csi.storage.k8s.io/pvc/name":"test.csi.azure.com9chsj","csi.storage.k8s.io/pvc/namespace":"multivolume-6527","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-62603cfb-1cbe-48f0-9a30-83b88d7ae82c"} I0114 00:44:19.046501 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-62603cfb-1cbe-48f0-9a30-83b88d7ae82c-k8s-agentpool1-35908214-vmss000001-attachment) I0114 00:44:23.135096 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:44:23.135169 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="944ecaf4-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-62603cfb-1cbe-48f0-9a30-83b88d7ae82c" "latency"=4088601957 I0114 00:44:23.135199 1 crdprovisioner.go:743] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="944ecaf4-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=4088705257 I0114 00:44:23.725186 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun4 by sdg under /dev/disk/azure/scsi1/ I0114 00:44:23.725237 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun4. perfProfile none accountType I0114 00:44:23.725272 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun4 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-62603cfb-1cbe-48f0-9a30-83b88d7ae82c/globalmount with mount options([]) I0114 00:44:23.725290 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun4" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun4]) ... skipping 126 lines ... I0114 00:44:52.162102 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-368b8e73-ac92-4fff-9040-5c9ce4f1b90a/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-368b8e73-ac92-4fff-9040-5c9ce4f1b90a","csi.storage.k8s.io/pvc/name":"test.csi.azure.com42dbv","csi.storage.k8s.io/pvc/namespace":"multivolume-9207","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-368b8e73-ac92-4fff-9040-5c9ce4f1b90a"} I0114 00:44:52.162280 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-368b8e73-ac92-4fff-9040-5c9ce4f1b90a-k8s-agentpool1-35908214-vmss000001-attachment) I0114 00:44:59.209708 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:44:59.209854 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:44:59.209870 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:44:59.209947 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:44:59.209949 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:44:59.268047 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:44:59.286262 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:44:59Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="ac4ae401-93a4-11ed-88b1-6045bd9ae695" I0114 00:44:59.292403 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000001/status 200 OK in 5 milliseconds I0114 00:44:59.292640 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="ac4ae401-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=6385005 I0114 00:45:10.964234 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:45:10.964252 1 utils.go:79] GRPC request: {} I0114 00:45:10.964288 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:45:18.188457 1 reflector.go:559] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.Node total 146 items received I0114 00:45:18.190478 1 round_trippers.go:553] GET https://10.0.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=17815&timeout=8m45s&timeoutSeconds=525&watch=true 200 OK in 1 milliseconds I0114 00:45:29.195800 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:45:29.195854 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="a80bdbcf-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-368b8e73-ac92-4fff-9040-5c9ce4f1b90a" "latency"=37033498039 I0114 00:45:29.195881 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="a80bdbcf-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=37033618039 I0114 00:45:29.210783 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:45:29.210800 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:45:29.210849 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:45:29.210862 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync ... skipping 56 lines ... 
I0114 00:45:30.109546 1 utils.go:78] GRPC call: /csi.v1.Node/NodeStageVolume I0114 00:45:30.109557 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"1"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9d2638a4-53a6-4e86-9508-2ad657aa634d/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-9d2638a4-53a6-4e86-9508-2ad657aa634d","csi.storage.k8s.io/pvc/name":"test.csi.azure.coml9xqd","csi.storage.k8s.io/pvc/namespace":"multivolume-2726","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-9d2638a4-53a6-4e86-9508-2ad657aa634d"} I0114 00:45:30.109725 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-9d2638a4-53a6-4e86-9508-2ad657aa634d-k8s-agentpool1-35908214-vmss000001-attachment) I0114 00:45:40.964741 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:45:40.964764 1 utils.go:79] GRPC request: {} I0114 00:45:40.964810 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:45:49.835463 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:45:49.835511 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="beaa2dd0-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-9d2638a4-53a6-4e86-9508-2ad657aa634d" "latency"=19725736746 I0114 00:45:49.835532 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="beaa2dd0-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=19725830946 I0114 00:45:49.843350 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:45:49.843414 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="be9b9bf6-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-9e8d313c-fe9c-4939-bda7-b324762d9bf7" "latency"=19829109411 I0114 00:45:49.843453 1 crdprovisioner.go:743] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="be9b9bf6-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=19829228611 I0114 00:45:50.421160 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun2 by sde under /dev/disk/azure/scsi1/ I0114 00:45:50.421196 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun2. perfProfile none accountType I0114 00:45:50.421221 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun2 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9e8d313c-fe9c-4939-bda7-b324762d9bf7/globalmount with mount options([]) I0114 00:45:50.421230 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun2" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun2]) ... skipping 82 lines ... I0114 00:45:54.980282 1 utils.go:79] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-9d2638a4-53a6-4e86-9508-2ad657aa634d","volume_path":"/var/lib/kubelet/pods/d20217ad-ca32-43c8-8652-28a46ba4b171/volumes/kubernetes.io~csi/pvc-9d2638a4-53a6-4e86-9508-2ad657aa634d/mount"} I0114 00:45:54.980345 1 utils.go:85] GRPC response: {"usage":[{"available":5179580416,"total":5196382208,"unit":1,"used":24576},{"available":327669,"total":327680,"unit":2,"used":11}]} I0114 00:45:59.212557 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:45:59.212583 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:45:59.212602 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:45:59.212572 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:45:59.212729 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:45:59.268827 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:45:59.286058 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:45:59Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="d00e1d4e-93a4-11ed-88b1-6045bd9ae695" I0114 00:45:59.299648 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000001/status 200 OK in 13 milliseconds I0114 00:45:59.299825 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="d00e1d4e-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=13914113 I0114 00:46:00.157265 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:46:00.157333 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="cacd442f-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-1e846a2b-c218-42c5-ae8f-a3f00090de5c" "latency"=9684945600 I0114 00:46:00.157357 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="cacd442f-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=9685046501 I0114 00:46:00.765128 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun3 by sdf under /dev/disk/azure/scsi1/ I0114 00:46:00.765182 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun3. perfProfile none accountType I0114 00:46:00.765215 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun3 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-1e846a2b-c218-42c5-ae8f-a3f00090de5c/globalmount with mount options([]) I0114 00:46:00.765228 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun3" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun3]) ... skipping 195 lines ... 
I0114 00:46:12.566760 1 mount_linux.go:183] Mounting cmd (mount) with arguments ( -o bind /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9e8d313c-fe9c-4939-bda7-b324762d9bf7/globalmount /var/lib/kubelet/pods/3ca60294-b144-44ff-912d-178b54b2c1fa/volumes/kubernetes.io~csi/pvc-9e8d313c-fe9c-4939-bda7-b324762d9bf7/mount) I0114 00:46:12.567757 1 mount_linux.go:183] Mounting cmd (mount) with arguments ( -o bind,remount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9e8d313c-fe9c-4939-bda7-b324762d9bf7/globalmount /var/lib/kubelet/pods/3ca60294-b144-44ff-912d-178b54b2c1fa/volumes/kubernetes.io~csi/pvc-9e8d313c-fe9c-4939-bda7-b324762d9bf7/mount) I0114 00:46:12.568704 1 nodeserver_v2.go:353] NodePublishVolume: mount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9e8d313c-fe9c-4939-bda7-b324762d9bf7/globalmount at /var/lib/kubelet/pods/3ca60294-b144-44ff-912d-178b54b2c1fa/volumes/kubernetes.io~csi/pvc-9e8d313c-fe9c-4939-bda7-b324762d9bf7/mount successfully I0114 00:46:12.568729 1 utils.go:85] GRPC response: {} I0114 00:46:15.185337 1 reflector.go:559] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: Watch close - *v1.CustomResourceDefinition total 2 items received I0114 00:46:15.187208 1 round_trippers.go:553] GET https://10.0.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions?allowWatchBookmarks=true&resourceVersion=17898&timeout=5m41s&timeoutSeconds=341&watch=true 200 OK in 1 milliseconds I0114 00:46:20.567848 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:46:20.567917 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="d16714af-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-b4e8e259-a4a0-4891-ba34-ae643c46a446" "latency"=19021142650 I0114 00:46:20.567936 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="d16714af-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=19021260350 I0114 00:46:21.154918 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun4 by sdg under /dev/disk/azure/scsi1/ I0114 00:46:21.154958 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun4. perfProfile none accountType I0114 00:46:21.154969 1 utils.go:85] GRPC response: {} I0114 00:46:21.164293 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities ... skipping 122 lines ... 
I0114 00:46:42.438778 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"5"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-065ea936-94e0-4993-849c-bb4cad743e63/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-065ea936-94e0-4993-849c-bb4cad743e63","csi.storage.k8s.io/pvc/name":"restored-pvc-tester-47c76-my-volume","csi.storage.k8s.io/pvc/namespace":"snapshotting-2467","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-065ea936-94e0-4993-849c-bb4cad743e63"} I0114 00:46:42.438931 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-065ea936-94e0-4993-849c-bb4cad743e63-k8s-agentpool1-35908214-vmss000001-attachment) I0114 00:46:59.217287 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:46:59.217365 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:46:59.217482 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:46:59.217499 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:46:59.217485 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:46:59.269858 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:46:59.286082 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:46:59Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="f3d168f4-93a4-11ed-88b1-6045bd9ae695" I0114 00:46:59.296087 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000001/status 200 OK in 9 milliseconds I0114 00:46:59.296751 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="f3d168f4-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=10680204 I0114 00:47:10.963481 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:47:10.963502 1 utils.go:79] GRPC request: {} I0114 00:47:10.963535 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:47:21.355254 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:47:21.355324 1 conditionwaiter.go:50] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="e9c6bd79-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-065ea936-94e0-4993-849c-bb4cad743e63" "latency"=38916328846 I0114 00:47:21.355351 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="e9c6bd79-93a4-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=38916437746 I0114 00:47:21.925131 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun5 by sdc under /dev/disk/azure/scsi1/ I0114 00:47:21.925177 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun5. perfProfile none accountType I0114 00:47:21.925212 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun5 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-065ea936-94e0-4993-849c-bb4cad743e63/globalmount with mount options([]) I0114 00:47:21.925224 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun5" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun5]) ... skipping 44 lines ... I0114 00:47:33.777239 1 utils.go:78] GRPC call: /csi.v1.Node/NodeStageVolume I0114 00:47:33.777256 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-120ff94e-fd53-4207-ade5-9aa941e51920/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-120ff94e-fd53-4207-ade5-9aa941e51920","csi.storage.k8s.io/pvc/name":"test.csi.azure.comnh59t","csi.storage.k8s.io/pvc/namespace":"volumeio-1129","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-120ff94e-fd53-4207-ade5-9aa941e51920"} I0114 00:47:33.777471 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-120ff94e-fd53-4207-ade5-9aa941e51920-k8s-agentpool1-35908214-vmss000001-attachment) I0114 00:47:40.964607 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:47:40.964631 1 utils.go:79] GRPC request: {} I0114 00:47:40.964687 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:47:51.979732 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:47:51.979796 1 conditionwaiter.go:50] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="086060b0-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-120ff94e-fd53-4207-ade5-9aa941e51920" "latency"=18202238916 I0114 00:47:51.979828 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="086060b0-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=18202387116 I0114 00:47:52.573394 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdd under /dev/disk/azure/scsi1/ I0114 00:47:52.573444 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType I0114 00:47:52.573478 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-120ff94e-fd53-4207-ade5-9aa941e51920/globalmount with mount options([]) I0114 00:47:52.573494 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) ... skipping 106 lines ... I0114 00:48:14.655863 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-9c00b7f4-b494-4a67-aded-5aca7a102c82-k8s-agentpool1-35908214-vmss000001-attachment) I0114 00:48:22.256821 1 reflector.go:559] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: Watch close - *v1beta2.AzDriverNode total 19 items received I0114 00:48:22.260453 1 round_trippers.go:553] GET https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dk8s-agentpool1-35908214-vmss000001&resourceVersion=19364&timeout=9m44s&timeoutSeconds=584&watch=true 200 OK in 3 milliseconds I0114 00:48:29.219717 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:48:29.219774 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:48:29.219793 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:48:29.219833 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:48:29.219838 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:48:29.219859 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:48:29.272067 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:48:29.286266 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:48:29Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="2976592d-93a5-11ed-88b1-6045bd9ae695" I0114 00:48:29.294528 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000001/status 200 OK in 8 milliseconds I0114 00:48:29.294699 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="2976592d-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=8445607 I0114 00:48:31.382284 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:48:31.382345 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="20bdd49f-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-d5c3b2f6-5e56-41b8-84a2-44b6cafe5a3c" "latency"=16727025246 I0114 00:48:31.382385 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="20bdd49f-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=16727172847 I0114 00:48:31.389794 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:48:31.389852 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="20bdeda6-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-9c00b7f4-b494-4a67-aded-5aca7a102c82" "latency"=16733941252 I0114 00:48:31.389878 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="20bdeda6-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=16734029552 I0114 00:48:31.973431 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun2 by sdd under /dev/disk/azure/scsi1/ I0114 00:48:31.973431 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun1 by sdc under /dev/disk/azure/scsi1/ I0114 00:48:31.973471 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun2. perfProfile none accountType I0114 00:48:31.973494 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun1. perfProfile none accountType ... skipping 81 lines ... 
I0114 00:48:32.975862 1 utils.go:78] GRPC call: /csi.v1.Node/NodeStageVolume I0114 00:48:32.975881 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"3"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-254b9713-6ec0-4c32-a336-3116fda13aa8/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-254b9713-6ec0-4c32-a336-3116fda13aa8","csi.storage.k8s.io/pvc/name":"test.csi.azure.comlf4gv","csi.storage.k8s.io/pvc/namespace":"multivolume-8546","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-254b9713-6ec0-4c32-a336-3116fda13aa8"} I0114 00:48:32.976045 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-254b9713-6ec0-4c32-a336-3116fda13aa8-k8s-agentpool1-35908214-vmss000001-attachment) I0114 00:48:40.963315 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:48:40.963333 1 utils.go:79] GRPC request: {} I0114 00:48:40.963371 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:48:41.661524 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:48:41.661575 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="2b99f456-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-eb400e99-16b7-4f41-b77d-913244f5ed29" "latency"=8786485946 I0114 00:48:41.661602 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="2b99f456-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=8786568846 I0114 00:48:41.662632 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:48:41.662688 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="2ba95db4-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-254b9713-6ec0-4c32-a336-3116fda13aa8" "latency"=8686594366 I0114 00:48:41.662719 1 crdprovisioner.go:743] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="2ba95db4-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=8686685966 I0114 00:48:42.268540 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun3 by sdf under /dev/disk/azure/scsi1/ I0114 00:48:42.268589 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun3. perfProfile none accountType I0114 00:48:42.268625 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun3 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-254b9713-6ec0-4c32-a336-3116fda13aa8/globalmount with mount options([nouuid]) I0114 00:48:42.268645 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun3" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun3]) ... skipping 239 lines ... I0114 00:48:56.340291 1 nodeserver_v2.go:262] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-eb400e99-16b7-4f41-b77d-913244f5ed29/globalmount successfully I0114 00:48:56.340308 1 utils.go:85] GRPC response: {} W0114 00:48:56.344799 1 mount_helper_common.go:133] Warning: "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-254b9713-6ec0-4c32-a336-3116fda13aa8/globalmount" is not a mountpoint, deleting I0114 00:48:56.344852 1 nodeserver_v2.go:262] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-254b9713-6ec0-4c32-a336-3116fda13aa8/globalmount successfully I0114 00:48:56.344859 1 utils.go:85] GRPC response: {} I0114 00:48:59.221229 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:48:59.221348 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:48:59.221370 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:48:59.221389 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:48:59.221401 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:48:59.272679 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:48:59.286937 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:48:59Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="3b58150d-93a5-11ed-88b1-6045bd9ae695" I0114 00:48:59.294221 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000001/status 200 OK in 7 milliseconds I0114 00:48:59.294398 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="3b58150d-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=7516006 I0114 00:49:02.171680 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:49:02.171756 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="32fb4f83-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-4f98d6a1-74bc-4dfe-a7c9-48421248334c" "latency"=16914538344 I0114 00:49:02.171791 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="32fb4f83-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=16914673044 I0114 00:49:02.749916 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun4 by sdg under /dev/disk/azure/scsi1/ I0114 00:49:02.749962 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun4. perfProfile none accountType I0114 00:49:02.749996 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun4 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4f98d6a1-74bc-4dfe-a7c9-48421248334c/globalmount with mount options([nouuid]) I0114 00:49:02.750010 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun4" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun4]) ... skipping 316 lines ... I0114 00:49:40.964218 1 utils.go:79] GRPC request: {} I0114 00:49:40.964270 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:49:59.222343 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:49:59.222372 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:49:59.222408 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:49:59.222422 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:49:59.222499 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:49:59.274644 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:49:59.286957 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:49:59Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="5f1b5ca6-93a5-11ed-88b1-6045bd9ae695" I0114 00:49:59.297035 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000001/status 200 OK in 9 milliseconds I0114 00:49:59.297210 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="5f1b5ca6-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=10279304 I0114 00:50:02.178196 1 reflector.go:559] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: Watch close - *v1beta2.AzDriverNode total 63 items received I0114 00:50:02.180798 1 round_trippers.go:553] GET https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/azdrivernodes?allowWatchBookmarks=true&resourceVersion=20666&timeout=5m3s&timeoutSeconds=303&watch=true 200 OK in 2 milliseconds I0114 00:50:10.963888 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:50:10.963908 1 utils.go:79] GRPC request: {} I0114 00:50:10.963952 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:50:12.409626 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:50:12.409679 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="5157eae3-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-c4c51449-9681-41a4-a0a6-00c12629439e" "latency"=36213912675 I0114 00:50:12.409708 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="5157eae3-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=36214032575 I0114 00:50:12.739319 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities I0114 00:50:12.739336 1 utils.go:79] GRPC request: {} I0114 00:50:12.739369 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} I0114 00:50:12.741978 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities ... skipping 49 lines ... 
I0114 00:50:13.742089 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"1"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f7aa9d47-cc4e-4952-9bc9-964f9f4712ae/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-f7aa9d47-cc4e-4952-9bc9-964f9f4712ae","csi.storage.k8s.io/pvc/name":"test.csi.azure.com4w55x","csi.storage.k8s.io/pvc/namespace":"multivolume-6557","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-f7aa9d47-cc4e-4952-9bc9-964f9f4712ae"} I0114 00:50:13.742300 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-f7aa9d47-cc4e-4952-9bc9-964f9f4712ae-k8s-agentpool1-35908214-vmss000001-attachment) I0114 00:50:29.223460 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:50:29.223479 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:50:29.223502 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:50:29.223522 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:50:29.223571 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:50:29.223602 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:50:29.274784 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:50:29.286129 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:50:29Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="70fcdfd5-93a5-11ed-88b1-6045bd9ae695" I0114 00:50:29.290933 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000001/status 200 OK in 4 milliseconds I0114 00:50:29.291619 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="70fcdfd5-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=5505107 I0114 00:50:38.743770 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:50:38.743846 1 conditionwaiter.go:50] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="6720ac91-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-393a9fa0-9970-4e65-9637-01019d99a6a8" "latency"=26000264171 I0114 00:50:38.743882 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="6720ac91-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=26000378271 I0114 00:50:38.749456 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:50:38.749513 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="67b91298-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-f7aa9d47-cc4e-4952-9bc9-964f9f4712ae" "latency"=25007154404 I0114 00:50:38.749554 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="67b91298-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=25007281004 I0114 00:50:39.353472 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun2 by sdd under /dev/disk/azure/scsi1/ I0114 00:50:39.353514 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun2. perfProfile none accountType I0114 00:50:39.353539 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun2 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-393a9fa0-9970-4e65-9637-01019d99a6a8/globalmount with mount options([]) I0114 00:50:39.353547 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun2" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun2]) ... skipping 127 lines ... 
I0114 00:50:58.327180 1 utils.go:79] GRPC request: {} I0114 00:50:58.327214 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} I0114 00:50:58.327921 1 utils.go:78] GRPC call: /csi.v1.Node/NodeStageVolume I0114 00:50:58.327936 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-78242b98-9f47-4927-ae20-76b7afe7b154/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-78242b98-9f47-4927-ae20-76b7afe7b154","csi.storage.k8s.io/pvc/name":"test.csi.azure.com7tq68","csi.storage.k8s.io/pvc/namespace":"multivolume-1180","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-78242b98-9f47-4927-ae20-76b7afe7b154"} I0114 00:50:58.328099 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-78242b98-9f47-4927-ae20-76b7afe7b154-k8s-agentpool1-35908214-vmss000001-attachment) I0114 00:50:59.223900 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:50:59.224008 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:50:59.224023 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:50:59.224055 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:50:59.224085 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:50:59.275346 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:50:59.286607 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:50:59Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="82de948d-93a5-11ed-88b1-6045bd9ae695" I0114 00:50:59.295370 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000001/status 200 OK in 8 milliseconds ... skipping 2 lines ... 
I0114 00:51:10.964399 1 utils.go:79] GRPC request: {} I0114 00:51:10.964434 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:51:29.224475 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:51:29.224510 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:51:29.224528 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:51:29.224547 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:51:29.224583 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:51:29.275729 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:51:29.286955 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:51:29Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="94c045c1-93a5-11ed-88b1-6045bd9ae695" I0114 00:51:29.290994 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000001/status 200 OK in 3 milliseconds I0114 00:51:29.291648 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="94c045c1-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=4718204 I0114 00:51:34.972317 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:51:34.972368 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="824c544d-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-78242b98-9f47-4927-ae20-76b7afe7b154" "latency"=36644206150 I0114 00:51:34.972395 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="824c544d-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=36644309450 I0114 00:51:36.690941 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0114 00:51:36.690984 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. 
perfProfile none accountType I0114 00:51:36.691010 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-78242b98-9f47-4927-ae20-76b7afe7b154/globalmount with mount options([nouuid]) I0114 00:51:36.691020 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) ... skipping 68 lines ... I0114 00:51:44.016700 1 nodeserver_v2.go:257] NodeUnstageVolume: unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-78242b98-9f47-4927-ae20-76b7afe7b154/globalmount I0114 00:51:44.016743 1 mount_helper_common.go:99] "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-78242b98-9f47-4927-ae20-76b7afe7b154/globalmount" is a mountpoint, unmounting I0114 00:51:44.016750 1 mount_linux.go:294] Unmounting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-78242b98-9f47-4927-ae20-76b7afe7b154/globalmount W0114 00:51:44.026954 1 mount_helper_common.go:133] Warning: "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-78242b98-9f47-4927-ae20-76b7afe7b154/globalmount" is not a mountpoint, deleting I0114 00:51:44.027021 1 nodeserver_v2.go:262] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-78242b98-9f47-4927-ae20-76b7afe7b154/globalmount successfully I0114 00:51:44.027033 1 utils.go:85] GRPC response: {} I0114 00:51:45.277389 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:51:45.277441 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="9ac5dccc-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-5c17e6ab-179f-49c7-881c-3ff848868108" "latency"=5887478728 I0114 00:51:45.277471 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="9ac5dccc-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=5887581528 I0114 00:51:45.277544 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:51:45.277606 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="9ac5c54c-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-e28181ae-9599-4968-91ef-169281111b28" "latency"=5888241228 I0114 00:51:45.277626 1 crdprovisioner.go:743] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="9ac5c54c-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=5888347428 I0114 00:51:45.855700 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun1 by sdd under /dev/disk/azure/scsi1/ I0114 00:51:45.855756 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun1. perfProfile none accountType I0114 00:51:45.855794 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun1 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e28181ae-9599-4968-91ef-169281111b28/globalmount with mount options([]) I0114 00:51:45.855812 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun1" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun1]) ... skipping 88 lines ... I0114 00:52:03.294062 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities I0114 00:52:03.294077 1 utils.go:79] GRPC request: {} I0114 00:52:03.294112 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} I0114 00:52:03.294653 1 utils.go:78] GRPC call: /csi.v1.Node/NodeStageVolume I0114 00:52:03.294667 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-78242b98-9f47-4927-ae20-76b7afe7b154/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-78242b98-9f47-4927-ae20-76b7afe7b154","csi.storage.k8s.io/pvc/name":"test.csi.azure.com7tq68","csi.storage.k8s.io/pvc/namespace":"multivolume-1180","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-78242b98-9f47-4927-ae20-76b7afe7b154"} I0114 00:52:03.294829 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-78242b98-9f47-4927-ae20-76b7afe7b154-k8s-agentpool1-35908214-vmss000001-attachment) I0114 00:52:08.304671 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:52:08.304733 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="a9057697-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-78242b98-9f47-4927-ae20-76b7afe7b154" "latency"=5009843114 I0114 00:52:08.304765 1 crdprovisioner.go:743] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="a9057697-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=5009959914 I0114 00:52:08.890343 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0114 00:52:08.890391 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType I0114 00:52:08.890416 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-78242b98-9f47-4927-ae20-76b7afe7b154/globalmount with mount options([nouuid]) I0114 00:52:08.890429 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) ... skipping 75 lines ... I0114 00:52:21.098918 1 nodeserver_v2.go:262] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-5c17e6ab-179f-49c7-881c-3ff848868108/globalmount successfully I0114 00:52:21.098935 1 utils.go:85] GRPC response: {} W0114 00:52:21.102109 1 mount_helper_common.go:133] Warning: "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e28181ae-9599-4968-91ef-169281111b28/globalmount" is not a mountpoint, deleting I0114 00:52:21.102152 1 nodeserver_v2.go:262] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e28181ae-9599-4968-91ef-169281111b28/globalmount successfully I0114 00:52:21.102167 1 utils.go:85] GRPC response: {} I0114 00:52:29.228439 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:52:29.228574 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:52:29.228584 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:52:29.228601 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:52:29.228617 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:52:29.276927 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:52:29.286177 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:52:29Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="b8836d36-93a5-11ed-88b1-6045bd9ae695" I0114 00:52:29.292842 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000001/status 200 OK in 6 milliseconds I0114 00:52:29.293082 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="b8836d36-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=6935405 I0114 00:52:40.964333 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:52:40.964354 1 utils.go:79] GRPC request: {} I0114 00:52:40.964400 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:52:53.871861 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:52:53.871919 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="b355894f-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-8a67ad98-503f-4dd6-881a-623d48245276" "latency"=33275041829 I0114 00:52:53.871947 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="b355894f-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=33275163529 I0114 00:52:54.451115 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun3 by sdf under /dev/disk/azure/scsi1/ I0114 00:52:54.451161 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun3. perfProfile none accountType I0114 00:52:54.451191 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun3 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-8a67ad98-503f-4dd6-881a-623d48245276/globalmount with mount options([nouuid]) I0114 00:52:54.451203 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun3" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun3]) ... skipping 99 lines ... 
I0114 00:53:53.663480 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-0ca5d71c-d933-4ccf-b36b-0a9e6e61961b/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-0ca5d71c-d933-4ccf-b36b-0a9e6e61961b","csi.storage.k8s.io/pvc/name":"test.csi.azure.com9zwvc","csi.storage.k8s.io/pvc/namespace":"multivolume-4468","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-0ca5d71c-d933-4ccf-b36b-0a9e6e61961b"} I0114 00:53:53.663618 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-0ca5d71c-d933-4ccf-b36b-0a9e6e61961b-k8s-agentpool1-35908214-vmss000001-attachment) I0114 00:53:59.236536 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:53:59.236555 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:53:59.236585 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:53:59.236604 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:53:59.236668 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:53:59.279779 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:53:59.286168 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:53:59Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="ee28553c-93a5-11ed-88b1-6045bd9ae695" I0114 00:53:59.293815 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000001/status 200 OK in 7 milliseconds I0114 00:53:59.293962 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="ee28553c-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=7842008 I0114 00:54:03.191458 1 reflector.go:559] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.Node total 125 items received I0114 00:54:03.194642 1 round_trippers.go:553] GET https://10.0.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=22772&timeout=8m22s&timeoutSeconds=502&watch=true 200 OK in 3 milliseconds I0114 00:54:03.568111 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:54:03.568168 1 conditionwaiter.go:50] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="eace6808-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-0ca5d71c-d933-4ccf-b36b-0a9e6e61961b" "latency"=9904480479 I0114 00:54:03.568198 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="eace6808-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=9904591679 I0114 00:54:04.027312 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities I0114 00:54:04.027332 1 utils.go:79] GRPC request: {} I0114 00:54:04.027373 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} I0114 00:54:04.030426 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities ... skipping 37 lines ... I0114 00:54:04.230378 1 nodeserver_v2.go:353] NodePublishVolume: mount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-0ca5d71c-d933-4ccf-b36b-0a9e6e61961b/globalmount at /var/lib/kubelet/pods/5e62391f-f0d0-4b40-9396-a7d8138df6d6/volumes/kubernetes.io~csi/pvc-0ca5d71c-d933-4ccf-b36b-0a9e6e61961b/mount successfully I0114 00:54:04.230401 1 utils.go:85] GRPC response: {} I0114 00:54:10.963868 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:54:10.963881 1 utils.go:79] GRPC request: {} I0114 00:54:10.963911 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:54:29.238621 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:54:29.238698 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:54:29.238750 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:54:29.238774 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:54:29.238763 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:54:29.279839 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:54:29.286036 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:54:29Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="0009f3fd-93a6-11ed-88b1-6045bd9ae695" I0114 00:54:29.293415 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000001/status 200 OK in 7 milliseconds I0114 00:54:29.293583 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="0009f3fd-93a6-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=7572707 I0114 00:54:39.131084 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:54:39.131150 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="f0fc79eb-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-401e0d24-b090-461d-a5f8-d163af7a428d" "latency"=35099212649 I0114 00:54:39.131184 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="f0fc79eb-93a5-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=35099323749 I0114 00:54:39.712123 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun2 by sdc under /dev/disk/azure/scsi1/ I0114 00:54:39.712179 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun2. perfProfile none accountType I0114 00:54:39.712208 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun2 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-401e0d24-b090-461d-a5f8-d163af7a428d/globalmount with mount options([]) I0114 00:54:39.712224 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun2" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun2]) ... skipping 53 lines ... 
I0114 00:54:44.715336 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"2"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-401e0d24-b090-461d-a5f8-d163af7a428d/globalmount","target_path":"/var/lib/kubelet/pods/4fa69ccd-9a99-48d2-ac2e-627bd0c103c2/volumes/kubernetes.io~csi/pvc-401e0d24-b090-461d-a5f8-d163af7a428d/mount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-401e0d24-b090-461d-a5f8-d163af7a428d","csi.storage.k8s.io/pvc/name":"test.csi.azure.comvhwrp","csi.storage.k8s.io/pvc/namespace":"multivolume-7332","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-401e0d24-b090-461d-a5f8-d163af7a428d"} I0114 00:54:44.715502 1 nodeserver_v2.go:348] NodePublishVolume: mounting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-401e0d24-b090-461d-a5f8-d163af7a428d/globalmount at /var/lib/kubelet/pods/4fa69ccd-9a99-48d2-ac2e-627bd0c103c2/volumes/kubernetes.io~csi/pvc-401e0d24-b090-461d-a5f8-d163af7a428d/mount I0114 00:54:44.715526 1 mount_linux.go:183] Mounting cmd (mount) with arguments ( -o bind /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-401e0d24-b090-461d-a5f8-d163af7a428d/globalmount /var/lib/kubelet/pods/4fa69ccd-9a99-48d2-ac2e-627bd0c103c2/volumes/kubernetes.io~csi/pvc-401e0d24-b090-461d-a5f8-d163af7a428d/mount) I0114 00:54:44.716377 1 mount_linux.go:183] Mounting cmd (mount) with arguments ( -o bind,remount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-401e0d24-b090-461d-a5f8-d163af7a428d/globalmount /var/lib/kubelet/pods/4fa69ccd-9a99-48d2-ac2e-627bd0c103c2/volumes/kubernetes.io~csi/pvc-401e0d24-b090-461d-a5f8-d163af7a428d/mount) I0114 00:54:44.717323 1 nodeserver_v2.go:353] NodePublishVolume: mount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-401e0d24-b090-461d-a5f8-d163af7a428d/globalmount at /var/lib/kubelet/pods/4fa69ccd-9a99-48d2-ac2e-627bd0c103c2/volumes/kubernetes.io~csi/pvc-401e0d24-b090-461d-a5f8-d163af7a428d/mount successfully I0114 00:54:44.717336 1 utils.go:85] GRPC response: {} I0114 00:54:49.595182 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:54:49.595239 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="065beed9-93a6-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-e747963d-24c0-4574-b61c-e3bd4c7b43bf" "latency"=9705563206 I0114 00:54:49.595266 1 crdprovisioner.go:743] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="065beed9-93a6-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=9705668806 I0114 00:54:50.193526 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun1 by sde under /dev/disk/azure/scsi1/ I0114 00:54:50.193567 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun1. perfProfile none accountType I0114 00:54:50.193593 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun1 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e747963d-24c0-4574-b61c-e3bd4c7b43bf/globalmount with mount options([]) I0114 00:54:50.193601 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun1" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun1]) ... skipping 153 lines ... I0114 00:56:17.832394 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities I0114 00:56:17.832412 1 utils.go:79] GRPC request: {} I0114 00:56:17.832441 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} I0114 00:56:17.833084 1 utils.go:78] GRPC call: /csi.v1.Node/NodeStageVolume I0114 00:56:17.833110 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ae1f1343-5942-49f2-af27-671e12479f1c/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-ae1f1343-5942-49f2-af27-671e12479f1c","csi.storage.k8s.io/pvc/name":"test.csi.azure.com8tgmb","csi.storage.k8s.io/pvc/namespace":"fsgroupchangepolicy-6095","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-ae1f1343-5942-49f2-af27-671e12479f1c"} I0114 00:56:17.833259 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-ae1f1343-5942-49f2-af27-671e12479f1c-k8s-agentpool1-35908214-vmss000001-attachment) I0114 00:56:28.704376 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:56:28.704496 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="40bcf291-93a6-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-ae1f1343-5942-49f2-af27-671e12479f1c" "latency"=10871099187 I0114 00:56:28.704531 1 crdprovisioner.go:743] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="40bcf291-93a6-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=10871285488 I0114 00:56:29.245154 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:56:29.245183 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:56:29.245195 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:56:29.245168 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync ... skipping 41 lines ... I0114 00:56:36.052868 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities I0114 00:56:36.052888 1 utils.go:79] GRPC request: {} I0114 00:56:36.052931 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} I0114 00:56:36.053580 1 utils.go:78] GRPC call: /csi.v1.Node/NodeStageVolume I0114 00:56:36.053594 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"1"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d8b2a1dd-6a91-4cfd-8257-4ef16ac7a658/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-d8b2a1dd-6a91-4cfd-8257-4ef16ac7a658","csi.storage.k8s.io/pvc/name":"test.csi.azure.comrnhmp","csi.storage.k8s.io/pvc/namespace":"volume-4789","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-d8b2a1dd-6a91-4cfd-8257-4ef16ac7a658"} I0114 00:56:36.053778 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-d8b2a1dd-6a91-4cfd-8257-4ef16ac7a658-k8s-agentpool1-35908214-vmss000001-attachment) I0114 00:56:39.011150 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:56:39.011207 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="4b992d9f-93a6-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-d8b2a1dd-6a91-4cfd-8257-4ef16ac7a658" "latency"=2957368399 I0114 00:56:39.011234 1 crdprovisioner.go:743] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="4b992d9f-93a6-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=2957468499 I0114 00:56:39.164839 1 utils.go:78] GRPC call: /csi.v1.Node/NodeUnpublishVolume I0114 00:56:39.164861 1 utils.go:79] GRPC request: {"target_path":"/var/lib/kubelet/pods/0b9b30e1-eba6-4acd-b8bc-aa5b0a8d5529/volumes/kubernetes.io~csi/pvc-ae1f1343-5942-49f2-af27-671e12479f1c/mount","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-ae1f1343-5942-49f2-af27-671e12479f1c"} I0114 00:56:39.164916 1 nodeserver_v2.go:369] NodeUnpublishVolume: unmounting volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-ae1f1343-5942-49f2-af27-671e12479f1c on /var/lib/kubelet/pods/0b9b30e1-eba6-4acd-b8bc-aa5b0a8d5529/volumes/kubernetes.io~csi/pvc-ae1f1343-5942-49f2-af27-671e12479f1c/mount I0114 00:56:39.164953 1 mount_helper_common.go:99] "/var/lib/kubelet/pods/0b9b30e1-eba6-4acd-b8bc-aa5b0a8d5529/volumes/kubernetes.io~csi/pvc-ae1f1343-5942-49f2-af27-671e12479f1c/mount" is a mountpoint, unmounting ... skipping 53 lines ... I0114 00:56:46.118966 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities I0114 00:56:46.118978 1 utils.go:79] GRPC request: {} I0114 00:56:46.119007 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} I0114 00:56:46.119717 1 utils.go:78] GRPC call: /csi.v1.Node/NodeStageVolume I0114 00:56:46.119731 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"2"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-c21091a5-9b35-4126-b2a4-a2a843d05353","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-c21091a5-9b35-4126-b2a4-a2a843d05353","csi.storage.k8s.io/pvc/name":"test.csi.azure.comszkdb","csi.storage.k8s.io/pvc/namespace":"multivolume-9730","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-c21091a5-9b35-4126-b2a4-a2a843d05353"} I0114 00:56:46.119934 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-c21091a5-9b35-4126-b2a4-a2a843d05353-k8s-agentpool1-35908214-vmss000001-attachment) I0114 00:56:50.628402 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:56:50.628497 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="5199258d-93a6-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-c21091a5-9b35-4126-b2a4-a2a843d05353" "latency"=4508478420 I0114 00:56:50.628526 1 crdprovisioner.go:743] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="5199258d-93a6-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=4508637420 I0114 00:56:51.257952 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun2 by sde under /dev/disk/azure/scsi1/ I0114 00:56:51.257991 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun2. perfProfile none accountType I0114 00:56:51.258004 1 utils.go:85] GRPC response: {} I0114 00:56:51.263296 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities ... skipping 115 lines ... I0114 00:57:37.149611 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"1"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d66cbf0e-9885-4d5a-b6c2-807b873aaf8c/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-d66cbf0e-9885-4d5a-b6c2-807b873aaf8c","csi.storage.k8s.io/pvc/name":"test.csi.azure.com22x7c","csi.storage.k8s.io/pvc/namespace":"multivolume-6990","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-d66cbf0e-9885-4d5a-b6c2-807b873aaf8c"} I0114 00:57:37.149733 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-89f28cec-b6ae-4d85-9729-a028acff3ebc-k8s-agentpool1-35908214-vmss000001-attachment) I0114 00:57:37.149810 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-d66cbf0e-9885-4d5a-b6c2-807b873aaf8c-k8s-agentpool1-35908214-vmss000001-attachment) I0114 00:57:40.963496 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:57:40.963513 1 utils.go:79] GRPC request: {} I0114 00:57:40.963555 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:57:49.888049 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:57:49.888115 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="7003b0c1-93a6-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-d66cbf0e-9885-4d5a-b6c2-807b873aaf8c" "latency"=12738247771 I0114 00:57:49.888145 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="7003b0c1-93a6-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=12738361871 I0114 00:57:49.891700 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:57:49.891741 1 conditionwaiter.go:50] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="7003ad86-93a6-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-89f28cec-b6ae-4d85-9729-a028acff3ebc" "latency"=12741929673 I0114 00:57:49.891764 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="7003ad86-93a6-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=12742068073 I0114 00:57:50.481564 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun1 by sdc under /dev/disk/azure/scsi1/ I0114 00:57:50.481613 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun1. perfProfile none accountType I0114 00:57:50.481650 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun1 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d66cbf0e-9885-4d5a-b6c2-807b873aaf8c/globalmount with mount options([]) I0114 00:57:50.481666 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun1" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun1]) ... skipping 82 lines ... I0114 00:57:57.671569 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"3"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-12239193-1090-46a5-8054-3df9c77be9c9/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-12239193-1090-46a5-8054-3df9c77be9c9","csi.storage.k8s.io/pvc/name":"inline-volume-tester-ftdb2-my-volume-0","csi.storage.k8s.io/pvc/namespace":"ephemeral-8","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-12239193-1090-46a5-8054-3df9c77be9c9"} I0114 00:57:57.671738 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-12239193-1090-46a5-8054-3df9c77be9c9-k8s-agentpool1-35908214-vmss000001-attachment) I0114 00:57:59.247217 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:57:59.247236 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:57:59.247264 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:57:59.247222 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:57:59.247355 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:57:59.284465 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:57:59.286686 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:57:59Z" 
"ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="7d35825b-93a6-11ed-88b1-6045bd9ae695" I0114 00:57:59.292181 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000001/status 200 OK in 5 milliseconds I0114 00:57:59.293057 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="7d35825b-93a6-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=6384007 I0114 00:58:06.261347 1 reflector.go:559] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: Watch close - *v1beta2.AzDriverNode total 30 items received I0114 00:58:06.263060 1 round_trippers.go:553] GET https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dk8s-agentpool1-35908214-vmss000001&resourceVersion=24625&timeout=9m23s&timeoutSeconds=563&watch=true 200 OK in 1 milliseconds ... skipping 35 lines ... W0114 00:58:07.840884 1 mount_helper_common.go:133] Warning: "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-89f28cec-b6ae-4d85-9729-a028acff3ebc/globalmount" is not a mountpoint, deleting I0114 00:58:07.840942 1 nodeserver_v2.go:262] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-89f28cec-b6ae-4d85-9729-a028acff3ebc/globalmount successfully I0114 00:58:07.840958 1 utils.go:85] GRPC response: {} I0114 00:58:10.963569 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:58:10.963591 1 utils.go:79] GRPC request: {} I0114 00:58:10.963636 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:58:15.442414 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:58:15.442480 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="7c3f16a8-93a6-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-12239193-1090-46a5-8054-3df9c77be9c9" "latency"=17770684509 I0114 00:58:15.442511 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="7c3f16a8-93a6-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=17770796809 I0114 00:58:16.037851 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun3 by sde under /dev/disk/azure/scsi1/ I0114 00:58:16.037894 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun3. 
perfProfile none accountType I0114 00:58:16.037920 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun3 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-12239193-1090-46a5-8054-3df9c77be9c9/globalmount with mount options([]) I0114 00:58:16.037928 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun3" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun3]) ... skipping 161 lines ... I0114 01:00:00.418030 1 utils.go:78] GRPC call: /csi.v1.Node/NodeStageVolume I0114 01:00:00.418042 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ff110a78-9721-40f9-b1a7-35d3a372ea8d/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-ff110a78-9721-40f9-b1a7-35d3a372ea8d","csi.storage.k8s.io/pvc/name":"test.csi.azure.comwbmxz","csi.storage.k8s.io/pvc/namespace":"fsgroupchangepolicy-5246","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-ff110a78-9721-40f9-b1a7-35d3a372ea8d"} I0114 01:00:00.418226 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-ff110a78-9721-40f9-b1a7-35d3a372ea8d-k8s-agentpool1-35908214-vmss000001-attachment) I0114 01:00:10.963518 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 01:00:10.963536 1 utils.go:79] GRPC request: {} I0114 01:00:10.963585 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 01:00:14.624853 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 01:00:14.624915 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="c568b76f-93a6-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-ff110a78-9721-40f9-b1a7-35d3a372ea8d" "latency"=14206611165 I0114 01:00:14.624950 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="c568b76f-93a6-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=14206748965 I0114 01:00:16.338246 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0114 01:00:16.338285 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType I0114 01:00:16.338310 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ff110a78-9721-40f9-b1a7-35d3a372ea8d/globalmount with mount options([]) I0114 01:00:16.338318 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) ... skipping 148 lines ... 
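Annotation: NodePublishVolume in this log is a plain bind mount of the staged globalmount into the pod's volume path (the earlier "Mounting cmd (mount) with arguments ( -o bind ...)" followed by "-o bind,remount" entries), and NodeUnpublish/NodeUnstage only unmount after a mountpoint check ("is a mountpoint, unmounting" vs. "is not a mountpoint, deleting"). A minimal sketch of both halves using k8s.io/mount-utils; the paths are illustrative and the helper names are hypothetical, not the driver's own.

package main

import (
	"log"

	mount "k8s.io/mount-utils"
)

// publish bind-mounts the staged globalmount into the pod target path. With the
// "bind" option, mount-utils issues the bind mount and then a remount, which is
// the two-command sequence visible in the log.
func publish(m mount.Interface, staging, target string, opts []string) error {
	return m.Mount(staging, target, "", append([]string{"bind"}, opts...))
}

// unpublish unmounts the target only if it is still a mountpoint, matching the
// mount_helper_common checks in the log.
func unpublish(m mount.Interface, target string) error {
	notMnt, err := m.IsLikelyNotMountPoint(target)
	if err != nil {
		return err
	}
	if notMnt {
		log.Printf("%s is not a mountpoint, nothing to unmount", target)
		return nil
	}
	return m.Unmount(target)
}

func main() {
	m := mount.New("") // uses the system mount/umount binaries
	staging := "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-example/globalmount"
	target := "/var/lib/kubelet/pods/00000000-0000-0000-0000-000000000000/volumes/kubernetes.io~csi/pvc-example/mount"
	if err := publish(m, staging, target, nil); err != nil {
		log.Fatalf("publish: %v", err)
	}
	if err := unpublish(m, target); err != nil {
		log.Fatalf("unpublish: %v", err)
	}
}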
I0114 01:01:07.859313 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"2"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-266cf92a-ddad-4a43-8712-6dc83a986e65/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-266cf92a-ddad-4a43-8712-6dc83a986e65","csi.storage.k8s.io/pvc/name":"test.csi.azure.comzcw2s","csi.storage.k8s.io/pvc/namespace":"multivolume-1515","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-266cf92a-ddad-4a43-8712-6dc83a986e65"} I0114 01:01:07.859465 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-bbf0c35e-3a0f-4357-82e5-cfa417e5d363-k8s-agentpool1-35908214-vmss000001-attachment) I0114 01:01:07.859472 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-266cf92a-ddad-4a43-8712-6dc83a986e65-k8s-agentpool1-35908214-vmss000001-attachment) I0114 01:01:10.963509 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 01:01:10.963528 1 utils.go:79] GRPC request: {} I0114 01:01:10.963564 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 01:01:28.265447 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 01:01:28.265495 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="ed9b6f50-93a6-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-bbf0c35e-3a0f-4357-82e5-cfa417e5d363" "latency"=20405979762 I0114 01:01:28.265513 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="ed9b6f50-93a6-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=20406065962 I0114 01:01:28.271370 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 01:01:28.271436 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="ed9b6f79-93a6-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-266cf92a-ddad-4a43-8712-6dc83a986e65" "latency"=20411907667 I0114 01:01:28.271469 1 crdprovisioner.go:743] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="ed9b6f79-93a6-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=20412008067 I0114 01:01:28.870804 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun2 by sdc under /dev/disk/azure/scsi1/ I0114 01:01:28.870848 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun2. perfProfile none accountType I0114 01:01:28.870880 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun2 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-266cf92a-ddad-4a43-8712-6dc83a986e65/globalmount with mount options([nouuid]) I0114 01:01:28.870894 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun2" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun2]) ... skipping 141 lines ... I0114 01:02:54.343082 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities I0114 01:02:54.343095 1 utils.go:79] GRPC request: {} I0114 01:02:54.343118 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} I0114 01:02:54.343760 1 utils.go:78] GRPC call: /csi.v1.Node/NodeStageVolume I0114 01:02:54.343773 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-b729c55e-39c3-4724-8a09-7d1e721de067/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-b729c55e-39c3-4724-8a09-7d1e721de067","csi.storage.k8s.io/pvc/name":"test.csi.azure.comm8kc8","csi.storage.k8s.io/pvc/namespace":"provisioning-2979","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-b729c55e-39c3-4724-8a09-7d1e721de067"} I0114 01:02:54.343949 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-b729c55e-39c3-4724-8a09-7d1e721de067-k8s-agentpool1-35908214-vmss000001-attachment) I0114 01:02:56.066324 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 01:02:56.066375 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="2d13ad76-93a7-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-b729c55e-39c3-4724-8a09-7d1e721de067" "latency"=1722345735 I0114 01:02:56.066403 1 crdprovisioner.go:743] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="2d13ad76-93a7-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=1722465235 I0114 01:02:57.742234 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0114 01:02:57.742275 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType I0114 01:02:57.742303 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-b729c55e-39c3-4724-8a09-7d1e721de067/globalmount with mount options([]) I0114 01:02:57.742314 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) ... skipping 124 lines ... I0114 01:05:37.463708 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities I0114 01:05:37.463724 1 utils.go:79] GRPC request: {} I0114 01:05:37.463761 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} I0114 01:05:37.464399 1 utils.go:78] GRPC call: /csi.v1.Node/NodeStageVolume I0114 01:05:37.464421 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-327d4358-8771-40dd-b153-e3955b859ad0/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-327d4358-8771-40dd-b153-e3955b859ad0","csi.storage.k8s.io/pvc/name":"pvc-snapshottable-tester-m48vx-my-volume","csi.storage.k8s.io/pvc/namespace":"snapshotting-64","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-327d4358-8771-40dd-b153-e3955b859ad0"} I0114 01:05:37.464602 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-327d4358-8771-40dd-b153-e3955b859ad0-k8s-agentpool1-35908214-vmss000001-attachment) I0114 01:05:38.966668 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 01:05:38.966732 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="8e4de9e8-93a7-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-327d4358-8771-40dd-b153-e3955b859ad0" "latency"=1502071104 I0114 01:05:38.966765 1 crdprovisioner.go:743] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="8e4de9e8-93a7-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=1502171904 I0114 01:05:40.628227 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0114 01:05:40.628267 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType I0114 01:05:40.628292 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-327d4358-8771-40dd-b153-e3955b859ad0/globalmount with mount options([]) I0114 01:05:40.628300 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) ... skipping 69 lines ... I0114 01:06:07.849392 1 utils.go:78] GRPC call: /csi.v1.Node/NodeStageVolume I0114 01:06:07.849405 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"1"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-43ed69c3-accf-4353-b7ac-aa2c7bfb8898/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-43ed69c3-accf-4353-b7ac-aa2c7bfb8898","csi.storage.k8s.io/pvc/name":"restored-pvc-tester-qppln-my-volume","csi.storage.k8s.io/pvc/namespace":"snapshotting-64","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-43ed69c3-accf-4353-b7ac-aa2c7bfb8898"} I0114 01:06:07.849551 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-43ed69c3-accf-4353-b7ac-aa2c7bfb8898-k8s-agentpool1-35908214-vmss000001-attachment) I0114 01:06:10.963675 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 01:06:10.963696 1 utils.go:79] GRPC request: {} I0114 01:06:10.963733 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 01:06:24.304687 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 01:06:24.304733 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="a06a49f2-93a7-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-43ed69c3-accf-4353-b7ac-aa2c7bfb8898" "latency"=16455109720 I0114 01:06:24.304753 1 crdprovisioner.go:743] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="a06a49f2-93a7-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=16455221420 I0114 01:06:26.136346 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun1 by sdc under /dev/disk/azure/scsi1/ I0114 01:06:26.136387 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun1. perfProfile none accountType I0114 01:06:26.136416 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun1 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-43ed69c3-accf-4353-b7ac-aa2c7bfb8898/globalmount with mount options([]) I0114 01:06:26.136427 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun1" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun1]) ... skipping 97 lines ... I0114 01:07:50.385366 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d0bfaf98-3b66-43e5-8eaf-2978d50b6316/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-d0bfaf98-3b66-43e5-8eaf-2978d50b6316","csi.storage.k8s.io/pvc/name":"inline-volume-tester-2mw8b-my-volume-0","csi.storage.k8s.io/pvc/namespace":"ephemeral-972","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-d0bfaf98-3b66-43e5-8eaf-2978d50b6316"} I0114 01:07:50.385868 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-d0bfaf98-3b66-43e5-8eaf-2978d50b6316-k8s-agentpool1-35908214-vmss000001-attachment) I0114 01:07:59.282673 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 01:07:59.282715 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 01:07:59.282739 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 01:07:59.282665 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 01:07:59.282765 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 01:07:59.286069 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T01:07:59Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="e2d625e7-93a7-11ed-88b1-6045bd9ae695" I0114 01:07:59.295676 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 01:07:59.296589 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000001/status 200 OK in 10 milliseconds I0114 01:07:59.296802 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="e2d625e7-93a7-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=10767107 I0114 01:08:06.672600 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 01:08:06.672667 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="dd88162a-93a7-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-d0bfaf98-3b66-43e5-8eaf-2978d50b6316" "latency"=16286729962 I0114 01:08:06.672704 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="dd88162a-93a7-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=16286860762 I0114 01:08:08.395572 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0114 01:08:08.395623 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType I0114 01:08:08.395658 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d0bfaf98-3b66-43e5-8eaf-2978d50b6316/globalmount with mount options([]) I0114 01:08:08.395676 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) ... skipping 86 lines ... 
I0114 01:09:32.520312 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities I0114 01:09:32.520325 1 utils.go:79] GRPC request: {} I0114 01:09:32.520365 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} I0114 01:09:32.521016 1 utils.go:78] GRPC call: /csi.v1.Node/NodeStageVolume I0114 01:09:32.521027 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-0b8a281e-2e97-4804-9d0d-171d07235245/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-0b8a281e-2e97-4804-9d0d-171d07235245","csi.storage.k8s.io/pvc/name":"pvc-azuredisk","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-0b8a281e-2e97-4804-9d0d-171d07235245"} I0114 01:09:32.521182 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-0b8a281e-2e97-4804-9d0d-171d07235245-k8s-agentpool1-35908214-vmss000001-attachment) I0114 01:09:38.928172 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 01:09:38.928246 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="1a68b387-93a8-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-0b8a281e-2e97-4804-9d0d-171d07235245" "latency"=6406970506 I0114 01:09:38.928284 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="1a68b387-93a8-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=6407113006 I0114 01:09:40.615728 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0114 01:09:40.615771 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType StandardSSD_LRS I0114 01:09:40.615796 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-0b8a281e-2e97-4804-9d0d-171d07235245/globalmount with mount options([]) I0114 01:09:40.615803 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) ... skipping 69 lines ... 
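The NodeStageVolume requests recorded above are plain CSI v1 messages, so they can be reproduced with the container-storage-interface Go bindings. The sketch below shows how a caller such as kubelet would build an equivalent request; the socket path and the elided volume/PV identifiers are placeholders, and access_mode 7 in the logged JSON corresponds to SINGLE_NODE_MULTI_WRITER in the CSI spec. This illustrates the message shape only and is not code from this repository.

package main

import (
	"context"
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Connect to the node plugin over its UNIX socket (path is illustrative).
	conn, err := grpc.Dial("unix:///csi/csi.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := csi.NewNodeClient(conn)

	// Mirror the logged request: LUN from publish_context, the globalmount
	// staging path, and access_mode 7 (SINGLE_NODE_MULTI_WRITER).
	req := &csi.NodeStageVolumeRequest{
		VolumeId:          "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/disks/<disk>",
		PublishContext:    map[string]string{"LUN": "0"},
		StagingTargetPath: "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/<pv-name>/globalmount",
		VolumeCapability: &csi.VolumeCapability{
			AccessType: &csi.VolumeCapability_Mount{
				Mount: &csi.VolumeCapability_MountVolume{}, // empty => driver chooses the default fstype
			},
			AccessMode: &csi.VolumeCapability_AccessMode{
				Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_MULTI_WRITER,
			},
		},
		VolumeContext: map[string]string{"requestedsizegib": "5"},
	}

	if _, err := client.NodeStageVolume(context.Background(), req); err != nil {
		fmt.Println("NodeStageVolume failed:", err)
		return
	}
	fmt.Println("volume staged")
}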
I0114 01:10:34.992368 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-8c369e0c-ceed-403b-9b0c-e9379523b2a8-k8s-agentpool1-35908214-vmss000001-attachment) I0114 01:10:40.964153 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 01:10:40.964170 1 utils.go:79] GRPC request: {} I0114 01:10:40.964216 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 01:10:43.199881 1 reflector.go:559] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: Watch close - *v1.CustomResourceDefinition total 6 items received I0114 01:10:43.213404 1 round_trippers.go:553] GET https://10.0.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions?allowWatchBookmarks=true&resourceVersion=27225&timeout=9m52s&timeoutSeconds=592&watch=true 200 OK in 13 milliseconds I0114 01:10:49.977286 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 01:10:49.977361 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="3fa50bfd-93a8-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-8c369e0c-ceed-403b-9b0c-e9379523b2a8" "latency"=14984931192 I0114 01:10:49.977390 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="3fa50bfd-93a8-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=14985041392 I0114 01:10:50.553502 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun1 by sdd under /dev/disk/azure/scsi1/ I0114 01:10:50.553547 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun1. perfProfile none accountType StandardSSD_LRS I0114 01:10:50.553576 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun1 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-8c369e0c-ceed-403b-9b0c-e9379523b2a8/globalmount with mount options([]) I0114 01:10:50.553596 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun1" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun1]) ... skipping 59 lines ... 
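The recurring /csi.v1.Identity/Probe exchanges above, answered with {"ready":{"value":true}}, are the liveness handshake every CSI plugin implements. A minimal, hedged sketch of such an Identity service follows; the socket path and the always-ready answer are simplifications, and a real driver registers its controller/node services alongside this one.

package main

import (
	"context"
	"log"
	"net"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/protobuf/types/known/wrapperspb"
)

// identityServer is a minimal stand-in for a CSI identity service.
type identityServer struct {
	csi.UnimplementedIdentityServer
}

// Probe answers the periodic liveness checks seen in the log
// with {"ready":{"value":true}}.
func (s *identityServer) Probe(ctx context.Context, _ *csi.ProbeRequest) (*csi.ProbeResponse, error) {
	return &csi.ProbeResponse{Ready: wrapperspb.Bool(true)}, nil
}

func main() {
	// Listen on a UNIX socket, matching the "Listening for connections on
	// address: ...csi.sock" line later in the log (path here is illustrative).
	lis, err := net.Listen("unix", "/tmp/csi.sock")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	csi.RegisterIdentityServer(srv, &identityServer{})
	log.Fatal(srv.Serve(lis))
}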
I0114 01:11:04.987969 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-eb15dc24-79aa-48f5-b171-881e54b7e562-k8s-agentpool1-35908214-vmss000001-attachment) I0114 01:11:10.964515 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 01:11:10.964536 1 utils.go:79] GRPC request: {} I0114 01:11:10.964661 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 01:11:11.198384 1 reflector.go:559] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.Node total 47 items received I0114 01:11:11.199839 1 round_trippers.go:553] GET https://10.0.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=29212&timeout=6m21s&timeoutSeconds=381&watch=true 200 OK in 1 milliseconds I0114 01:11:28.481675 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 01:11:28.481744 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000001" "disk.csi.azure.com/request-id"="51860341-93a8-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-eb15dc24-79aa-48f5-b171-881e54b7e562" "latency"=23493686123 I0114 01:11:28.481778 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="51860341-93a8-11ed-88b1-6045bd9ae695" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=23493827324 I0114 01:11:29.083857 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun2 by sde under /dev/disk/azure/scsi1/ I0114 01:11:29.083900 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun2. perfProfile none accountType StandardSSD_LRS I0114 01:11:29.083931 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun2 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-eb15dc24-79aa-48f5-b171-881e54b7e562/globalmount with mount options([]) I0114 01:11:29.083945 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun2" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun2]) ... skipping 150 lines ... 
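The "Adding a condition function for azvolumeattachments" and conditionwaiter lines above show the driver blocking NodeStageVolume until the corresponding AzVolumeAttachment custom resource reports attachment. As a rough sketch of that idea only (the real code waits on shared-informer events through its conditionwatcher rather than polling, and the status.state path, the "Attached" value, and the namespace below are assumptions), one could poll the CR with the dynamic client:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/rest"
)

// waitForAttachment polls an azvolumeattachments object until its status
// reports an attached state. The status.state path and "Attached" value are
// illustrative assumptions, not the CRD's documented schema.
func waitForAttachment(ctx context.Context, client dynamic.Interface, ns, name string) error {
	gvr := schema.GroupVersionResource{
		Group:    "disk.csi.azure.com",
		Version:  "v1beta2",
		Resource: "azvolumeattachments",
	}
	return wait.PollUntilContextTimeout(ctx, time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			obj, err := client.Resource(gvr).Namespace(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			state, _, _ := unstructured.NestedString(obj.Object, "status", "state")
			return state == "Attached", nil
		})
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := dynamic.NewForConfigOrDie(cfg)
	name := "pvc-<uid>-<node>-attachment" // attachment naming pattern from the log
	if err := waitForAttachment(context.Background(), client, "azure-disk-csi", name); err != nil {
		panic(err)
	}
	fmt.Println("attachment ready:", name)
}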
I0114 00:20:55.665249 1 round_trippers.go:553] GET https://10.0.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=2860&timeout=7m44s&timeoutSeconds=464&watch=true 200 OK in 1 milliseconds I0114 00:20:55.665833 1 round_trippers.go:553] GET https://10.0.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions?allowWatchBookmarks=true&resourceVersion=2689&timeout=6m2s&timeoutSeconds=362&watch=true 200 OK in 1 milliseconds I0114 00:20:55.682985 1 shared_informer.go:303] caches populated I0114 00:20:55.683011 1 azuredisk_v2.go:225] driver userAgent: test.csi.azure.com/latest-v2-9ef068a8cb36a997d4ea04b90c05c6f92a488a19 e2e-test I0114 00:20:55.683022 1 azure_disk_utils.go:564] reading cloud config from secret kube-system/azure-cloud-provider I0114 00:20:55.685247 1 round_trippers.go:553] GET https://10.0.0.1:443/api/v1/namespaces/kube-system/secrets/azure-cloud-provider 404 Not Found in 2 milliseconds I0114 00:20:55.685571 1 azure_disk_utils.go:571] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0114 00:20:55.685585 1 azure_disk_utils.go:576] could not read cloud config from secret kube-system/azure-cloud-provider I0114 00:20:55.685593 1 azure_disk_utils.go:586] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0114 00:20:55.685622 1 azure_disk_utils.go:594] read cloud config from file: /etc/kubernetes/azure.json successfully I0114 00:20:55.686366 1 azure_auth.go:253] Using AzurePublicCloud environment I0114 00:20:55.686383 1 azure_auth.go:104] azure: using managed identity extension to retrieve access token I0114 00:20:55.686389 1 azure_auth.go:110] azure: using User Assigned MSI ID to retrieve access token ... skipping 53 lines ... I0114 00:20:55.809888 1 round_trippers.go:553] POST https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes 201 Created in 6 milliseconds I0114 00:20:55.813393 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-master-35908214-0/status 200 OK in 3 milliseconds I0114 00:20:55.814187 1 azuredisk_v2.go:554] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-master-35908214-0" "disk.csi.azure.com/request-id"="4fe8d039-93a1-11ed-a37c-6045bd912667" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).registerAzDriverNode" "latency"=11237059 I0114 00:20:55.814423 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:20:55Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-master-35908214-0" "disk.csi.azure.com/request-id"="4fea903d-93a1-11ed-a37c-6045bd912667" I0114 00:20:55.814983 1 server.go:117] Listening for connections on address: &net.UnixAddr{Name:"//csi/csi.sock", Net:"unix"} I0114 00:20:55.817540 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-master-35908214-0/status 409 Conflict in 2 milliseconds E0114 00:20:55.817720 1 azuredisk_v2.go:613] "msg"="Failed to update AzDriverNode status after creation" "error"="Operation cannot be fulfilled on azdrivernodes.disk.csi.azure.com \"k8s-master-35908214-0\": the object has been modified; please apply your changes to the latest version and try again" "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-master-35908214-0" "disk.csi.azure.com/request-id"="4fea903d-93a1-11ed-a37c-6045bd912667" E0114 00:20:55.817825 1 azuredisk_v2.go:565] "msg"="Workflow completed with an error." "error"="rpc error: code = Internal desc = [Operation cannot be fulfilled on azdrivernodes.disk.csi.azure.com \"k8s-master-35908214-0\": the object has been modified; please apply your changes to the latest version and try again]" "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-master-35908214-0" "disk.csi.azure.com/request-id"="4fea903d-93a1-11ed-a37c-6045bd912667" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=3380288 I0114 00:20:55.817839 1 azuredisk_v2.go:528] Starting heartbeat loop with initial delay 5.499s and frequency 30s I0114 00:20:56.135133 1 utils.go:78] GRPC call: /csi.v1.Identity/GetPluginInfo I0114 00:20:56.135156 1 utils.go:79] GRPC request: {} I0114 00:20:56.138185 1 utils.go:85] GRPC response: {"name":"test.csi.azure.com","vendor_version":"latest-v2-9ef068a8cb36a997d4ea04b90c05c6f92a488a19"} I0114 00:20:56.812311 1 utils.go:78] GRPC call: /csi.v1.Identity/GetPluginInfo I0114 00:20:56.812343 1 utils.go:79] GRPC request: {} ... skipping 1255 lines ... 
I0114 00:20:57.097702 1 round_trippers.go:553] GET https://10.0.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=2860&timeout=6m43s&timeoutSeconds=403&watch=true 200 OK in 0 milliseconds I0114 00:20:57.099508 1 round_trippers.go:553] GET https://10.0.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions?allowWatchBookmarks=true&resourceVersion=2689&timeout=7m35s&timeoutSeconds=455&watch=true 200 OK in 0 milliseconds I0114 00:20:57.139756 1 shared_informer.go:303] caches populated I0114 00:20:57.139793 1 azuredisk_v2.go:225] driver userAgent: test.csi.azure.com/latest-v2-9ef068a8cb36a997d4ea04b90c05c6f92a488a19 e2e-test I0114 00:20:57.139802 1 azure_disk_utils.go:564] reading cloud config from secret kube-system/azure-cloud-provider I0114 00:20:57.142518 1 round_trippers.go:553] GET https://10.0.0.1:443/api/v1/namespaces/kube-system/secrets/azure-cloud-provider 404 Not Found in 2 milliseconds I0114 00:20:57.142753 1 azure_disk_utils.go:571] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found I0114 00:20:57.142765 1 azure_disk_utils.go:576] could not read cloud config from secret kube-system/azure-cloud-provider I0114 00:20:57.142771 1 azure_disk_utils.go:586] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json I0114 00:20:57.142791 1 azure_disk_utils.go:594] read cloud config from file: /etc/kubernetes/azure.json successfully I0114 00:20:57.143429 1 azure_auth.go:253] Using AzurePublicCloud environment I0114 00:20:57.143445 1 azure_auth.go:104] azure: using managed identity extension to retrieve access token I0114 00:20:57.143449 1 azure_auth.go:110] azure: using User Assigned MSI ID to retrieve access token ... skipping 84 lines ... I0114 00:21:31.189618 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities I0114 00:21:31.189638 1 utils.go:79] GRPC request: {} I0114 00:21:31.189677 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} I0114 00:21:31.190660 1 utils.go:78] GRPC call: /csi.v1.Node/NodeStageVolume I0114 00:21:31.190675 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-0e05f1cb-bb12-42ad-951b-61344776c30f/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-0e05f1cb-bb12-42ad-951b-61344776c30f","csi.storage.k8s.io/pvc/name":"test.csi.azure.combmcgq","csi.storage.k8s.io/pvc/namespace":"provisioning-6443","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-0e05f1cb-bb12-42ad-951b-61344776c30f"} I0114 00:21:31.190973 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-0e05f1cb-bb12-42ad-951b-61344776c30f-k8s-agentpool1-35908214-vmss000000-attachment) I0114 00:21:37.974263 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:21:37.974336 1 conditionwaiter.go:50] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="650099a7-93a1-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-0e05f1cb-bb12-42ad-951b-61344776c30f" "latency"=6783291652 I0114 00:21:37.974367 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="650099a7-93a1-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=6783427052 I0114 00:21:39.028759 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities I0114 00:21:39.028775 1 utils.go:79] GRPC request: {} I0114 00:21:39.028810 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} I0114 00:21:39.031979 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities ... skipping 102 lines ... I0114 00:21:52.434601 1 nodeserver_v2.go:353] NodePublishVolume: mount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-0e05f1cb-bb12-42ad-951b-61344776c30f/globalmount at /var/lib/kubelet/pods/cfeacccf-597a-424e-8dda-12e711d1839a/volumes/kubernetes.io~csi/pvc-0e05f1cb-bb12-42ad-951b-61344776c30f/mount successfully I0114 00:21:52.434613 1 utils.go:85] GRPC response: {} I0114 00:21:55.272621 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:21:55Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="735b29b9-93a1-11ed-af1a-6045bd9ae814" I0114 00:21:55.279065 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 6 milliseconds I0114 00:21:55.279290 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="735b29b9-93a1-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=6727638 I0114 00:21:57.093985 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:21:57.094136 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:21:57.096142 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:21:57.096152 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:21:57.099397 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:21:57.159581 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:21:58.349489 1 utils.go:78] GRPC call: /csi.v1.Node/NodeUnpublishVolume I0114 00:21:58.349507 1 utils.go:79] GRPC request: {"target_path":"/var/lib/kubelet/pods/cfeacccf-597a-424e-8dda-12e711d1839a/volumes/kubernetes.io~csi/pvc-0e05f1cb-bb12-42ad-951b-61344776c30f/mount","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-0e05f1cb-bb12-42ad-951b-61344776c30f"} I0114 00:21:58.349547 1 nodeserver_v2.go:369] NodeUnpublishVolume: unmounting volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-0e05f1cb-bb12-42ad-951b-61344776c30f on /var/lib/kubelet/pods/cfeacccf-597a-424e-8dda-12e711d1839a/volumes/kubernetes.io~csi/pvc-0e05f1cb-bb12-42ad-951b-61344776c30f/mount I0114 00:21:58.349575 1 mount_helper_common.go:99] "/var/lib/kubelet/pods/cfeacccf-597a-424e-8dda-12e711d1839a/volumes/kubernetes.io~csi/pvc-0e05f1cb-bb12-42ad-951b-61344776c30f/mount" is a mountpoint, unmounting I0114 00:21:58.349588 1 mount_linux.go:294] Unmounting /var/lib/kubelet/pods/cfeacccf-597a-424e-8dda-12e711d1839a/volumes/kubernetes.io~csi/pvc-0e05f1cb-bb12-42ad-951b-61344776c30f/mount W0114 00:21:58.350720 1 mount_helper_common.go:133] Warning: "/var/lib/kubelet/pods/cfeacccf-597a-424e-8dda-12e711d1839a/volumes/kubernetes.io~csi/pvc-0e05f1cb-bb12-42ad-951b-61344776c30f/mount" is not a mountpoint, deleting I0114 00:21:58.350766 1 nodeserver_v2.go:375] NodeUnpublishVolume: unmount volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-0e05f1cb-bb12-42ad-951b-61344776c30f on /var/lib/kubelet/pods/cfeacccf-597a-424e-8dda-12e711d1839a/volumes/kubernetes.io~csi/pvc-0e05f1cb-bb12-42ad-951b-61344776c30f/mount successfully I0114 00:21:58.350777 1 utils.go:85] GRPC response: {} I0114 00:21:58.431755 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:21:58.431812 1 conditionwaiter.go:50] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="69ad42ab-93a1-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-02cc5bc5-a40b-4906-a5ed-92d272a939ce" "latency"=19398372619 I0114 00:21:58.431918 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="69ad42ab-93a1-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=19398541620 I0114 00:21:58.448803 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities I0114 00:21:58.448819 1 utils.go:79] GRPC request: {} I0114 00:21:58.448860 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} I0114 00:21:58.449511 1 utils.go:78] GRPC call: /csi.v1.Node/NodeUnstageVolume ... skipping 91 lines ... I0114 00:22:55.225339 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-0455fa90-6bd8-4736-85f2-6a3944483f7d","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-0455fa90-6bd8-4736-85f2-6a3944483f7d","csi.storage.k8s.io/pvc/name":"test.csi.azure.comf9b6c","csi.storage.k8s.io/pvc/namespace":"multivolume-3792","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-0455fa90-6bd8-4736-85f2-6a3944483f7d"} I0114 00:22:55.225583 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-0455fa90-6bd8-4736-85f2-6a3944483f7d-k8s-agentpool1-35908214-vmss000000-attachment) I0114 00:22:55.272609 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:22:55Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="971e6fad-93a1-11ed-af1a-6045bd9ae814" I0114 00:22:55.278069 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 5 milliseconds I0114 00:22:55.278670 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="971e6fad-93a1-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=6106923 I0114 00:22:57.096600 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:22:57.096719 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:22:57.096739 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:22:57.096767 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:22:57.096781 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:22:57.100182 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:22:57.160349 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:23:04.726976 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:23:04.727041 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="9717442d-93a1-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-0455fa90-6bd8-4736-85f2-6a3944483f7d" "latency"=9501400368 I0114 00:23:04.727101 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="9717442d-93a1-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=9501537369 I0114 00:23:04.731019 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:23:04.731067 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="96db00bf-93a1-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-8af054ed-8728-4d2e-a6cd-9f76e217fe63" "latency"=9900381044 I0114 00:23:04.731091 1 crdprovisioner.go:743] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="96db00bf-93a1-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=9900467044 I0114 00:23:05.302605 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdd under /dev/disk/azure/scsi1/ I0114 00:23:05.302653 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType I0114 00:23:05.302675 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun1 by sdc under /dev/disk/azure/scsi1/ I0114 00:23:05.302667 1 utils.go:85] GRPC response: {} ... skipping 105 lines ... I0114 00:23:55.273148 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:23:55Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="bae1cb15-93a1-11ed-af1a-6045bd9ae814" I0114 00:23:55.278114 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 4 milliseconds I0114 00:23:55.278279 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="bae1cb15-93a1-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=5167724 I0114 00:23:57.097952 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:23:57.097980 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:23:57.098137 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:23:57.098328 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:23:57.101327 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:23:57.161520 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:24:10.175584 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:24:10.175661 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="b413dc47-93a1-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-bd63e747-182f-4ba6-8786-2c74e5aa0f7a" "latency"=26318406738 I0114 00:24:10.175695 1 crdprovisioner.go:743] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="b413dc47-93a1-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=26318517238 I0114 00:24:10.977897 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:24:10.977916 1 utils.go:79] GRPC request: {} I0114 00:24:10.977961 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:24:12.006257 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun2 by sdc under /dev/disk/azure/scsi1/ ... skipping 56 lines ... I0114 00:24:15.564028 1 utils.go:78] GRPC call: /csi.v1.Node/NodeStageVolume I0114 00:24:15.564041 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-023550e2-453c-4514-97d5-2225c48559a4/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-023550e2-453c-4514-97d5-2225c48559a4","csi.storage.k8s.io/pvc/name":"test.csi.azure.com4vmx4","csi.storage.k8s.io/pvc/namespace":"fsgroupchangepolicy-1478","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-023550e2-453c-4514-97d5-2225c48559a4"} I0114 00:24:15.564199 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-023550e2-453c-4514-97d5-2225c48559a4-k8s-agentpool1-35908214-vmss000000-attachment) W0114 00:24:15.564291 1 mount_helper_common.go:133] Warning: "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-bd63e747-182f-4ba6-8786-2c74e5aa0f7a/globalmount" is not a mountpoint, deleting I0114 00:24:15.564361 1 nodeserver_v2.go:262] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-bd63e747-182f-4ba6-8786-2c74e5aa0f7a/globalmount successfully I0114 00:24:15.564376 1 utils.go:85] GRPC response: {} I0114 00:24:25.043493 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:24:25.043557 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="c6f9f6ec-93a1-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-023550e2-453c-4514-97d5-2225c48559a4" "latency"=9479299086 I0114 00:24:25.043579 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="c6f9f6ec-93a1-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=9479409786 I0114 00:24:25.273083 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:24:25Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="ccc36b5b-93a1-11ed-af1a-6045bd9ae814" I0114 00:24:25.278515 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 5 milliseconds I0114 00:24:25.279562 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="ccc36b5b-93a1-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=6516528 I0114 00:24:25.614599 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdd under /dev/disk/azure/scsi1/ ... skipping 74 lines ... I0114 00:24:58.047837 1 utils.go:78] GRPC call: /csi.v1.Node/NodeStageVolume I0114 00:24:58.047854 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"1"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-bd63e747-182f-4ba6-8786-2c74e5aa0f7a/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-bd63e747-182f-4ba6-8786-2c74e5aa0f7a","csi.storage.k8s.io/pvc/name":"test.csi.azure.comrtc5h","csi.storage.k8s.io/pvc/namespace":"snapshotting-8837","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-bd63e747-182f-4ba6-8786-2c74e5aa0f7a"} I0114 00:24:58.048021 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-bd63e747-182f-4ba6-8786-2c74e5aa0f7a-k8s-agentpool1-35908214-vmss000000-attachment) I0114 00:25:10.977169 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:25:10.977190 1 utils.go:79] GRPC request: {} I0114 00:25:10.977234 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:25:21.300568 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:25:21.300640 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="e04c7bb7-93a1-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-bd63e747-182f-4ba6-8786-2c74e5aa0f7a" "latency"=23252556541 I0114 00:25:21.300668 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="e04c7bb7-93a1-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=23252665941 I0114 00:25:23.053495 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun1 by sdc under /dev/disk/azure/scsi1/ I0114 00:25:23.053536 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun1. 
perfProfile none accountType I0114 00:25:23.053560 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun1 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-bd63e747-182f-4ba6-8786-2c74e5aa0f7a/globalmount with mount options([]) I0114 00:25:23.053572 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun1" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun1]) ... skipping 63 lines ... I0114 00:25:35.803832 1 utils.go:78] GRPC call: /csi.v1.Node/NodeStageVolume I0114 00:25:35.803856 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-61c633e3-ea6b-482b-ad0e-84db8f0b4489/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-61c633e3-ea6b-482b-ad0e-84db8f0b4489","csi.storage.k8s.io/pvc/name":"pvc-bf98g","csi.storage.k8s.io/pvc/namespace":"snapshotting-8837","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-61c633e3-ea6b-482b-ad0e-84db8f0b4489"} I0114 00:25:35.804105 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-61c633e3-ea6b-482b-ad0e-84db8f0b4489-k8s-agentpool1-35908214-vmss000000-attachment) I0114 00:25:40.978032 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:25:40.978057 1 utils.go:79] GRPC request: {} I0114 00:25:40.978100 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:25:45.289985 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:25:45.290041 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="f6cd999b-93a1-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-61c633e3-ea6b-482b-ad0e-84db8f0b4489" "latency"=9485866130 I0114 00:25:45.290062 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="f6cd999b-93a1-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=9486005931 I0114 00:25:45.773313 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities I0114 00:25:45.773336 1 utils.go:79] GRPC request: {} I0114 00:25:45.773383 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} I0114 00:25:45.773848 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities ... skipping 51 lines ... I0114 00:25:55.272767 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:25:55Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="026847c9-93a2-11ed-af1a-6045bd9ae814" I0114 00:25:55.280381 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 7 milliseconds I0114 00:25:55.280557 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="026847c9-93a2-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=7840335 I0114 00:25:57.100378 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:25:57.100409 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:25:57.100400 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:25:57.100506 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:25:57.100532 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:25:57.103775 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:25:57.163950 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:26:08.100560 1 reflector.go:559] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: Watch close - *v1beta2.AzVolumeAttachment total 190 items received I0114 00:26:08.103242 1 round_trippers.go:553] GET https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/azvolumeattachments?allowWatchBookmarks=true&resourceVersion=5987&timeout=5m15s&timeoutSeconds=315&watch=true 200 OK in 2 milliseconds I0114 00:26:10.977744 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:26:10.977762 1 utils.go:79] GRPC request: {} ... skipping 22 lines ... I0114 00:26:25.273376 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:26:25Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="144a029a-93a2-11ed-af1a-6045bd9ae814" I0114 00:26:25.279411 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 5 milliseconds I0114 00:26:25.279618 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="144a029a-93a2-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=6287927 I0114 00:26:27.100939 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:26:27.100997 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:26:27.101132 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:26:27.101204 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:26:27.101228 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:26:27.104355 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:26:27.164307 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:26:40.978059 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:26:40.978083 1 utils.go:79] GRPC request: {} I0114 00:26:40.978122 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:26:55.273603 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:26:55Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="262baf20-93a2-11ed-af1a-6045bd9ae814" I0114 00:26:55.279696 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 5 milliseconds I0114 00:26:55.279853 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="262baf20-93a2-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=6279825 I0114 00:26:57.101836 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:26:57.101916 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:26:57.101993 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:26:57.102097 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:26:57.102131 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:26:57.105225 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:26:57.165413 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:27:09.160984 1 reflector.go:559] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: Watch close - *v1beta2.AzDriverNode total 22 items received I0114 00:27:09.163204 1 round_trippers.go:553] GET https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dk8s-agentpool1-35908214-vmss000000&resourceVersion=6682&timeout=5m45s&timeoutSeconds=345&watch=true 200 OK in 2 milliseconds I0114 00:27:10.977837 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:27:10.977857 1 utils.go:79] GRPC request: {} I0114 00:27:10.977892 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:27:25.272932 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:27:25Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="380d3765-93a2-11ed-af1a-6045bd9ae814" I0114 00:27:25.278030 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 4 milliseconds I0114 00:27:25.278718 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="380d3765-93a2-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=5821817 I0114 00:27:27.102388 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:27:27.102442 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:27:27.102444 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:27:27.102550 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:27:27.102564 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:27:27.105871 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:27:27.166065 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:27:40.098282 1 reflector.go:559] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.Node total 113 items received I0114 00:27:40.103134 1 round_trippers.go:553] GET https://10.0.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=6949&timeout=7m58s&timeoutSeconds=478&watch=true 200 OK in 4 milliseconds I0114 00:27:40.977994 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:27:40.978008 1 utils.go:79] GRPC request: {} I0114 00:27:40.978048 1 utils.go:85] GRPC response: {"ready":{"value":true}} E0114 00:27:45.773637 1 conditionwaiter.go:50] "msg"="Workflow completed with an error." "error"="rpc error: code = Internal desc = [context canceled]" "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="fcbf08b0-93a1-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-5450c7ea-80ba-4e8b-bfff-9cb9efc710c9" "latency"=119998581549 E0114 00:27:45.773695 1 crdprovisioner.go:743] "msg"="Workflow completed with an error." "error"="rpc error: code = Internal desc = [context canceled]" "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="fcbf08b0-93a1-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=119998757249 E0114 00:27:45.773711 1 utils.go:83] GRPC error: rpc error: code = Internal desc = failed to wait for volume (/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-5450c7ea-80ba-4e8b-bfff-9cb9efc710c9) to be attached to node (k8s-agentpool1-35908214-vmss000000): context canceled E0114 00:27:45.773638 1 conditionwaiter.go:50] "msg"="Workflow completed with an error." 
"error"="rpc error: code = Internal desc = [context canceled]" "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="fcbf2050-93a1-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-be8b71fe-eda4-4770-b36b-5ab89fb3b283" "latency"=119997995347 E0114 00:27:45.773763 1 crdprovisioner.go:743] "msg"="Workflow completed with an error." "error"="rpc error: code = Internal desc = [context canceled]" "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="fcbf2050-93a1-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=119998223348 E0114 00:27:45.773779 1 utils.go:83] GRPC error: rpc error: code = Internal desc = failed to wait for volume (/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-be8b71fe-eda4-4770-b36b-5ab89fb3b283) to be attached to node (k8s-agentpool1-35908214-vmss000000): context canceled I0114 00:27:46.326768 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities I0114 00:27:46.326787 1 utils.go:79] GRPC request: {} I0114 00:27:46.326835 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities I0114 00:27:46.326846 1 utils.go:79] GRPC request: {} I0114 00:27:46.326825 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} I0114 00:27:46.326892 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} ... skipping 11 lines ... I0114 00:27:46.329841 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-be8b71fe-eda4-4770-b36b-5ab89fb3b283-k8s-agentpool1-35908214-vmss000000-attachment) I0114 00:27:55.272785 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:27:55Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="49eed243-93a2-11ed-af1a-6045bd9ae814" I0114 00:27:55.278562 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 5 milliseconds I0114 00:27:55.278752 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="49eed243-93a2-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=6073424 I0114 00:27:57.102866 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:27:57.103014 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:27:57.103115 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:27:57.103145 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:27:57.103162 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:27:57.106418 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:27:57.166698 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:28:10.977295 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:28:10.977321 1 utils.go:79] GRPC request: {} I0114 00:28:10.977368 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:28:21.097104 1 reflector.go:559] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: Watch close - *v1beta2.AzVolume total 160 items received I0114 00:28:21.102035 1 round_trippers.go:553] GET https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/azvolumes?allowWatchBookmarks=true&resourceVersion=7204&timeout=5m58s&timeoutSeconds=358&watch=true 200 OK in 4 milliseconds I0114 00:28:25.273252 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:28:25Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="5bd08a1f-93a2-11ed-af1a-6045bd9ae814" I0114 00:28:25.279493 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 6 milliseconds I0114 00:28:25.279667 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="5bd08a1f-93a2-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=6444425 I0114 00:28:27.103377 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:28:27.103496 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:28:27.103523 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:28:27.103524 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:28:27.103532 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:28:27.106767 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:28:27.166969 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:28:32.100705 1 reflector.go:559] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: Watch close - *v1.CustomResourceDefinition total 0 items received I0114 00:28:32.102413 1 round_trippers.go:553] GET https://10.0.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions?allowWatchBookmarks=true&resourceVersion=2689&timeout=7m56s&timeoutSeconds=476&watch=true 200 OK in 1 milliseconds I0114 00:28:40.977314 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:28:40.977336 1 utils.go:79] GRPC request: {} I0114 00:28:40.977378 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:28:55.273337 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:28:55Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="6db2307d-93a2-11ed-af1a-6045bd9ae814" I0114 00:28:55.279199 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 5 milliseconds I0114 00:28:55.279792 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="6db2307d-93a2-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=6492425 I0114 00:28:57.103561 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:28:57.103606 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:28:57.103660 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:28:57.103698 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:28:57.103729 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:28:57.107055 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:28:57.167238 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:29:10.978292 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:29:10.978312 1 utils.go:79] GRPC request: {} I0114 00:29:10.978355 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:29:25.272968 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:29:25Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="7f93c578-93a2-11ed-af1a-6045bd9ae814" I0114 00:29:25.279798 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 6 milliseconds I0114 00:29:25.279959 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="7f93c578-93a2-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=7004427 I0114 00:29:27.103747 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:29:27.103816 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:29:27.103976 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:29:27.104104 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:29:27.104147 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:29:27.107327 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:29:27.167543 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:29:40.977346 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:29:40.977364 1 utils.go:79] GRPC request: {} I0114 00:29:40.977401 1 utils.go:85] GRPC response: {"ready":{"value":true}} E0114 00:29:46.326942 1 conditionwaiter.go:50] "msg"="Workflow completed with an error." "error"="rpc error: code = Internal desc = [context canceled]" "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="449a2a1e-93a2-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-5450c7ea-80ba-4e8b-bfff-9cb9efc710c9" "latency"=119997558524 E0114 00:29:46.326973 1 conditionwaiter.go:50] "msg"="Workflow completed with an error." "error"="rpc error: code = Internal desc = [context deadline exceeded]" "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="449a4073-93a2-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-be8b71fe-eda4-4770-b36b-5ab89fb3b283" "latency"=119997036022 E0114 00:29:46.326990 1 crdprovisioner.go:743] "msg"="Workflow completed with an error." "error"="rpc error: code = Internal desc = [context canceled]" "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="449a2a1e-93a2-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=119997725825 E0114 00:29:46.327004 1 crdprovisioner.go:743] "msg"="Workflow completed with an error." 
"error"="rpc error: code = Internal desc = [context deadline exceeded]" "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="449a4073-93a2-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=119997169723 E0114 00:29:46.327008 1 utils.go:83] GRPC error: rpc error: code = Internal desc = failed to wait for volume (/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-5450c7ea-80ba-4e8b-bfff-9cb9efc710c9) to be attached to node (k8s-agentpool1-35908214-vmss000000): context canceled E0114 00:29:46.327016 1 utils.go:83] GRPC error: rpc error: code = Internal desc = failed to wait for volume (/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-be8b71fe-eda4-4770-b36b-5ab89fb3b283) to be attached to node (k8s-agentpool1-35908214-vmss000000): context deadline exceeded I0114 00:29:47.411984 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities I0114 00:29:47.412011 1 utils.go:79] GRPC request: {} I0114 00:29:47.412057 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} I0114 00:29:47.412824 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities I0114 00:29:47.412838 1 utils.go:79] GRPC request: {} I0114 00:29:47.412874 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} ... skipping 12 lines ... I0114 00:29:55.272741 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:29:55Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="91755c59-93a2-11ed-af1a-6045bd9ae814" I0114 00:29:55.279565 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 6 milliseconds I0114 00:29:55.279965 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="91755c59-93a2-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=7334824 I0114 00:29:57.104447 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:29:57.104483 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:29:57.104510 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:29:57.104588 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:29:57.104621 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:29:57.107780 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:29:57.167986 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:30:10.977412 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:30:10.977433 1 utils.go:79] GRPC request: {} I0114 00:30:10.977503 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:30:25.272738 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:30:25Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="a3570290-93a2-11ed-af1a-6045bd9ae814" I0114 00:30:25.280275 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 7 milliseconds I0114 00:30:25.280490 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="a3570290-93a2-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=7774928 I0114 00:30:25.850959 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:30:25.851021 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="8cc66100-93a2-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-be8b71fe-eda4-4770-b36b-5ab89fb3b283" "latency"=38435969117 I0114 00:30:25.851042 1 crdprovisioner.go:743] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="8cc66100-93a2-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=38436068817 I0114 00:30:25.851318 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:30:25.851372 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="8cc631f6-93a2-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-5450c7ea-80ba-4e8b-bfff-9cb9efc710c9" "latency"=38437512922 I0114 00:30:25.851395 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="8cc631f6-93a2-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=38437627224 I0114 00:30:26.431337 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun3 by sde under /dev/disk/azure/scsi1/ I0114 00:30:26.431373 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun3. perfProfile none accountType I0114 00:30:26.431384 1 utils.go:85] GRPC response: {} I0114 00:30:26.431640 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun2 by sdc under /dev/disk/azure/scsi1/ ... skipping 81 lines ... I0114 00:30:40.977823 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:30:55.273275 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:30:55Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="b538ba45-93a2-11ed-af1a-6045bd9ae814" I0114 00:30:55.279765 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 6 milliseconds I0114 00:30:55.280037 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="b538ba45-93a2-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=6785826 I0114 00:30:57.106012 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:30:57.106027 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:30:57.106131 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:30:57.106147 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:30:57.109426 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:30:57.169639 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:30:57.788538 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:30:57.788609 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="a56c5bf5-93a2-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-a15f2d2c-f3d7-4efd-b500-998c03639636" "latency"=29020469354 I0114 00:30:57.788639 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="a56c5bf5-93a2-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=29020571855 I0114 00:30:58.359808 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun1 by sdd under /dev/disk/azure/scsi1/ I0114 00:30:58.359866 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun1. perfProfile none accountType I0114 00:30:58.359912 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun1 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a15f2d2c-f3d7-4efd-b500-998c03639636/globalmount with mount options([]) I0114 00:30:58.359926 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun1" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun1]) ... skipping 51 lines ... I0114 00:31:25.272792 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:31:25Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="c71a49e7-93a2-11ed-af1a-6045bd9ae814" I0114 00:31:25.278403 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 5 milliseconds I0114 00:31:25.278581 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="c71a49e7-93a2-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=5837122 I0114 00:31:27.107713 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:31:27.107960 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:31:27.107978 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:31:27.108061 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:31:27.110001 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:31:27.170222 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:31:33.371233 1 utils.go:78] GRPC call: /csi.v1.Node/NodeUnpublishVolume I0114 00:31:33.371249 1 utils.go:79] GRPC request: {"target_path":"/var/lib/kubelet/pods/52366e11-810e-4791-8756-75b48f7f5586/volumes/kubernetes.io~csi/pvc-a15f2d2c-f3d7-4efd-b500-998c03639636/mount","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-a15f2d2c-f3d7-4efd-b500-998c03639636"} I0114 00:31:33.371297 1 nodeserver_v2.go:369] NodeUnpublishVolume: unmounting volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-a15f2d2c-f3d7-4efd-b500-998c03639636 on /var/lib/kubelet/pods/52366e11-810e-4791-8756-75b48f7f5586/volumes/kubernetes.io~csi/pvc-a15f2d2c-f3d7-4efd-b500-998c03639636/mount I0114 00:31:33.371328 1 mount_helper_common.go:99] "/var/lib/kubelet/pods/52366e11-810e-4791-8756-75b48f7f5586/volumes/kubernetes.io~csi/pvc-a15f2d2c-f3d7-4efd-b500-998c03639636/mount" is a mountpoint, unmounting I0114 00:31:33.371339 1 mount_linux.go:294] Unmounting /var/lib/kubelet/pods/52366e11-810e-4791-8756-75b48f7f5586/volumes/kubernetes.io~csi/pvc-a15f2d2c-f3d7-4efd-b500-998c03639636/mount W0114 00:31:33.372582 1 mount_helper_common.go:133] Warning: "/var/lib/kubelet/pods/52366e11-810e-4791-8756-75b48f7f5586/volumes/kubernetes.io~csi/pvc-a15f2d2c-f3d7-4efd-b500-998c03639636/mount" is not a mountpoint, deleting I0114 00:31:33.372647 1 nodeserver_v2.go:375] NodeUnpublishVolume: unmount volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-a15f2d2c-f3d7-4efd-b500-998c03639636 on /var/lib/kubelet/pods/52366e11-810e-4791-8756-75b48f7f5586/volumes/kubernetes.io~csi/pvc-a15f2d2c-f3d7-4efd-b500-998c03639636/mount successfully I0114 00:31:33.372659 1 utils.go:85] GRPC response: 
{} I0114 00:31:33.407402 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:31:33.407465 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="b76b080f-93a2-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-4ec4e10d-01dd-40ca-8623-ce63ed3ffe3f" "latency"=34449037151 I0114 00:31:33.407492 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="b76b080f-93a2-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=34449143251 I0114 00:31:33.473112 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities I0114 00:31:33.473131 1 utils.go:79] GRPC request: {} I0114 00:31:33.473213 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} I0114 00:31:33.473915 1 utils.go:78] GRPC call: /csi.v1.Node/NodeUnstageVolume ... skipping 73 lines ... I0114 00:31:55.272860 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:31:55Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="d8fbefd7-93a2-11ed-af1a-6045bd9ae814" I0114 00:31:55.278560 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 5 milliseconds I0114 00:31:55.279229 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." 
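The NodeUnpublishVolume sequence above unmounts the pod's target path and then deletes it once it is no longer a mountpoint ("is a mountpoint, unmounting" followed by "is not a mountpoint, deleting"). A minimal sketch of that unmount-then-remove step using k8s.io/mount-utils, whose CleanupMountPoint bundles both halves; unpublishVolume is a hypothetical helper and the target path is copied from the log for illustration.

```go
package main

import (
	"fmt"

	"k8s.io/mount-utils"
)

// unpublishVolume mirrors the NodeUnpublishVolume flow in the log:
// unmount the pod target path if it is still a mountpoint, then delete
// the (now empty) directory.
func unpublishVolume(targetPath string) error {
	mounter := mount.New("")
	// CleanupMountPoint unmounts the path when it is a mountpoint and
	// removes the directory afterwards, the two-step behavior visible in
	// the mount_helper_common.go log lines.
	if err := mount.CleanupMountPoint(targetPath, mounter, false /* extensiveMountPointCheck */); err != nil {
		return fmt.Errorf("failed to unmount and clean %s: %w", targetPath, err)
	}
	return nil
}

func main() {
	target := "/var/lib/kubelet/pods/52366e11-810e-4791-8756-75b48f7f5586/volumes/kubernetes.io~csi/pvc-a15f2d2c-f3d7-4efd-b500-998c03639636/mount"
	fmt.Println(unpublishVolume(target))
}
```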
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="d8fbefd7-93a2-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=6410024 I0114 00:31:57.109502 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:31:57.109530 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:31:57.109561 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:31:57.109613 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:31:57.110830 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:31:57.171013 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:32:10.977793 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:32:10.977809 1 utils.go:79] GRPC request: {} I0114 00:32:10.977847 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:32:14.300543 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:32:14.300609 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="d7729eef-93a2-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-40189792-2c30-48c2-8c2e-45b12bc49ad3" "latency"=21605355326 I0114 00:32:14.300636 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="d7729eef-93a2-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=21605453527 I0114 00:32:14.874255 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun2 by sdd under /dev/disk/azure/scsi1/ I0114 00:32:14.874295 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun2. perfProfile none accountType I0114 00:32:14.874323 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun2 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-40189792-2c30-48c2-8c2e-45b12bc49ad3/globalmount with mount options([]) I0114 00:32:14.874340 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun2" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun2]) ... skipping 59 lines ... I0114 00:32:25.273632 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:32:25Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="eaddb06d-93a2-11ed-af1a-6045bd9ae814" I0114 00:32:25.277759 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 3 milliseconds I0114 00:32:25.277951 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="eaddb06d-93a2-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=4372217 I0114 00:32:27.110636 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:32:27.110662 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:32:27.110687 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:32:27.110762 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:32:27.111786 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:32:27.172021 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:32:40.977229 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:32:40.977253 1 utils.go:79] GRPC request: {} I0114 00:32:40.977298 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:32:54.164126 1 reflector.go:559] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: Watch close - *v1beta2.AzDriverNode total 18 items received I0114 00:32:54.171309 1 round_trippers.go:553] GET https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dk8s-agentpool1-35908214-vmss000000&resourceVersion=9645&timeout=5m57s&timeoutSeconds=357&watch=true 200 OK in 7 milliseconds I0114 00:32:55.272981 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:32:55Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="fcbf3a8c-93a2-11ed-af1a-6045bd9ae814" I0114 00:32:55.277751 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 4 milliseconds I0114 00:32:55.278398 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="fcbf3a8c-93a2-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=5454521 I0114 00:32:57.110702 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:32:57.110713 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:32:57.110829 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:32:57.110877 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:32:57.112844 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:32:57.173058 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:33:10.412201 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:33:10.412276 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="e9cd5fe5-93a2-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-97a509f9-6cc3-4754-8d75-9fa4af6a1da2" "latency"=46923255463 I0114 00:33:10.412300 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="e9cd5fe5-93a2-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=46923369963 I0114 00:33:10.978028 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:33:10.978053 1 utils.go:79] GRPC request: {} I0114 00:33:10.978101 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:33:10.986626 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun1 by sdc under /dev/disk/azure/scsi1/ ... skipping 43 lines ... I0114 00:33:25.272882 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:33:25Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="0ea0da07-93a3-11ed-af1a-6045bd9ae814" I0114 00:33:25.278133 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 5 milliseconds I0114 00:33:25.278672 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="0ea0da07-93a3-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=5819322 I0114 00:33:27.111016 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:33:27.111039 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:33:27.111064 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:33:27.111185 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:33:27.113376 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:33:27.173583 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:33:40.978350 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:33:40.978367 1 utils.go:79] GRPC request: {} I0114 00:33:40.978411 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:33:46.126230 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:33:46.126289 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="065a52b6-93a3-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-dfe124a6-35d5-4f70-b651-521dfd1f95f0" "latency"=34737349702 I0114 00:33:46.126308 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="065a52b6-93a3-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=34737455704 I0114 00:33:46.702720 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdd under /dev/disk/azure/scsi1/ I0114 00:33:46.702760 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType I0114 00:33:46.702783 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-dfe124a6-35d5-4f70-b651-521dfd1f95f0/globalmount with mount options([]) I0114 00:33:46.702791 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) ... skipping 42 lines ... I0114 00:33:55.273285 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:33:55Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="20828bcc-93a3-11ed-af1a-6045bd9ae814" I0114 00:33:55.278792 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 5 milliseconds I0114 00:33:55.278966 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="20828bcc-93a3-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=5735424 I0114 00:33:57.111327 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:33:57.111483 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:33:57.111498 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:33:57.111592 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:33:57.113545 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:33:57.173676 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:34:02.200965 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities I0114 00:34:02.200988 1 utils.go:79] GRPC request: {} I0114 00:34:02.201030 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} I0114 00:34:02.201711 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetVolumeStats I0114 00:34:02.201727 1 utils.go:79] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-97a509f9-6cc3-4754-8d75-9fa4af6a1da2","volume_path":"/var/lib/kubelet/pods/78c42266-1cd0-4880-89b2-14ab1a497618/volumes/kubernetes.io~csi/pvc-97a509f9-6cc3-4754-8d75-9fa4af6a1da2/mount"} I0114 00:34:02.201803 1 utils.go:85] GRPC response: {"usage":[{"available":5179580416,"total":5196382208,"unit":1,"used":24576},{"available":327668,"total":327680,"unit":2,"used":12}]} I0114 00:34:10.977611 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:34:10.977635 1 utils.go:79] GRPC request: {} I0114 00:34:10.977677 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:34:16.510061 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:34:16.510129 1 conditionwaiter.go:50] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="1ba65c62-93a3-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-576dd22b-41d5-47c9-8b30-58144531879a" "latency"=29390712155 I0114 00:34:16.510159 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="1ba65c62-93a3-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=29390824157 I0114 00:34:17.090175 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun2 by sde under /dev/disk/azure/scsi1/ I0114 00:34:17.090219 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun2. perfProfile none accountType I0114 00:34:17.090252 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun2 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-576dd22b-41d5-47c9-8b30-58144531879a/globalmount with mount options([]) I0114 00:34:17.090265 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun2" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun2]) ... skipping 77 lines ... I0114 00:34:55.273254 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:34:55Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="4445cea4-93a3-11ed-af1a-6045bd9ae814" I0114 00:34:55.278924 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 5 milliseconds I0114 00:34:55.279559 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="4445cea4-93a3-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=6406725 I0114 00:34:57.113501 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:34:57.113545 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:34:57.113583 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:34:57.113686 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:34:57.114763 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:34:57.174942 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:35:10.977814 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:35:10.977834 1 utils.go:79] GRPC request: {} I0114 00:35:10.977870 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:35:22.856454 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:35:22.856521 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="3efb3995-93a3-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-0eed43ec-9c9c-4b6a-b9fe-03593755ff21" "latency"=36460684985 I0114 00:35:22.856551 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="3efb3995-93a3-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=36460795985 I0114 00:35:23.435653 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun3 by sde under /dev/disk/azure/scsi1/ I0114 00:35:23.435694 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun3. perfProfile none accountType I0114 00:35:23.435723 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun3 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-0eed43ec-9c9c-4b6a-b9fe-03593755ff21/globalmount with mount options([]) I0114 00:35:23.435736 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun3" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun3]) ... skipping 152 lines ... I0114 00:36:55.273260 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:36:55Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="8bcc5d21-93a3-11ed-af1a-6045bd9ae814" I0114 00:36:55.279226 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 5 milliseconds I0114 00:36:55.280131 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="8bcc5d21-93a3-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=6911729 I0114 00:36:57.115851 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:36:57.115877 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:36:57.115897 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:36:57.116004 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:36:57.116008 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:36:57.116027 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:36:57.176473 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:37:04.524734 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:37:04.524815 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="7c49d79b-93a3-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-bf5d722f-a3b4-4e26-9d57-cccefefd86f4" "latency"=35272706370 I0114 00:37:04.524849 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="7c49d79b-93a3-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=35272850870 I0114 00:37:04.528586 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:37:04.528639 1 conditionwaiter.go:50] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="7c49e4e1-93a3-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-4b6cb7cd-6397-4718-8472-79f1c6b92f49" "latency"=35276228984 I0114 00:37:04.528663 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="7c49e4e1-93a3-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=35276329785 I0114 00:37:05.097564 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun2 by sde under /dev/disk/azure/scsi1/ I0114 00:37:05.097604 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun2. perfProfile none accountType I0114 00:37:05.097615 1 utils.go:85] GRPC response: {} I0114 00:37:05.100769 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities ... skipping 117 lines ... I0114 00:38:14.018108 1 nodeprovisioner.go:164] EnsureBlockTargetReady [block]: making target file /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvc-73d8a511-be27-412a-8dfd-7b6d222fb4ef/088ba254-fb66-43e3-9231-7093f4ea44cf I0114 00:38:14.018165 1 nodeserver_v2.go:348] NodePublishVolume: mounting /dev/disk/azure/scsi1/lun0 at /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvc-73d8a511-be27-412a-8dfd-7b6d222fb4ef/088ba254-fb66-43e3-9231-7093f4ea44cf I0114 00:38:14.018184 1 mount_linux.go:183] Mounting cmd (mount) with arguments ( -o bind /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvc-73d8a511-be27-412a-8dfd-7b6d222fb4ef/088ba254-fb66-43e3-9231-7093f4ea44cf) I0114 00:38:14.019056 1 mount_linux.go:183] Mounting cmd (mount) with arguments ( -o bind,remount /dev/disk/azure/scsi1/lun0 /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvc-73d8a511-be27-412a-8dfd-7b6d222fb4ef/088ba254-fb66-43e3-9231-7093f4ea44cf) I0114 00:38:14.019814 1 nodeserver_v2.go:353] NodePublishVolume: mount /dev/disk/azure/scsi1/lun0 at /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/pvc-73d8a511-be27-412a-8dfd-7b6d222fb4ef/088ba254-fb66-43e3-9231-7093f4ea44cf successfully I0114 00:38:14.019828 1 utils.go:85] GRPC response: {} I0114 00:38:18.172122 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:38:18.172179 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="b8ac9b04-93a3-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-7607c5c6-1250-4205-8a13-c143094d1807" "latency"=7609546751 I0114 00:38:18.172208 1 crdprovisioner.go:743] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="b8ac9b04-93a3-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=7609657551 I0114 00:38:18.757663 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun1 by sdd under /dev/disk/azure/scsi1/ I0114 00:38:18.757704 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun1. perfProfile none accountType I0114 00:38:18.757736 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun1 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7607c5c6-1250-4205-8a13-c143094d1807/globalmount with mount options([]) I0114 00:38:18.757750 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun1" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun1]) ... skipping 92 lines ... I0114 00:38:55.277527 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 4 milliseconds I0114 00:38:55.277752 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="d352d207-93a3-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=5130319 I0114 00:38:57.121323 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:38:57.121373 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:38:57.121428 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:38:57.121489 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:38:57.121525 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:38:57.178775 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:39:04.669744 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:39:04.669804 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="d04cc347-93a3-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-7607c5c6-1250-4205-8a13-c143094d1807" "latency"=14469984317 I0114 00:39:04.669826 1 crdprovisioner.go:743] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="d04cc347-93a3-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=14470078217 I0114 00:39:06.261314 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0114 00:39:06.261356 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType I0114 00:39:06.261387 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7607c5c6-1250-4205-8a13-c143094d1807/globalmount with mount options([]) I0114 00:39:06.261402 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) ... skipping 61 lines ... I0114 00:39:25.273538 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:39:25Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="e53496eb-93a3-11ed-af1a-6045bd9ae814" I0114 00:39:25.279126 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 5 milliseconds I0114 00:39:25.279805 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="e53496eb-93a3-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=6327322 I0114 00:39:27.122090 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:39:27.122136 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:39:27.122179 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:39:27.122215 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:39:27.122239 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:39:27.179479 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:39:29.043250 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:39:29.043336 1 conditionwaiter.go:50] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="db56cf56-93a3-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-0313d2f1-ffe3-4f75-9aa9-33cecec75a80" "latency"=20322709022 I0114 00:39:29.043376 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="db56cf56-93a3-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=20322844822 I0114 00:39:29.610667 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun1 by sdd under /dev/disk/azure/scsi1/ I0114 00:39:29.610707 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun1. perfProfile none accountType I0114 00:39:29.610731 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun1 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-0313d2f1-ffe3-4f75-9aa9-33cecec75a80/globalmount with mount options([]) I0114 00:39:29.610738 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun1" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun1]) ... skipping 62 lines ... I0114 00:39:55.280760 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 7 milliseconds I0114 00:39:55.280968 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="f7162c78-93a3-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=7824131 I0114 00:39:57.123692 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:39:57.123726 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:39:57.123705 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:39:57.123728 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:39:57.123841 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:39:57.179996 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:40:10.977512 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:40:10.977531 1 utils.go:79] GRPC request: {} I0114 00:40:10.977576 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:40:25.272555 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:40:25Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="08f7b72e-93a4-11ed-af1a-6045bd9ae814" I0114 00:40:25.278405 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 5 milliseconds I0114 00:40:25.278679 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="08f7b72e-93a4-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=6142121 I0114 00:40:27.124681 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:40:27.124700 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:40:27.124718 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:40:27.124745 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:40:27.124788 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:40:27.181114 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:40:40.027980 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:40:40.028049 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="e7e1597e-93a3-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-7c9e913a-3e4c-475d-9c04-5f6a99d5f4cc" "latency"=70266655117 I0114 00:40:40.028080 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="e7e1597e-93a3-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=70266952518 I0114 00:40:40.977895 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:40:40.977916 1 utils.go:79] GRPC request: {} I0114 00:40:40.977965 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:40:41.242133 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities ... skipping 60 lines ... I0114 00:40:55.279324 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 5 milliseconds I0114 00:40:55.279535 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="1ad979c3-93a4-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=6203130 I0114 00:40:57.125694 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:40:57.125727 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:40:57.125740 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:40:57.125722 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:40:57.125835 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:40:57.182054 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:41:00.422916 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:41:00.422979 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="127cb920-93a4-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-d34d35b6-e89e-4791-804e-83004f8a6b2d" "latency"=19179192123 I0114 00:41:00.423009 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="127cb920-93a4-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=19179317524 I0114 00:41:01.000968 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdd under /dev/disk/azure/scsi1/ I0114 00:41:01.001006 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType I0114 00:41:01.001016 1 utils.go:85] GRPC response: {} I0114 00:41:01.006871 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities ... skipping 49 lines ... W0114 00:41:35.649223 1 mount_helper_common.go:133] Warning: "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-d34d35b6-e89e-4791-804e-83004f8a6b2d" is not a mountpoint, deleting I0114 00:41:35.649277 1 nodeserver_v2.go:262] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-d34d35b6-e89e-4791-804e-83004f8a6b2d successfully I0114 00:41:35.649284 1 utils.go:85] GRPC response: {} I0114 00:41:40.978045 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:41:40.978070 1 utils.go:79] GRPC request: {} I0114 00:41:40.978115 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:41:47.486296 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:41:47.486358 1 conditionwaiter.go:50] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="3251a2e7-93a4-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-b18f6830-2625-4a08-94c0-0f16c7005d13" "latency"=12837851040 I0114 00:41:47.486397 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="3251a2e7-93a4-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=12837987140 I0114 00:41:48.054031 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun1 by sdc under /dev/disk/azure/scsi1/ I0114 00:41:48.054065 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun1. perfProfile none accountType I0114 00:41:48.054091 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun1 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-b18f6830-2625-4a08-94c0-0f16c7005d13/globalmount with mount options([]) I0114 00:41:48.054099 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun1" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun1]) ... skipping 60 lines ... I0114 00:41:57.106911 1 nodeserver_v2.go:262] NodeUnstageVolume: unmount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-b18f6830-2625-4a08-94c0-0f16c7005d13/globalmount successfully I0114 00:41:57.106927 1 utils.go:85] GRPC response: {} I0114 00:41:57.126426 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:41:57.126444 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:41:57.126475 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:41:57.126483 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:41:57.126608 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:41:57.182759 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:42:10.977796 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:42:10.977814 1 utils.go:79] GRPC request: {} I0114 00:42:10.977857 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:42:23.074776 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:42:23.074849 1 conditionwaiter.go:50] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="3e8fd448-93a4-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-57b7afb3-ff31-4dcf-b5ee-44ba92c3892e" "latency"=27886111333 I0114 00:42:23.074877 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="3e8fd448-93a4-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=27886225134 I0114 00:42:23.698461 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun2 by sdd under /dev/disk/azure/scsi1/ I0114 00:42:23.698502 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun2. perfProfile none accountType I0114 00:42:23.698534 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun2 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-57b7afb3-ff31-4dcf-b5ee-44ba92c3892e/globalmount with mount options([]) I0114 00:42:23.698545 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun2" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun2]) ... skipping 140 lines ... I0114 00:42:55.278572 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 4 milliseconds I0114 00:42:55.279351 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="626009cf-93a4-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=5912222 I0114 00:42:57.126728 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:42:57.126774 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:42:57.126778 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:42:57.126794 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:42:57.126875 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:42:57.183182 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:43:03.679059 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:43:03.679111 1 conditionwaiter.go:50] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="570bafe1-93a4-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-24a5f487-0bdc-477f-a514-93f5e69b2c58" "latency"=27413350174 I0114 00:43:03.679131 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="570bafe1-93a4-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=27413447074 I0114 00:43:04.261068 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0114 00:43:04.261109 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType I0114 00:43:04.261134 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-24a5f487-0bdc-477f-a514-93f5e69b2c58/globalmount with mount options([]) I0114 00:43:04.261142 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) ... skipping 73 lines ... I0114 00:43:55.279657 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 6 milliseconds I0114 00:43:55.279790 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="862335a2-93a4-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=7025727 I0114 00:43:57.128015 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:43:57.128036 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:43:57.128067 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:43:57.128026 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:43:57.128128 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:43:57.184409 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:44:04.556090 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:44:04.556165 1 conditionwaiter.go:50] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="81b04cb8-93a4-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-f408fdb6-5db4-454b-b32a-bbb626b32ef9" "latency"=16747271272 I0114 00:44:04.556198 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="81b04cb8-93a4-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=16747395972 I0114 00:44:06.278053 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun1 by sdc under /dev/disk/azure/scsi1/ I0114 00:44:06.278106 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun1. perfProfile none accountType I0114 00:44:06.278138 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun1 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f408fdb6-5db4-454b-b32a-bbb626b32ef9/globalmount with mount options([]) I0114 00:44:06.278149 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun1" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun1]) ... skipping 71 lines ... I0114 00:44:39.243966 1 utils.go:78] GRPC call: /csi.v1.Node/NodeStageVolume I0114 00:44:39.243982 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-1fe58b76-30c9-4535-9bf0-ff579dc4f643/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-1fe58b76-30c9-4535-9bf0-ff579dc4f643","csi.storage.k8s.io/pvc/name":"test.csi.azure.comq4w78","csi.storage.k8s.io/pvc/namespace":"snapshotting-441","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-1fe58b76-30c9-4535-9bf0-ff579dc4f643"} I0114 00:44:39.244115 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-1fe58b76-30c9-4535-9bf0-ff579dc4f643-k8s-agentpool1-35908214-vmss000000-attachment) I0114 00:44:40.977488 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:44:40.977514 1 utils.go:79] GRPC request: {} I0114 00:44:40.977557 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:44:52.565410 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:44:52.565483 1 conditionwaiter.go:50] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="a058b42c-93a4-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-1fe58b76-30c9-4535-9bf0-ff579dc4f643" "latency"=13321291364 I0114 00:44:52.565503 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="a058b42c-93a4-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=13321406765 I0114 00:44:53.144621 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdd under /dev/disk/azure/scsi1/ I0114 00:44:53.144660 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType I0114 00:44:53.144689 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-1fe58b76-30c9-4535-9bf0-ff579dc4f643/globalmount with mount options([]) I0114 00:44:53.144705 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) ... skipping 81 lines ... I0114 00:45:31.472124 1 utils.go:78] GRPC call: /csi.v1.Node/NodeStageVolume I0114 00:45:31.472138 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-1fe58b76-30c9-4535-9bf0-ff579dc4f643/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-1fe58b76-30c9-4535-9bf0-ff579dc4f643","csi.storage.k8s.io/pvc/name":"test.csi.azure.comq4w78","csi.storage.k8s.io/pvc/namespace":"snapshotting-441","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-1fe58b76-30c9-4535-9bf0-ff579dc4f643"} I0114 00:45:31.472305 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-1fe58b76-30c9-4535-9bf0-ff579dc4f643-k8s-agentpool1-35908214-vmss000000-attachment) I0114 00:45:40.977750 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:45:40.977781 1 utils.go:79] GRPC request: {} I0114 00:45:40.977825 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:45:46.610656 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:45:46.610734 1 conditionwaiter.go:50] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="bf7a17ab-93a4-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-1fe58b76-30c9-4535-9bf0-ff579dc4f643" "latency"=15138363756 I0114 00:45:46.610771 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="bf7a17ab-93a4-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=15138476856 I0114 00:45:48.325878 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0114 00:45:48.325927 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType I0114 00:45:48.325962 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-1fe58b76-30c9-4535-9bf0-ff579dc4f643/globalmount with mount options([]) I0114 00:45:48.325975 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) ... skipping 70 lines ... I0114 00:46:25.278044 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 4 milliseconds I0114 00:46:25.278644 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="df8b6e8a-93a4-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=5584222 I0114 00:46:27.131392 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:46:27.131417 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:46:27.131446 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:46:27.131495 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:46:27.131532 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:46:27.188675 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:46:40.978069 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:46:40.978089 1 utils.go:79] GRPC request: {} I0114 00:46:40.978132 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:46:55.273387 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:46:55Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="f16d1e71-93a4-11ed-af1a-6045bd9ae814" I0114 00:46:55.279875 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 6 milliseconds I0114 00:46:55.280058 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="f16d1e71-93a4-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=6698725 I0114 00:46:57.131731 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:46:57.131945 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:46:57.131960 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:46:57.132029 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:46:57.132067 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:46:57.189345 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:47:07.527188 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:47:07.527248 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="d7117634-93a4-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-a87dde58-753f-4291-baae-e91067ea93d8" "latency"=56475251487 I0114 00:47:07.527275 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="d7117634-93a4-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=56475377887 I0114 00:47:09.282817 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun1 by sdc under /dev/disk/azure/scsi1/ I0114 00:47:09.282865 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun1. perfProfile none accountType I0114 00:47:09.282895 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun1 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a87dde58-753f-4291-baae-e91067ea93d8/globalmount with mount options([]) I0114 00:47:09.282910 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun1" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun1]) ... skipping 84 lines ... 
I0114 00:48:25.278253 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 5 milliseconds I0114 00:48:25.278819 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="2711ea7c-93a5-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=6202124 I0114 00:48:27.134334 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:48:27.134378 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:48:27.134384 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:48:27.134412 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:48:27.134445 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:48:27.191749 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:48:34.173569 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:48:34.173628 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="1fd69877-93a5-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-6ed84908-1dcd-4777-98a9-5254e19c7e79" "latency"=21033756956 I0114 00:48:34.173647 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="1fd69877-93a5-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=21033862457 I0114 00:48:35.813103 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0114 00:48:35.813143 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType I0114 00:48:35.813171 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-6ed84908-1dcd-4777-98a9-5254e19c7e79/globalmount with mount options([]) I0114 00:48:35.813186 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) ... skipping 75 lines ... 
I0114 00:48:55.279504 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 5 milliseconds I0114 00:48:55.279697 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="38f3acaa-93a5-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=6290624 I0114 00:48:57.135395 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:48:57.135556 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:48:57.135610 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:48:57.135650 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:48:57.135762 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:48:57.135781 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:48:57.192835 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:49:10.977506 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:49:10.977529 1 utils.go:79] GRPC request: {} I0114 00:49:10.977575 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:49:15.577194 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:49:15.577267 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="3805691a-93a5-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-393a9fa0-9970-4e65-9637-01019d99a6a8" "latency"=21865255494 I0114 00:49:15.577309 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="3805691a-93a5-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=21865391095 I0114 00:49:15.585278 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:49:15.585354 1 conditionwaiter.go:50] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="380568c8-93a5-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-f7aa9d47-cc4e-4952-9bc9-964f9f4712ae" "latency"=21873311221 I0114 00:49:15.585383 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="380568c8-93a5-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=21873476121 I0114 00:49:16.193695 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun1 by sdc under /dev/disk/azure/scsi1/ I0114 00:49:16.193695 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun2 by sdd under /dev/disk/azure/scsi1/ I0114 00:49:16.193744 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun1. perfProfile none accountType I0114 00:49:16.193771 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun1 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-393a9fa0-9970-4e65-9637-01019d99a6a8/globalmount with mount options([]) ... skipping 137 lines ... I0114 00:50:46.122911 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities I0114 00:50:46.122922 1 utils.go:79] GRPC request: {} I0114 00:50:46.122944 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} I0114 00:50:46.123608 1 utils.go:78] GRPC call: /csi.v1.Node/NodeStageVolume I0114 00:50:46.123622 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-fbed08e5-aa10-449e-ba7e-1943b236261f/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-fbed08e5-aa10-449e-ba7e-1943b236261f","csi.storage.k8s.io/pvc/name":"test.csi.azure.comxkv49","csi.storage.k8s.io/pvc/namespace":"provisioning-5244","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-fbed08e5-aa10-449e-ba7e-1943b236261f"} I0114 00:50:46.123791 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-fbed08e5-aa10-449e-ba7e-1943b236261f-k8s-agentpool1-35908214-vmss000000-attachment) I0114 00:50:49.795775 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:50:49.795855 1 conditionwaiter.go:50] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="7b061986-93a5-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-fbed08e5-aa10-449e-ba7e-1943b236261f" "latency"=3671989828 I0114 00:50:49.795885 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="7b061986-93a5-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=3672104428 I0114 00:50:51.557824 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0114 00:50:51.557875 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType I0114 00:50:51.557909 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-fbed08e5-aa10-449e-ba7e-1943b236261f/globalmount with mount options([]) I0114 00:50:51.557926 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) ... skipping 110 lines ... I0114 00:52:25.278887 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 5 milliseconds I0114 00:52:25.279981 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="b61f2206-93a5-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=6564823 I0114 00:52:27.137670 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:52:27.137729 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:52:27.137653 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:52:27.137792 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:52:27.138000 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:52:27.138122 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:52:27.196355 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:52:40.977164 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:52:40.977181 1 utils.go:79] GRPC request: {} I0114 00:52:40.977223 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:52:52.503240 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:52:52.503316 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="b1c7586a-93a5-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-9553e943-7fc5-4a9a-a453-064a8ebe34d3" "latency"=34516013589 I0114 00:52:52.503350 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="b1c7586a-93a5-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=34516150189 I0114 00:52:52.508588 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:52:52.508655 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="b1c72fc8-93a5-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-5159bd24-0c83-4c5d-8c8e-133ae3a4a055" "latency"=34522385512 I0114 00:52:52.508696 1 crdprovisioner.go:743] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="b1c72fc8-93a5-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=34522524213 I0114 00:52:53.074225 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdd under /dev/disk/azure/scsi1/ I0114 00:52:53.074268 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType I0114 00:52:53.074293 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9553e943-7fc5-4a9a-a453-064a8ebe34d3/globalmount with mount options([]) I0114 00:52:53.074302 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) ... skipping 119 lines ... I0114 00:53:47.195897 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities I0114 00:53:47.195910 1 utils.go:79] GRPC request: {} I0114 00:53:47.195941 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} I0114 00:53:47.196793 1 utils.go:78] GRPC call: /csi.v1.Node/NodeStageVolume I0114 00:53:47.196801 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"1"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9553e943-7fc5-4a9a-a453-064a8ebe34d3/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-9553e943-7fc5-4a9a-a453-064a8ebe34d3","csi.storage.k8s.io/pvc/name":"test.csi.azure.comb5h64","csi.storage.k8s.io/pvc/namespace":"multivolume-7926","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-9553e943-7fc5-4a9a-a453-064a8ebe34d3"} I0114 00:53:47.196972 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-9553e943-7fc5-4a9a-a453-064a8ebe34d3-k8s-agentpool1-35908214-vmss000000-attachment) I0114 00:53:52.052117 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:53:52.052193 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="e6f3ac0c-93a5-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-9553e943-7fc5-4a9a-a453-064a8ebe34d3" "latency"=4855158876 I0114 00:53:52.052233 1 crdprovisioner.go:743] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="e6f3ac0c-93a5-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=4855278277 I0114 00:53:52.052194 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:53:52.052363 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="e6f35aea-93a5-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-5159bd24-0c83-4c5d-8c8e-133ae3a4a055" "latency"=4857396885 I0114 00:53:52.052394 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="e6f35aea-93a5-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=4857525786 I0114 00:53:52.622786 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0114 00:53:52.622835 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType I0114 00:53:52.622852 1 utils.go:85] GRPC response: {} I0114 00:53:52.622975 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun1 by sdd under /dev/disk/azure/scsi1/ ... skipping 120 lines ... I0114 00:55:11.736352 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities I0114 00:55:11.736368 1 utils.go:79] GRPC request: {} I0114 00:55:11.736399 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} I0114 00:55:11.736979 1 utils.go:78] GRPC call: /csi.v1.Node/NodeStageVolume I0114 00:55:11.736996 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ae1f1343-5942-49f2-af27-671e12479f1c/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-ae1f1343-5942-49f2-af27-671e12479f1c","csi.storage.k8s.io/pvc/name":"test.csi.azure.com8tgmb","csi.storage.k8s.io/pvc/namespace":"fsgroupchangepolicy-6095","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-ae1f1343-5942-49f2-af27-671e12479f1c"} I0114 00:55:11.737182 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-ae1f1343-5942-49f2-af27-671e12479f1c-k8s-agentpool1-35908214-vmss000000-attachment) I0114 00:55:14.654065 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:55:14.654144 1 conditionwaiter.go:50] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="19577bee-93a6-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-ae1f1343-5942-49f2-af27-671e12479f1c" "latency"=2916877786 I0114 00:55:14.654186 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="19577bee-93a6-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=2917029987 I0114 00:55:15.059115 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities I0114 00:55:15.059132 1 utils.go:79] GRPC request: {} I0114 00:55:15.059167 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} I0114 00:55:15.059856 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities ... skipping 37 lines ... I0114 00:55:25.272670 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T00:55:25Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="2168d57b-93a6-11ed-af1a-6045bd9ae814" I0114 00:55:25.278000 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 5 milliseconds I0114 00:55:25.278213 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="2168d57b-93a6-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=5582117 I0114 00:55:27.145249 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:55:27.145289 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:55:27.145352 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:55:27.145362 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:55:27.145379 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:55:27.199630 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:55:27.653267 1 utils.go:78] GRPC call: /csi.v1.Node/NodeUnpublishVolume I0114 00:55:27.653295 1 utils.go:79] GRPC request: {"target_path":"/var/lib/kubelet/pods/0707bf8f-4aee-4bb3-afd0-d880f63defb4/volumes/kubernetes.io~csi/pvc-ae1f1343-5942-49f2-af27-671e12479f1c/mount","volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-ae1f1343-5942-49f2-af27-671e12479f1c"} I0114 00:55:27.653353 1 nodeserver_v2.go:369] NodeUnpublishVolume: unmounting volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-ae1f1343-5942-49f2-af27-671e12479f1c on /var/lib/kubelet/pods/0707bf8f-4aee-4bb3-afd0-d880f63defb4/volumes/kubernetes.io~csi/pvc-ae1f1343-5942-49f2-af27-671e12479f1c/mount I0114 00:55:27.653388 1 mount_helper_common.go:99] "/var/lib/kubelet/pods/0707bf8f-4aee-4bb3-afd0-d880f63defb4/volumes/kubernetes.io~csi/pvc-ae1f1343-5942-49f2-af27-671e12479f1c/mount" is a mountpoint, unmounting ... skipping 19 lines ... I0114 00:55:55.278787 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 5 milliseconds I0114 00:55:55.278968 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="334a776d-93a6-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=6361624 I0114 00:55:57.146978 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 00:55:57.147046 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:55:57.147087 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:55:57.147126 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:55:57.147254 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 00:55:57.200294 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 00:56:00.071700 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:56:00.071763 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="1b52a1ed-93a6-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-6a2ec275-b917-4d0e-bb51-2c7d18391605" "latency"=45010887243 I0114 00:56:00.071785 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="1b52a1ed-93a6-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=45010989744 I0114 00:56:00.646238 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun1 by sdd under /dev/disk/azure/scsi1/ I0114 00:56:00.646288 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun1. perfProfile none accountType I0114 00:56:00.646301 1 utils.go:85] GRPC response: {} I0114 00:56:00.651621 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities ... skipping 46 lines ... 
I0114 00:56:43.458644 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities I0114 00:56:43.458663 1 utils.go:79] GRPC request: {} I0114 00:56:43.458716 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} I0114 00:56:43.459662 1 utils.go:78] GRPC call: /csi.v1.Node/NodeStageVolume I0114 00:56:43.459681 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/staging/pvc-41cf2e30-c213-47b6-b8b3-588a4dcac589","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-41cf2e30-c213-47b6-b8b3-588a4dcac589","csi.storage.k8s.io/pvc/name":"pvc-ccb78","csi.storage.k8s.io/pvc/namespace":"provisioning-7514","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-41cf2e30-c213-47b6-b8b3-588a4dcac589"} I0114 00:56:43.459845 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-41cf2e30-c213-47b6-b8b3-588a4dcac589-k8s-agentpool1-35908214-vmss000000-attachment) I0114 00:56:48.396509 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:56:48.396574 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="500340dc-93a6-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-41cf2e30-c213-47b6-b8b3-588a4dcac589" "latency"=4936654597 I0114 00:56:48.396606 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="500340dc-93a6-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=4936778397 I0114 00:56:50.042875 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0114 00:56:50.042923 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType I0114 00:56:50.042937 1 utils.go:85] GRPC response: {} I0114 00:56:50.047441 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities ... skipping 97 lines ... 
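The conditionwatcher.go / conditionwaiter.go lines trace how NodeStageVolume blocks until its AzVolumeAttachment is ready: a condition function is registered for the attachment object ("Adding a condition function for azvolumeattachments ..."), each informer update, including the periodic "forcing resync" passes, re-evaluates it, and waitForLunOrAttach returns once the result flips from succeeded: false to succeeded: true. A generic sketch of that wait pattern follows; the attachmentEvent type and the channel feeding it are illustrative stand-ins for the driver's informer plumbing, not its actual types.

package main

import (
        "context"
        "errors"
        "fmt"
        "time"
)

// attachmentEvent is a stand-in for an AzVolumeAttachment update delivered by
// an informer; only the fields needed for this example are modeled.
type attachmentEvent struct {
        VolumeName string
        Attached   bool
        LUN        int32
}

// conditionFunc mirrors the "condition function" idea from conditionwatcher.go:
// it is evaluated against every event and reports whether the wait is done.
type conditionFunc func(attachmentEvent) (succeeded bool, err error)

// waitForCondition re-evaluates cond on every event until it succeeds, the
// context expires, or the event stream closes.
func waitForCondition(ctx context.Context, events <-chan attachmentEvent, cond conditionFunc) (attachmentEvent, error) {
        for {
                select {
                case <-ctx.Done():
                        return attachmentEvent{}, ctx.Err()
                case ev, ok := <-events:
                        if !ok {
                                return attachmentEvent{}, errors.New("event stream closed before condition succeeded")
                        }
                        done, err := cond(ev)
                        if err != nil {
                                return attachmentEvent{}, err
                        }
                        if done {
                                return ev, nil
                        }
                }
        }
}

func main() {
        events := make(chan attachmentEvent, 2)
        // Simulate an informer delivering a not-yet-attached update followed by
        // the attached one, like the succeeded:false / succeeded:true pairs above.
        events <- attachmentEvent{VolumeName: "pvc-example", Attached: false}
        events <- attachmentEvent{VolumeName: "pvc-example", Attached: true, LUN: 0}

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        ev, err := waitForCondition(ctx, events, func(e attachmentEvent) (bool, error) {
                return e.Attached, nil
        })
        if err != nil {
                fmt.Println("wait failed:", err)
                return
        }
        fmt.Printf("volume %s attached at LUN %d\n", ev.VolumeName, ev.LUN)
}

In the driver's log the condition is keyed per attachment, which is why one "Adding a condition function for azvolumeattachments (<volume>-<node>-attachment)" line appears for each volume being staged.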
I0114 00:58:36.987597 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-d66cbf0e-9885-4d5a-b6c2-807b873aaf8c-k8s-agentpool1-35908214-vmss000000-attachment) I0114 00:58:40.977222 1 utils.go:78] GRPC call: /csi.v1.Identity/Probe I0114 00:58:40.977247 1 utils.go:79] GRPC request: {} I0114 00:58:40.977294 1 utils.go:85] GRPC response: {"ready":{"value":true}} I0114 00:58:46.120418 1 reflector.go:559] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: Watch close - *v1beta2.AzDriverNode total 64 items received I0114 00:58:46.122965 1 round_trippers.go:553] GET https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/azdrivernodes?allowWatchBookmarks=true&resourceVersion=24805&timeout=9m22s&timeoutSeconds=562&watch=true 200 OK in 2 milliseconds I0114 00:58:51.940329 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:58:51.940386 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="93ae36e8-93a6-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-d66cbf0e-9885-4d5a-b6c2-807b873aaf8c" "latency"=14952737347 I0114 00:58:51.940406 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="93ae36e8-93a6-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=14952827547 I0114 00:58:51.947203 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 00:58:51.947263 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="93907064-93a6-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-89f28cec-b6ae-4d85-9729-a028acff3ebc" "latency"=15154732995 I0114 00:58:51.947298 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="93907064-93a6-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=15154848995 I0114 00:58:52.526999 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0114 00:58:52.527051 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. 
perfProfile none accountType I0114 00:58:52.527076 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-89f28cec-b6ae-4d85-9729-a028acff3ebc/globalmount with mount options([]) I0114 00:58:52.527084 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) ... skipping 155 lines ... I0114 01:00:18.014249 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"1"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-266cf92a-ddad-4a43-8712-6dc83a986e65/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-266cf92a-ddad-4a43-8712-6dc83a986e65","csi.storage.k8s.io/pvc/name":"test.csi.azure.comzcw2s","csi.storage.k8s.io/pvc/namespace":"multivolume-1515","requestedsizegib":"5","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-266cf92a-ddad-4a43-8712-6dc83a986e65"} I0114 01:00:18.014401 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-266cf92a-ddad-4a43-8712-6dc83a986e65-k8s-agentpool1-35908214-vmss000000-attachment) I0114 01:00:25.273002 1 azuredisk_v2.go:578] "msg"="Updating heartbeat" "LastHeartbeatTime"="2023-01-14T01:00:25Z" "ReadyForVolumeAllocation"=true "StatusMessage"="Driver node healthy." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="d4393d5a-93a6-11ed-af1a-6045bd9ae814" I0114 01:00:25.280634 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 7 milliseconds I0114 01:00:25.280833 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." 
"disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="d4393d5a-93a6-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=7948032 I0114 01:00:27.154373 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 01:00:27.154459 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 01:00:27.154480 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 01:00:27.154491 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 01:00:27.154366 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 01:00:27.154518 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 01:00:27.205893 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 01:00:30.016786 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 01:00:30.016861 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="cfe5a26e-93a6-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-bbf0c35e-3a0f-4357-82e5-cfa417e5d363" "latency"=12002663431 I0114 01:00:30.016911 1 crdprovisioner.go:743] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="cfe5a26e-93a6-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=12002829831 I0114 01:00:30.022159 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 01:00:30.022205 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="cfe5ae80-93a6-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-266cf92a-ddad-4a43-8712-6dc83a986e65" "latency"=12007760256 I0114 01:00:30.022225 1 crdprovisioner.go:743] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="cfe5ae80-93a6-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=12007841756 I0114 01:00:30.611928 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun1 by sdd under /dev/disk/azure/scsi1/ I0114 01:00:30.611980 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun1. perfProfile none accountType I0114 01:00:30.612013 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun1 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-266cf92a-ddad-4a43-8712-6dc83a986e65/globalmount with mount options([nouuid]) I0114 01:00:30.612031 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun1" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun1]) ... skipping 321 lines ... I0114 01:09:49.944726 1 utils.go:78] GRPC call: /csi.v1.Node/NodeGetCapabilities I0114 01:09:49.944740 1 utils.go:79] GRPC request: {} I0114 01:09:49.944769 1 utils.go:85] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]} I0114 01:09:49.945427 1 utils.go:78] GRPC call: /csi.v1.Node/NodeStageVolume I0114 01:09:49.945439 1 utils.go:79] GRPC request: {"publish_context":{"LUN":"0"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-fc545277-266d-4271-9e44-b44e40e8f3ed/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":7}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-fc545277-266d-4271-9e44-b44e40e8f3ed","csi.storage.k8s.io/pvc/name":"persistent-storage-statefulset-azuredisk-0","csi.storage.k8s.io/pvc/namespace":"default","requestedsizegib":"10","skuName":"StandardSSD_LRS","storage.kubernetes.io/csiProvisionerIdentity":"1673655664912-8081-test.csi.azure.com"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rpwnaldb/providers/Microsoft.Compute/disks/pvc-fc545277-266d-4271-9e44-b44e40e8f3ed"} I0114 01:09:49.945671 1 conditionwatcher.go:99] Adding a condition function for azvolumeattachments (pvc-fc545277-266d-4271-9e44-b44e40e8f3ed-k8s-agentpool1-35908214-vmss000000-attachment) I0114 01:09:53.558910 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 01:09:53.558982 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="24cb77a7-93a8-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-fc545277-266d-4271-9e44-b44e40e8f3ed" "latency"=3613232861 I0114 01:09:53.559015 1 crdprovisioner.go:743] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="24cb77a7-93a8-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=3613361161 I0114 01:09:55.203533 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun0 by sdc under /dev/disk/azure/scsi1/ I0114 01:09:55.203573 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun0. perfProfile none accountType StandardSSD_LRS I0114 01:09:55.203597 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun0 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-fc545277-266d-4271-9e44-b44e40e8f3ed/globalmount with mount options([]) I0114 01:09:55.203605 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun0" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun0]) ... skipping 80 lines ... I0114 01:11:25.279449 1 round_trippers.go:553] PUT https://10.0.0.1:443/apis/disk.csi.azure.com/v1beta2/namespaces/azure-disk-csi/azdrivernodes/k8s-agentpool1-35908214-vmss000000/status 200 OK in 5 milliseconds I0114 01:11:25.280247 1 azuredisk_v2.go:565] "msg"="Workflow completed with success." "disk.csi.azure.com/namespace"="azure-disk-csi" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="5d9d5279-93a8-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).updateAzDriverNodeHeartbeat" "latency"=6872726 I0114 01:11:27.176127 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 01:11:27.176150 1 reflector.go:281] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:132: forcing resync I0114 01:11:27.176169 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 01:11:27.176175 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 01:11:27.176237 1 conditionwatcher.go:173] condition result: succeeded: false, error: <nil> I0114 01:11:27.218444 1 reflector.go:281] sigs.k8s.io/azuredisk-csi-driver/pkg/apis/client/informers/externalversions/factory.go:132: forcing resync I0114 01:11:31.082033 1 conditionwatcher.go:173] condition result: succeeded: true, error: <nil> I0114 01:11:31.082103 1 conditionwaiter.go:50] "msg"="Workflow completed with success." "caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).waitForLunOrAttach" "disk.csi.azure.com/node-name"="k8s-agentpool1-35908214-vmss000000" "disk.csi.azure.com/request-id"="4f52881b-93a8-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requested-role"="Primary" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/watcher.(*ConditionWaiter).Wait" "disk.csi.azure.com/volume-name"="pvc-46b8b57e-eaf1-4ca3-a90d-78e4132f75a6" "latency"=29786884906 I0114 01:11:31.082128 1 crdprovisioner.go:743] "msg"="Workflow completed with success." 
"caller"="sigs.k8s.io/azuredisk-csi-driver/pkg/azuredisk.(*DriverV2).NodeStageVolume" "disk.csi.azure.com/request-id"="4f52881b-93a8-11ed-af1a-6045bd9ae814" "disk.csi.azure.com/requester-name"="sigs.k8s.io/azuredisk-csi-driver/pkg/provisioner.(*CrdProvisioner).WaitForAttach" "latency"=29787008707 I0114 01:11:31.670287 1 nodeprovisioner_unix.go:250] azureDisk - found /dev/disk/azure/scsi1/lun1 by sdd under /dev/disk/azure/scsi1/ I0114 01:11:31.670332 1 nodeserver_v2.go:170] NodeStageVolume: perf optimization is disabled for /dev/disk/azure/scsi1/lun1. perfProfile none accountType StandardSSD_LRS I0114 01:11:31.670365 1 nodeserver_v2.go:211] NodeStageVolume: formatting /dev/disk/azure/scsi1/lun1 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-46b8b57e-eaf1-4ca3-a90d-78e4132f75a6/globalmount with mount options([]) I0114 01:11:31.670386 1 mount_linux.go:487] Attempting to determine if disk "/dev/disk/azure/scsi1/lun1" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/azure/scsi1/lun1]) ... skipping 520 lines ... # HELP go_gc_heap_objects_objects Number of objects, live or unswept, occupying heap memory. # TYPE go_gc_heap_objects_objects gauge go_gc_heap_objects_objects 107392 # HELP go_gc_heap_tiny_allocs_objects_total Count of small allocations that are packed together into blocks. These allocations are counted separately from other allocations because each individual allocation is not tracked by the runtime, only their block. Each block is already accounted for in allocs-by-size and frees-by-size. # TYPE go_gc_heap_tiny_allocs_objects_total counter go_gc_heap_tiny_allocs_objects_total 469996 # HELP go_gc_limiter_last_enabled_gc_cycle GC cycle the last time the GC CPU limiter was enabled. This metric is useful for diagnosing the root cause of an out-of-memory error, because the limiter trades memory for CPU time when the GC's CPU time gets too high. This is most likely to occur with use of SetMemoryLimit. The first GC cycle is cycle 1, so a value of 0 indicates that it was never enabled. # TYPE go_gc_limiter_last_enabled_gc_cycle gauge go_gc_limiter_last_enabled_gc_cycle 0 # HELP go_gc_pauses_seconds Distribution individual GC-related stop-the-world pause latencies. # TYPE go_gc_pauses_seconds histogram go_gc_pauses_seconds_bucket{le="9.999999999999999e-10"} 0 go_gc_pauses_seconds_bucket{le="9.999999999999999e-09"} 0 ... skipping 175 lines ... # HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes. # TYPE process_virtual_memory_max_bytes gauge process_virtual_memory_max_bytes 1.8446744073709552e+19 # HELP registered_metric_total [ALPHA] The count of registered metrics broken by stability level and deprecation version. 
# TYPE registered_metric_total counter registered_metric_total{deprecated_version="",stability_level="ALPHA"} 16 make: *** [Makefile:341: e2e-test] Error 1 2023/01/14 01:13:01 process.go:155: Step 'make e2e-test' finished in 1h5m57.229760526s 2023/01/14 01:13:01 aksengine_helpers.go:425: downloading /root/tmp3639031375/log-dump.sh from https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump.sh 2023/01/14 01:13:01 util.go:70: curl https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump.sh 2023/01/14 01:13:01 process.go:153: Running: chmod +x /root/tmp3639031375/log-dump.sh 2023/01/14 01:13:01 process.go:155: Step 'chmod +x /root/tmp3639031375/log-dump.sh' finished in 1.368904ms 2023/01/14 01:13:01 aksengine_helpers.go:425: downloading /root/tmp3639031375/log-dump-daemonset.yaml from https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump-daemonset.yaml ... skipping 75 lines ... ssh key file /root/.ssh/id_rsa does not exist. Exiting. 2023/01/14 01:14:01 process.go:155: Step 'bash -c /root/tmp3639031375/win-ci-logs-collector.sh kubetest-rpwnaldb.westeurope.cloudapp.azure.com /root/tmp3639031375 /root/.ssh/id_rsa' finished in 4.081635ms 2023/01/14 01:14:01 aksengine.go:1141: Deleting resource group: kubetest-rpwnaldb. 2023/01/14 01:21:12 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml. 2023/01/14 01:21:12 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}" 2023/01/14 01:21:12 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 333.873216ms 2023/01/14 01:21:12 main.go:328: Something went wrong: encountered 1 errors: [error during make e2e-test: exit status 2] + EXIT_VALUE=1 + set +o xtrace Cleaning up after docker in docker. ================================================================================ Cleaning up after docker 6f45c69228e5 ... skipping 4 lines ...
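After the metrics dump, the job exits make e2e-test with exit status 2, dumps cluster logs, deletes the kubetest-rpwnaldb resource group, and saves the per-step results to /logs/artifacts/junit_runner.xml. That file is the quickest way to see which step actually failed. The sketch below is one way to pull the failures out of it; it assumes the common JUnit layout that kubetest typically emits (testcase elements with an optional failure child), and the path is taken from the log above.

package main

import (
        "encoding/xml"
        "fmt"
        "log"
        "os"
)

// Minimal JUnit model: only the elements needed to list failures are mapped.
// No XMLName is set so the root element name (testsuite vs. testsuites) does
// not matter as long as testcase elements are direct children.
type testSuite struct {
        TestCases []testCase `xml:"testcase"`
}

type testCase struct {
        Name    string   `xml:"name,attr"`
        Time    string   `xml:"time,attr"`
        Failure *failure `xml:"failure"`
}

type failure struct {
        // kubetest usually puts the failure text in the element body; adjust
        // if your runner stores it in a message attribute instead.
        Message string `xml:",chardata"`
}

func main() {
        // Path taken from the log above; adjust when running elsewhere.
        data, err := os.ReadFile("/logs/artifacts/junit_runner.xml")
        if err != nil {
                log.Fatalf("reading junit file: %v", err)
        }
        var suite testSuite
        if err := xml.Unmarshal(data, &suite); err != nil {
                log.Fatalf("parsing junit file: %v", err)
        }
        for _, tc := range suite.TestCases {
                if tc.Failure != nil {
                        fmt.Printf("FAILED %s (%ss): %s\n", tc.Name, tc.Time, tc.Failure.Message)
                }
        }
}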