Result: success
Tests: 0 failed / 12 succeeded
Started: 2022-09-07 20:05
Elapsed: 49m23s
Revision:
Uploader: crier

No Test Failures!


12 Passed Tests

47 Skipped Tests

Error lines from build-log.txt

... skipping 628 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
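For reference, the create-identity-secret.sh output above amounts to a check, create, and label sequence against the management cluster. A minimal sketch under assumptions follows: the clientSecret key, the default namespace, the AZURE_CLIENT_SECRET variable, and the clusterctl move label are illustrative guesses, not read from the script.

# Hedged sketch (assumed key, namespace, and label) of creating the AzureClusterIdentity secret.
kubectl get secret cluster-identity-secret -n default >/dev/null 2>&1 || \
  kubectl create secret generic cluster-identity-secret -n default \
    --from-literal=clientSecret="${AZURE_CLIENT_SECRET}"
kubectl label secret cluster-identity-secret -n default \
  clusterctl.cluster.x-k8s.io/move-hierarchy=true --overwrite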
# Create customized cloud provider configs
./hack/create-custom-cloud-provider-config.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
... skipping 130 lines ...
# Wait for the kubeconfig to become available.
timeout --foreground 300 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets | grep capz-kddm5f-kubeconfig; do sleep 1; done"
capz-kddm5f-kubeconfig                 cluster.x-k8s.io/secret   1      1s
# Get kubeconfig and store it locally.
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets capz-kddm5f-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
timeout --foreground 600 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
error: the server doesn't have a resource type "nodes"
capz-kddm5f-control-plane-pds8m   NotReady   control-plane,master   11s   v1.21.15-rc.0.4+2fef630dd216dd
run "/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
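The two commands above are the generic pattern for reaching a Cluster API workload cluster: poll until the <cluster>-kubeconfig secret exists, then decode its data.value field. A sketch of the same pattern, using jsonpath instead of jq for the same field:

# Sketch of the kubeconfig retrieval pattern above; CLUSTER_NAME matches this run but is otherwise a placeholder.
CLUSTER_NAME=capz-kddm5f
timeout --foreground 300 bash -c \
  "until kubectl get secret ${CLUSTER_NAME}-kubeconfig >/dev/null 2>&1; do sleep 1; done"
kubectl get secret "${CLUSTER_NAME}-kubeconfig" -o jsonpath='{.data.value}' | base64 --decode > ./kubeconfig
kubectl --kubeconfig=./kubeconfig get nodes

When clusterctl is available, "clusterctl get kubeconfig ${CLUSTER_NAME}" retrieves the same kubeconfig in one step.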
Waiting for 1 control plane machine(s), 2 worker machine(s), and  windows machine(s) to become Ready
node/capz-kddm5f-control-plane-pds8m condition met
node/capz-kddm5f-mp-0000000 condition met
... skipping 46 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  7 20:22:03.839: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-7jw7g" in namespace "azuredisk-8081" to be "Succeeded or Failed"
Sep  7 20:22:03.867: INFO: Pod "azuredisk-volume-tester-7jw7g": Phase="Pending", Reason="", readiness=false. Elapsed: 27.889618ms
Sep  7 20:22:05.897: INFO: Pod "azuredisk-volume-tester-7jw7g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058341846s
Sep  7 20:22:07.928: INFO: Pod "azuredisk-volume-tester-7jw7g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088830515s
Sep  7 20:22:09.958: INFO: Pod "azuredisk-volume-tester-7jw7g": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118772445s
Sep  7 20:22:11.989: INFO: Pod "azuredisk-volume-tester-7jw7g": Phase="Pending", Reason="", readiness=false. Elapsed: 8.149720751s
Sep  7 20:22:14.019: INFO: Pod "azuredisk-volume-tester-7jw7g": Phase="Pending", Reason="", readiness=false. Elapsed: 10.180348854s
... skipping 3 lines ...
Sep  7 20:22:22.143: INFO: Pod "azuredisk-volume-tester-7jw7g": Phase="Pending", Reason="", readiness=false. Elapsed: 18.303569661s
Sep  7 20:22:24.179: INFO: Pod "azuredisk-volume-tester-7jw7g": Phase="Pending", Reason="", readiness=false. Elapsed: 20.339481264s
Sep  7 20:22:26.210: INFO: Pod "azuredisk-volume-tester-7jw7g": Phase="Pending", Reason="", readiness=false. Elapsed: 22.371140797s
Sep  7 20:22:28.244: INFO: Pod "azuredisk-volume-tester-7jw7g": Phase="Running", Reason="", readiness=true. Elapsed: 24.405288242s
Sep  7 20:22:30.277: INFO: Pod "azuredisk-volume-tester-7jw7g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.438328768s
STEP: Saw pod success
Sep  7 20:22:30.278: INFO: Pod "azuredisk-volume-tester-7jw7g" satisfied condition "Succeeded or Failed"
Sep  7 20:22:30.278: INFO: deleting Pod "azuredisk-8081"/"azuredisk-volume-tester-7jw7g"
Sep  7 20:22:30.326: INFO: Pod azuredisk-volume-tester-7jw7g has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-7jw7g in namespace azuredisk-8081
STEP: validating provisioned PV
STEP: checking the PV
Sep  7 20:22:30.428: INFO: deleting PVC "azuredisk-8081"/"pvc-h6ws8"
Sep  7 20:22:30.428: INFO: Deleting PersistentVolumeClaim "pvc-h6ws8"
STEP: waiting for claim's PV "pvc-104af85c-daca-456c-bc65-0518882990ac" to be deleted
Sep  7 20:22:30.463: INFO: Waiting up to 10m0s for PersistentVolume pvc-104af85c-daca-456c-bc65-0518882990ac to get deleted
Sep  7 20:22:30.490: INFO: PersistentVolume pvc-104af85c-daca-456c-bc65-0518882990ac found and phase=Released (27.481371ms)
Sep  7 20:22:35.519: INFO: PersistentVolume pvc-104af85c-daca-456c-bc65-0518882990ac found and phase=Failed (5.05656406s)
Sep  7 20:22:40.550: INFO: PersistentVolume pvc-104af85c-daca-456c-bc65-0518882990ac found and phase=Failed (10.087470991s)
Sep  7 20:22:45.581: INFO: PersistentVolume pvc-104af85c-daca-456c-bc65-0518882990ac found and phase=Failed (15.11852564s)
Sep  7 20:22:50.616: INFO: PersistentVolume pvc-104af85c-daca-456c-bc65-0518882990ac found and phase=Failed (20.153555573s)
Sep  7 20:22:55.649: INFO: PersistentVolume pvc-104af85c-daca-456c-bc65-0518882990ac found and phase=Failed (25.185632088s)
Sep  7 20:23:00.678: INFO: PersistentVolume pvc-104af85c-daca-456c-bc65-0518882990ac was removed
Sep  7 20:23:00.679: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8081 to be removed
Sep  7 20:23:00.709: INFO: Claim "azuredisk-8081" in namespace "pvc-h6ws8" doesn't exist in the system
Sep  7 20:23:00.709: INFO: deleting StorageClass azuredisk-8081-kubernetes.io-azure-disk-dynamic-sc-rrnmk
Sep  7 20:23:00.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-8081" for this suite.
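Each dynamic-provisioning case above follows the same flow: create a StorageClass and PVC, run a pod that writes to the volume and exits, assert the pod reaches Succeeded with the expected log output, then delete everything and wait for the PV to disappear. A rough kubectl equivalent, with all names as placeholders and the manifests left out:

# Hedged sketch of the per-test flow; namespace, pod, and PVC names are placeholders.
NS=azuredisk-demo POD=azuredisk-volume-tester PVC=azuredisk-pvc
kubectl apply -n "$NS" -f testcase.yaml        # StorageClass, PVC, and a pod that echoes "hello world"
until [ "$(kubectl -n "$NS" get pod "$POD" -o jsonpath='{.status.phase}')" = "Succeeded" ]; do sleep 5; done
kubectl -n "$NS" logs "$POD"                   # expect the pod's output, e.g. "hello world"
PV=$(kubectl -n "$NS" get pvc "$PVC" -o jsonpath='{.spec.volumeName}')
kubectl -n "$NS" delete pod "$POD" pvc "$PVC"
until ! kubectl get pv "$PV" >/dev/null 2>&1; do sleep 5; done   # the PV shows Released/Failed briefly before removal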
... skipping 80 lines ...
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod has 'FailedMount' event
Sep  7 20:23:24.890: INFO: deleting Pod "azuredisk-5466"/"azuredisk-volume-tester-72pj7"
Sep  7 20:23:24.935: INFO: Error getting logs for pod azuredisk-volume-tester-72pj7: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-72pj7)
STEP: Deleting pod azuredisk-volume-tester-72pj7 in namespace azuredisk-5466
STEP: validating provisioned PV
STEP: checking the PV
Sep  7 20:23:25.032: INFO: deleting PVC "azuredisk-5466"/"pvc-rcrkd"
Sep  7 20:23:25.032: INFO: Deleting PersistentVolumeClaim "pvc-rcrkd"
STEP: waiting for claim's PV "pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8" to be deleted
Sep  7 20:23:25.062: INFO: Waiting up to 10m0s for PersistentVolume pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8 to get deleted
Sep  7 20:23:25.091: INFO: PersistentVolume pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8 found and phase=Bound (28.516577ms)
Sep  7 20:23:30.123: INFO: PersistentVolume pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8 found and phase=Bound (5.060957961s)
Sep  7 20:23:35.154: INFO: PersistentVolume pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8 found and phase=Failed (10.092141372s)
Sep  7 20:23:40.185: INFO: PersistentVolume pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8 found and phase=Failed (15.123170267s)
Sep  7 20:23:45.219: INFO: PersistentVolume pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8 found and phase=Failed (20.156516549s)
Sep  7 20:23:50.252: INFO: PersistentVolume pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8 found and phase=Failed (25.189355809s)
Sep  7 20:23:55.285: INFO: PersistentVolume pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8 found and phase=Failed (30.22283082s)
Sep  7 20:24:00.316: INFO: PersistentVolume pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8 found and phase=Failed (35.253877914s)
Sep  7 20:24:05.350: INFO: PersistentVolume pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8 found and phase=Failed (40.287264119s)
Sep  7 20:24:10.380: INFO: PersistentVolume pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8 found and phase=Failed (45.317572812s)
Sep  7 20:24:15.409: INFO: PersistentVolume pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8 was removed
Sep  7 20:24:15.409: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5466 to be removed
Sep  7 20:24:15.439: INFO: Claim "azuredisk-5466" in namespace "pvc-rcrkd" doesn't exist in the system
Sep  7 20:24:15.439: INFO: deleting StorageClass azuredisk-5466-kubernetes.io-azure-disk-dynamic-sc-flzhb
Sep  7 20:24:15.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5466" for this suite.
... skipping 22 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  7 20:24:16.207: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-p8lt7" in namespace "azuredisk-2790" to be "Succeeded or Failed"
Sep  7 20:24:16.235: INFO: Pod "azuredisk-volume-tester-p8lt7": Phase="Pending", Reason="", readiness=false. Elapsed: 28.221227ms
Sep  7 20:24:18.265: INFO: Pod "azuredisk-volume-tester-p8lt7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058905701s
Sep  7 20:24:20.297: INFO: Pod "azuredisk-volume-tester-p8lt7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090007807s
Sep  7 20:24:22.327: INFO: Pod "azuredisk-volume-tester-p8lt7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120843403s
Sep  7 20:24:24.358: INFO: Pod "azuredisk-volume-tester-p8lt7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.151083981s
Sep  7 20:24:26.388: INFO: Pod "azuredisk-volume-tester-p8lt7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.181221193s
Sep  7 20:24:28.418: INFO: Pod "azuredisk-volume-tester-p8lt7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.211162604s
Sep  7 20:24:30.449: INFO: Pod "azuredisk-volume-tester-p8lt7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.242681794s
Sep  7 20:24:32.481: INFO: Pod "azuredisk-volume-tester-p8lt7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.274010339s
STEP: Saw pod success
Sep  7 20:24:32.481: INFO: Pod "azuredisk-volume-tester-p8lt7" satisfied condition "Succeeded or Failed"
Sep  7 20:24:32.481: INFO: deleting Pod "azuredisk-2790"/"azuredisk-volume-tester-p8lt7"
Sep  7 20:24:32.513: INFO: Pod azuredisk-volume-tester-p8lt7 has the following logs: e2e-test

STEP: Deleting pod azuredisk-volume-tester-p8lt7 in namespace azuredisk-2790
STEP: validating provisioned PV
STEP: checking the PV
Sep  7 20:24:32.612: INFO: deleting PVC "azuredisk-2790"/"pvc-h5d7l"
Sep  7 20:24:32.612: INFO: Deleting PersistentVolumeClaim "pvc-h5d7l"
STEP: waiting for claim's PV "pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e" to be deleted
Sep  7 20:24:32.643: INFO: Waiting up to 10m0s for PersistentVolume pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e to get deleted
Sep  7 20:24:32.670: INFO: PersistentVolume pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e found and phase=Released (27.624574ms)
Sep  7 20:24:37.700: INFO: PersistentVolume pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e found and phase=Failed (5.057830167s)
Sep  7 20:24:42.731: INFO: PersistentVolume pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e found and phase=Failed (10.088140779s)
Sep  7 20:24:47.762: INFO: PersistentVolume pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e found and phase=Failed (15.119357059s)
Sep  7 20:24:52.795: INFO: PersistentVolume pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e found and phase=Failed (20.152691849s)
Sep  7 20:24:57.830: INFO: PersistentVolume pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e found and phase=Failed (25.187020976s)
Sep  7 20:25:02.860: INFO: PersistentVolume pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e found and phase=Failed (30.216984453s)
Sep  7 20:25:07.890: INFO: PersistentVolume pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e found and phase=Failed (35.246885504s)
Sep  7 20:25:12.919: INFO: PersistentVolume pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e found and phase=Failed (40.276538822s)
Sep  7 20:25:17.950: INFO: PersistentVolume pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e was removed
Sep  7 20:25:17.950: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-2790 to be removed
Sep  7 20:25:17.978: INFO: Claim "azuredisk-2790" in namespace "pvc-h5d7l" doesn't exist in the system
Sep  7 20:25:17.978: INFO: deleting StorageClass azuredisk-2790-kubernetes.io-azure-disk-dynamic-sc-g7blk
Sep  7 20:25:18.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-2790" for this suite.
... skipping 22 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with an error
Sep  7 20:25:18.783: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-rv55j" in namespace "azuredisk-5356" to be "Error status code"
Sep  7 20:25:18.811: INFO: Pod "azuredisk-volume-tester-rv55j": Phase="Pending", Reason="", readiness=false. Elapsed: 27.895496ms
Sep  7 20:25:20.841: INFO: Pod "azuredisk-volume-tester-rv55j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057839316s
Sep  7 20:25:22.871: INFO: Pod "azuredisk-volume-tester-rv55j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0880619s
Sep  7 20:25:24.900: INFO: Pod "azuredisk-volume-tester-rv55j": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117046126s
Sep  7 20:25:26.930: INFO: Pod "azuredisk-volume-tester-rv55j": Phase="Pending", Reason="", readiness=false. Elapsed: 8.147508422s
Sep  7 20:25:28.963: INFO: Pod "azuredisk-volume-tester-rv55j": Phase="Pending", Reason="", readiness=false. Elapsed: 10.179679359s
Sep  7 20:25:30.993: INFO: Pod "azuredisk-volume-tester-rv55j": Phase="Pending", Reason="", readiness=false. Elapsed: 12.210338617s
Sep  7 20:25:33.025: INFO: Pod "azuredisk-volume-tester-rv55j": Phase="Pending", Reason="", readiness=false. Elapsed: 14.242061562s
Sep  7 20:25:35.056: INFO: Pod "azuredisk-volume-tester-rv55j": Phase="Pending", Reason="", readiness=false. Elapsed: 16.272862387s
Sep  7 20:25:37.087: INFO: Pod "azuredisk-volume-tester-rv55j": Phase="Pending", Reason="", readiness=false. Elapsed: 18.304047076s
Sep  7 20:25:39.117: INFO: Pod "azuredisk-volume-tester-rv55j": Phase="Pending", Reason="", readiness=false. Elapsed: 20.334065239s
Sep  7 20:25:41.150: INFO: Pod "azuredisk-volume-tester-rv55j": Phase="Pending", Reason="", readiness=false. Elapsed: 22.367163817s
Sep  7 20:25:43.182: INFO: Pod "azuredisk-volume-tester-rv55j": Phase="Failed", Reason="", readiness=false. Elapsed: 24.398834588s
STEP: Saw pod failure
Sep  7 20:25:43.182: INFO: Pod "azuredisk-volume-tester-rv55j" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Sep  7 20:25:43.214: INFO: deleting Pod "azuredisk-5356"/"azuredisk-volume-tester-rv55j"
Sep  7 20:25:43.246: INFO: Pod azuredisk-volume-tester-rv55j has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azuredisk-volume-tester-rv55j in namespace azuredisk-5356
STEP: validating provisioned PV
STEP: checking the PV
Sep  7 20:25:43.346: INFO: deleting PVC "azuredisk-5356"/"pvc-w7jkm"
Sep  7 20:25:43.346: INFO: Deleting PersistentVolumeClaim "pvc-w7jkm"
STEP: waiting for claim's PV "pvc-97cda27c-8713-4160-be1d-2f60669b4e89" to be deleted
Sep  7 20:25:43.378: INFO: Waiting up to 10m0s for PersistentVolume pvc-97cda27c-8713-4160-be1d-2f60669b4e89 to get deleted
Sep  7 20:25:43.409: INFO: PersistentVolume pvc-97cda27c-8713-4160-be1d-2f60669b4e89 found and phase=Released (30.826062ms)
Sep  7 20:25:48.443: INFO: PersistentVolume pvc-97cda27c-8713-4160-be1d-2f60669b4e89 found and phase=Failed (5.065261161s)
Sep  7 20:25:53.478: INFO: PersistentVolume pvc-97cda27c-8713-4160-be1d-2f60669b4e89 found and phase=Failed (10.099869471s)
Sep  7 20:25:58.512: INFO: PersistentVolume pvc-97cda27c-8713-4160-be1d-2f60669b4e89 found and phase=Failed (15.133483249s)
Sep  7 20:26:03.543: INFO: PersistentVolume pvc-97cda27c-8713-4160-be1d-2f60669b4e89 found and phase=Failed (20.164333815s)
Sep  7 20:26:08.576: INFO: PersistentVolume pvc-97cda27c-8713-4160-be1d-2f60669b4e89 found and phase=Failed (25.197599189s)
Sep  7 20:26:13.609: INFO: PersistentVolume pvc-97cda27c-8713-4160-be1d-2f60669b4e89 found and phase=Failed (30.230861761s)
Sep  7 20:26:18.641: INFO: PersistentVolume pvc-97cda27c-8713-4160-be1d-2f60669b4e89 was removed
Sep  7 20:26:18.641: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5356 to be removed
Sep  7 20:26:18.670: INFO: Claim "azuredisk-5356" in namespace "pvc-w7jkm" doesn't exist in the system
Sep  7 20:26:18.670: INFO: deleting StorageClass azuredisk-5356-kubernetes.io-azure-disk-dynamic-sc-xnsgq
Sep  7 20:26:18.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5356" for this suite.
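The negative case above mounts the volume read-only, expects the pod to exit with an error, and then checks the logs for the file-system error. The assertion half looks roughly like this, with names as placeholders:

# Hedged sketch of the failure-path check; namespace and pod names are placeholders.
NS=azuredisk-demo POD=azuredisk-volume-tester-ro
until [ "$(kubectl -n "$NS" get pod "$POD" -o jsonpath='{.status.phase}')" = "Failed" ]; do sleep 5; done
kubectl -n "$NS" logs "$POD" | grep -q "Read-only file system" && echo "saw expected failure"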
... skipping 54 lines ...
Sep  7 20:27:28.076: INFO: PersistentVolume pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57 found and phase=Bound (10.092400485s)
Sep  7 20:27:33.108: INFO: PersistentVolume pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57 found and phase=Bound (15.124763372s)
Sep  7 20:27:38.141: INFO: PersistentVolume pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57 found and phase=Bound (20.15776095s)
Sep  7 20:27:43.175: INFO: PersistentVolume pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57 found and phase=Bound (25.192065326s)
Sep  7 20:27:48.205: INFO: PersistentVolume pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57 found and phase=Bound (30.221601641s)
Sep  7 20:27:53.234: INFO: PersistentVolume pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57 found and phase=Bound (35.251139878s)
Sep  7 20:27:58.264: INFO: PersistentVolume pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57 found and phase=Failed (40.281007374s)
Sep  7 20:28:03.294: INFO: PersistentVolume pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57 found and phase=Failed (45.310336201s)
Sep  7 20:28:08.324: INFO: PersistentVolume pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57 found and phase=Failed (50.340787758s)
Sep  7 20:28:13.364: INFO: PersistentVolume pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57 found and phase=Failed (55.380353098s)
Sep  7 20:28:18.396: INFO: PersistentVolume pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57 found and phase=Failed (1m0.413181376s)
Sep  7 20:28:23.425: INFO: PersistentVolume pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57 found and phase=Failed (1m5.442197568s)
Sep  7 20:28:28.455: INFO: PersistentVolume pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57 found and phase=Failed (1m10.47210542s)
Sep  7 20:28:33.490: INFO: PersistentVolume pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57 was removed
Sep  7 20:28:33.490: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5194 to be removed
Sep  7 20:28:33.518: INFO: Claim "azuredisk-5194" in namespace "pvc-vqg9p" doesn't exist in the system
Sep  7 20:28:33.518: INFO: deleting StorageClass azuredisk-5194-kubernetes.io-azure-disk-dynamic-sc-hvvkg
Sep  7 20:28:33.548: INFO: deleting Pod "azuredisk-5194"/"azuredisk-volume-tester-lhj99"
Sep  7 20:28:33.587: INFO: Pod azuredisk-volume-tester-lhj99 has the following logs: 
... skipping 10 lines ...
Sep  7 20:28:48.825: INFO: PersistentVolume pvc-70feb334-5699-4b12-9393-0f7e2055411f found and phase=Bound (15.117740045s)
Sep  7 20:28:53.856: INFO: PersistentVolume pvc-70feb334-5699-4b12-9393-0f7e2055411f found and phase=Bound (20.148858598s)
Sep  7 20:28:58.888: INFO: PersistentVolume pvc-70feb334-5699-4b12-9393-0f7e2055411f found and phase=Bound (25.180355235s)
Sep  7 20:29:03.921: INFO: PersistentVolume pvc-70feb334-5699-4b12-9393-0f7e2055411f found and phase=Bound (30.213704978s)
Sep  7 20:29:08.951: INFO: PersistentVolume pvc-70feb334-5699-4b12-9393-0f7e2055411f found and phase=Bound (35.243156555s)
Sep  7 20:29:13.982: INFO: PersistentVolume pvc-70feb334-5699-4b12-9393-0f7e2055411f found and phase=Bound (40.27465431s)
Sep  7 20:29:19.015: INFO: PersistentVolume pvc-70feb334-5699-4b12-9393-0f7e2055411f found and phase=Failed (45.307145229s)
Sep  7 20:29:24.043: INFO: PersistentVolume pvc-70feb334-5699-4b12-9393-0f7e2055411f found and phase=Failed (50.335777615s)
Sep  7 20:29:29.076: INFO: PersistentVolume pvc-70feb334-5699-4b12-9393-0f7e2055411f found and phase=Failed (55.368858132s)
Sep  7 20:29:34.106: INFO: PersistentVolume pvc-70feb334-5699-4b12-9393-0f7e2055411f found and phase=Failed (1m0.398757709s)
Sep  7 20:29:39.139: INFO: PersistentVolume pvc-70feb334-5699-4b12-9393-0f7e2055411f found and phase=Failed (1m5.431337566s)
Sep  7 20:29:44.169: INFO: PersistentVolume pvc-70feb334-5699-4b12-9393-0f7e2055411f was removed
Sep  7 20:29:44.169: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5194 to be removed
Sep  7 20:29:44.197: INFO: Claim "azuredisk-5194" in namespace "pvc-c5tvr" doesn't exist in the system
Sep  7 20:29:44.197: INFO: deleting StorageClass azuredisk-5194-kubernetes.io-azure-disk-dynamic-sc-dv45c
Sep  7 20:29:44.228: INFO: deleting Pod "azuredisk-5194"/"azuredisk-volume-tester-258kk"
Sep  7 20:29:44.277: INFO: Pod azuredisk-volume-tester-258kk has the following logs: 
... skipping 9 lines ...
Sep  7 20:29:54.492: INFO: PersistentVolume pvc-cab43c89-0996-4b0c-8fe7-d38292953101 found and phase=Bound (10.093873866s)
Sep  7 20:29:59.524: INFO: PersistentVolume pvc-cab43c89-0996-4b0c-8fe7-d38292953101 found and phase=Bound (15.125657279s)
Sep  7 20:30:04.557: INFO: PersistentVolume pvc-cab43c89-0996-4b0c-8fe7-d38292953101 found and phase=Bound (20.158824568s)
Sep  7 20:30:09.590: INFO: PersistentVolume pvc-cab43c89-0996-4b0c-8fe7-d38292953101 found and phase=Bound (25.192382377s)
Sep  7 20:30:14.620: INFO: PersistentVolume pvc-cab43c89-0996-4b0c-8fe7-d38292953101 found and phase=Bound (30.22178187s)
Sep  7 20:30:19.650: INFO: PersistentVolume pvc-cab43c89-0996-4b0c-8fe7-d38292953101 found and phase=Bound (35.251841026s)
Sep  7 20:30:24.679: INFO: PersistentVolume pvc-cab43c89-0996-4b0c-8fe7-d38292953101 found and phase=Failed (40.281461567s)
Sep  7 20:30:29.710: INFO: PersistentVolume pvc-cab43c89-0996-4b0c-8fe7-d38292953101 found and phase=Failed (45.31232433s)
Sep  7 20:30:34.740: INFO: PersistentVolume pvc-cab43c89-0996-4b0c-8fe7-d38292953101 found and phase=Failed (50.341488256s)
Sep  7 20:30:39.770: INFO: PersistentVolume pvc-cab43c89-0996-4b0c-8fe7-d38292953101 found and phase=Failed (55.372390467s)
Sep  7 20:30:44.802: INFO: PersistentVolume pvc-cab43c89-0996-4b0c-8fe7-d38292953101 found and phase=Failed (1m0.404346355s)
Sep  7 20:30:49.836: INFO: PersistentVolume pvc-cab43c89-0996-4b0c-8fe7-d38292953101 found and phase=Failed (1m5.437865091s)
Sep  7 20:30:54.865: INFO: PersistentVolume pvc-cab43c89-0996-4b0c-8fe7-d38292953101 found and phase=Failed (1m10.467258963s)
Sep  7 20:30:59.898: INFO: PersistentVolume pvc-cab43c89-0996-4b0c-8fe7-d38292953101 was removed
Sep  7 20:30:59.898: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5194 to be removed
Sep  7 20:30:59.927: INFO: Claim "azuredisk-5194" in namespace "pvc-bzbds" doesn't exist in the system
Sep  7 20:30:59.927: INFO: deleting StorageClass azuredisk-5194-kubernetes.io-azure-disk-dynamic-sc-mjds2
Sep  7 20:30:59.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5194" for this suite.
... skipping 63 lines ...
Sep  7 20:33:57.597: INFO: PersistentVolume pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9 found and phase=Bound (15.123869624s)
Sep  7 20:34:02.628: INFO: PersistentVolume pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9 found and phase=Bound (20.155141533s)
Sep  7 20:34:07.661: INFO: PersistentVolume pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9 found and phase=Bound (25.18788809s)
Sep  7 20:34:12.694: INFO: PersistentVolume pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9 found and phase=Bound (30.220830214s)
Sep  7 20:34:17.723: INFO: PersistentVolume pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9 found and phase=Bound (35.249891176s)
Sep  7 20:34:22.755: INFO: PersistentVolume pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9 found and phase=Bound (40.282315978s)
Sep  7 20:34:27.788: INFO: PersistentVolume pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9 found and phase=Failed (45.314606563s)
Sep  7 20:34:32.818: INFO: PersistentVolume pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9 found and phase=Failed (50.344649437s)
Sep  7 20:34:37.846: INFO: PersistentVolume pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9 found and phase=Failed (55.373228802s)
Sep  7 20:34:42.876: INFO: PersistentVolume pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9 found and phase=Failed (1m0.403045752s)
Sep  7 20:34:47.906: INFO: PersistentVolume pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9 was removed
Sep  7 20:34:47.906: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-1353 to be removed
Sep  7 20:34:47.933: INFO: Claim "azuredisk-1353" in namespace "pvc-mdl9v" doesn't exist in the system
Sep  7 20:34:47.934: INFO: deleting StorageClass azuredisk-1353-kubernetes.io-azure-disk-dynamic-sc-xcv4x
Sep  7 20:34:47.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-1353" for this suite.
... skipping 161 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  7 20:35:05.187: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-kjdfn" in namespace "azuredisk-59" to be "Succeeded or Failed"
Sep  7 20:35:05.216: INFO: Pod "azuredisk-volume-tester-kjdfn": Phase="Pending", Reason="", readiness=false. Elapsed: 28.628354ms
Sep  7 20:35:07.246: INFO: Pod "azuredisk-volume-tester-kjdfn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059042744s
Sep  7 20:35:09.277: INFO: Pod "azuredisk-volume-tester-kjdfn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090133976s
Sep  7 20:35:11.308: INFO: Pod "azuredisk-volume-tester-kjdfn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120943173s
Sep  7 20:35:13.341: INFO: Pod "azuredisk-volume-tester-kjdfn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.154177502s
Sep  7 20:35:15.380: INFO: Pod "azuredisk-volume-tester-kjdfn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.192758238s
... skipping 24 lines ...
Sep  7 20:36:06.147: INFO: Pod "azuredisk-volume-tester-kjdfn": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.959513373s
Sep  7 20:36:08.178: INFO: Pod "azuredisk-volume-tester-kjdfn": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.99081145s
Sep  7 20:36:10.212: INFO: Pod "azuredisk-volume-tester-kjdfn": Phase="Pending", Reason="", readiness=false. Elapsed: 1m5.024711771s
Sep  7 20:36:12.243: INFO: Pod "azuredisk-volume-tester-kjdfn": Phase="Pending", Reason="", readiness=false. Elapsed: 1m7.055969888s
Sep  7 20:36:14.274: INFO: Pod "azuredisk-volume-tester-kjdfn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m9.086825938s
STEP: Saw pod success
Sep  7 20:36:14.274: INFO: Pod "azuredisk-volume-tester-kjdfn" satisfied condition "Succeeded or Failed"
Sep  7 20:36:14.274: INFO: deleting Pod "azuredisk-59"/"azuredisk-volume-tester-kjdfn"
Sep  7 20:36:14.319: INFO: Pod azuredisk-volume-tester-kjdfn has the following logs: hello world
hello world
hello world

STEP: Deleting pod azuredisk-volume-tester-kjdfn in namespace azuredisk-59
STEP: validating provisioned PV
STEP: checking the PV
Sep  7 20:36:14.411: INFO: deleting PVC "azuredisk-59"/"pvc-rjgxm"
Sep  7 20:36:14.411: INFO: Deleting PersistentVolumeClaim "pvc-rjgxm"
STEP: waiting for claim's PV "pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d" to be deleted
Sep  7 20:36:14.440: INFO: Waiting up to 10m0s for PersistentVolume pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d to get deleted
Sep  7 20:36:14.470: INFO: PersistentVolume pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d found and phase=Bound (30.171683ms)
Sep  7 20:36:19.503: INFO: PersistentVolume pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d found and phase=Failed (5.062993508s)
Sep  7 20:36:24.532: INFO: PersistentVolume pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d found and phase=Failed (10.091572237s)
Sep  7 20:36:29.564: INFO: PersistentVolume pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d found and phase=Failed (15.123727371s)
Sep  7 20:36:34.593: INFO: PersistentVolume pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d found and phase=Failed (20.153141206s)
Sep  7 20:36:39.623: INFO: PersistentVolume pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d found and phase=Failed (25.18252848s)
Sep  7 20:36:44.656: INFO: PersistentVolume pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d found and phase=Failed (30.21602292s)
Sep  7 20:36:49.689: INFO: PersistentVolume pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d found and phase=Failed (35.249269553s)
Sep  7 20:36:54.718: INFO: PersistentVolume pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d found and phase=Failed (40.277926331s)
Sep  7 20:36:59.746: INFO: PersistentVolume pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d found and phase=Failed (45.306424401s)
Sep  7 20:37:04.775: INFO: PersistentVolume pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d found and phase=Failed (50.334989444s)
Sep  7 20:37:09.804: INFO: PersistentVolume pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d found and phase=Failed (55.364067006s)
Sep  7 20:37:14.845: INFO: PersistentVolume pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d was removed
Sep  7 20:37:14.845: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-59 to be removed
Sep  7 20:37:14.873: INFO: Claim "azuredisk-59" in namespace "pvc-rjgxm" doesn't exist in the system
Sep  7 20:37:14.873: INFO: deleting StorageClass azuredisk-59-kubernetes.io-azure-disk-dynamic-sc-szwgk
STEP: validating provisioned PV
STEP: checking the PV
... skipping 51 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  7 20:37:36.196: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-pkb9w" in namespace "azuredisk-2546" to be "Succeeded or Failed"
Sep  7 20:37:36.224: INFO: Pod "azuredisk-volume-tester-pkb9w": Phase="Pending", Reason="", readiness=false. Elapsed: 27.723482ms
Sep  7 20:37:38.253: INFO: Pod "azuredisk-volume-tester-pkb9w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057589484s
Sep  7 20:37:40.284: INFO: Pod "azuredisk-volume-tester-pkb9w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087726474s
Sep  7 20:37:42.316: INFO: Pod "azuredisk-volume-tester-pkb9w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119877512s
Sep  7 20:37:44.346: INFO: Pod "azuredisk-volume-tester-pkb9w": Phase="Pending", Reason="", readiness=false. Elapsed: 8.150671638s
Sep  7 20:37:46.377: INFO: Pod "azuredisk-volume-tester-pkb9w": Phase="Pending", Reason="", readiness=false. Elapsed: 10.181384539s
... skipping 9 lines ...
Sep  7 20:38:06.689: INFO: Pod "azuredisk-volume-tester-pkb9w": Phase="Pending", Reason="", readiness=false. Elapsed: 30.493031394s
Sep  7 20:38:08.720: INFO: Pod "azuredisk-volume-tester-pkb9w": Phase="Pending", Reason="", readiness=false. Elapsed: 32.523966689s
Sep  7 20:38:10.752: INFO: Pod "azuredisk-volume-tester-pkb9w": Phase="Pending", Reason="", readiness=false. Elapsed: 34.555815424s
Sep  7 20:38:12.782: INFO: Pod "azuredisk-volume-tester-pkb9w": Phase="Pending", Reason="", readiness=false. Elapsed: 36.586191004s
Sep  7 20:38:14.814: INFO: Pod "azuredisk-volume-tester-pkb9w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.618185675s
STEP: Saw pod success
Sep  7 20:38:14.814: INFO: Pod "azuredisk-volume-tester-pkb9w" satisfied condition "Succeeded or Failed"
Sep  7 20:38:14.814: INFO: deleting Pod "azuredisk-2546"/"azuredisk-volume-tester-pkb9w"
Sep  7 20:38:14.854: INFO: Pod azuredisk-volume-tester-pkb9w has the following logs: 100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 0.072734 seconds, 1.3GB/s
hello world

... skipping 2 lines ...
STEP: checking the PV
Sep  7 20:38:14.948: INFO: deleting PVC "azuredisk-2546"/"pvc-8fdc7"
Sep  7 20:38:14.948: INFO: Deleting PersistentVolumeClaim "pvc-8fdc7"
STEP: waiting for claim's PV "pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799" to be deleted
Sep  7 20:38:14.977: INFO: Waiting up to 10m0s for PersistentVolume pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799 to get deleted
Sep  7 20:38:15.005: INFO: PersistentVolume pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799 found and phase=Released (27.218082ms)
Sep  7 20:38:20.034: INFO: PersistentVolume pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799 found and phase=Failed (5.056841671s)
Sep  7 20:38:25.067: INFO: PersistentVolume pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799 found and phase=Failed (10.089929126s)
Sep  7 20:38:30.099: INFO: PersistentVolume pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799 found and phase=Failed (15.12119623s)
Sep  7 20:38:35.128: INFO: PersistentVolume pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799 found and phase=Failed (20.150806378s)
Sep  7 20:38:40.159: INFO: PersistentVolume pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799 found and phase=Failed (25.181601184s)
Sep  7 20:38:45.190: INFO: PersistentVolume pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799 found and phase=Failed (30.212424355s)
Sep  7 20:38:50.222: INFO: PersistentVolume pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799 found and phase=Failed (35.244864108s)
Sep  7 20:38:55.256: INFO: PersistentVolume pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799 found and phase=Failed (40.278587655s)
Sep  7 20:39:00.285: INFO: PersistentVolume pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799 was removed
Sep  7 20:39:00.285: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-2546 to be removed
Sep  7 20:39:00.314: INFO: Claim "azuredisk-2546" in namespace "pvc-8fdc7" doesn't exist in the system
Sep  7 20:39:00.314: INFO: deleting StorageClass azuredisk-2546-kubernetes.io-azure-disk-dynamic-sc-lzcwk
STEP: validating provisioned PV
STEP: checking the PV
... skipping 97 lines ...
STEP: creating a PVC
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  7 20:39:12.636: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-xjwhp" in namespace "azuredisk-8582" to be "Succeeded or Failed"
Sep  7 20:39:12.666: INFO: Pod "azuredisk-volume-tester-xjwhp": Phase="Pending", Reason="", readiness=false. Elapsed: 30.305693ms
Sep  7 20:39:14.695: INFO: Pod "azuredisk-volume-tester-xjwhp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058953579s
Sep  7 20:39:16.727: INFO: Pod "azuredisk-volume-tester-xjwhp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090640085s
Sep  7 20:39:18.757: INFO: Pod "azuredisk-volume-tester-xjwhp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121348966s
Sep  7 20:39:20.790: INFO: Pod "azuredisk-volume-tester-xjwhp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.153539935s
Sep  7 20:39:22.821: INFO: Pod "azuredisk-volume-tester-xjwhp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.184556233s
... skipping 25 lines ...
Sep  7 20:40:15.644: INFO: Pod "azuredisk-volume-tester-xjwhp": Phase="Pending", Reason="", readiness=false. Elapsed: 1m3.008216599s
Sep  7 20:40:17.680: INFO: Pod "azuredisk-volume-tester-xjwhp": Phase="Pending", Reason="", readiness=false. Elapsed: 1m5.043846542s
Sep  7 20:40:19.710: INFO: Pod "azuredisk-volume-tester-xjwhp": Phase="Pending", Reason="", readiness=false. Elapsed: 1m7.074295545s
Sep  7 20:40:21.742: INFO: Pod "azuredisk-volume-tester-xjwhp": Phase="Pending", Reason="", readiness=false. Elapsed: 1m9.106099836s
Sep  7 20:40:23.772: INFO: Pod "azuredisk-volume-tester-xjwhp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m11.135600308s
STEP: Saw pod success
Sep  7 20:40:23.772: INFO: Pod "azuredisk-volume-tester-xjwhp" satisfied condition "Succeeded or Failed"
Sep  7 20:40:23.772: INFO: deleting Pod "azuredisk-8582"/"azuredisk-volume-tester-xjwhp"
Sep  7 20:40:23.811: INFO: Pod azuredisk-volume-tester-xjwhp has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-xjwhp in namespace azuredisk-8582
STEP: validating provisioned PV
STEP: checking the PV
Sep  7 20:40:23.906: INFO: deleting PVC "azuredisk-8582"/"pvc-klggx"
Sep  7 20:40:23.906: INFO: Deleting PersistentVolumeClaim "pvc-klggx"
STEP: waiting for claim's PV "pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222" to be deleted
Sep  7 20:40:23.935: INFO: Waiting up to 10m0s for PersistentVolume pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222 to get deleted
Sep  7 20:40:23.963: INFO: PersistentVolume pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222 found and phase=Released (27.367506ms)
Sep  7 20:40:28.995: INFO: PersistentVolume pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222 found and phase=Failed (5.059246035s)
Sep  7 20:40:34.036: INFO: PersistentVolume pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222 found and phase=Failed (10.100186554s)
Sep  7 20:40:39.068: INFO: PersistentVolume pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222 found and phase=Failed (15.132158185s)
Sep  7 20:40:44.101: INFO: PersistentVolume pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222 found and phase=Failed (20.165504101s)
Sep  7 20:40:49.130: INFO: PersistentVolume pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222 found and phase=Failed (25.194609588s)
Sep  7 20:40:54.162: INFO: PersistentVolume pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222 found and phase=Failed (30.226820525s)
Sep  7 20:40:59.194: INFO: PersistentVolume pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222 found and phase=Failed (35.258655296s)
Sep  7 20:41:04.228: INFO: PersistentVolume pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222 found and phase=Failed (40.292034401s)
Sep  7 20:41:09.259: INFO: PersistentVolume pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222 found and phase=Failed (45.32348721s)
Sep  7 20:41:14.292: INFO: PersistentVolume pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222 was removed
Sep  7 20:41:14.292: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8582 to be removed
Sep  7 20:41:14.320: INFO: Claim "azuredisk-8582" in namespace "pvc-klggx" doesn't exist in the system
Sep  7 20:41:14.320: INFO: deleting StorageClass azuredisk-8582-kubernetes.io-azure-disk-dynamic-sc-58hz7
STEP: validating provisioned PV
STEP: checking the PV
... skipping 10 lines ...
STEP: validating provisioned PV
STEP: checking the PV
Sep  7 20:41:24.638: INFO: deleting PVC "azuredisk-8582"/"pvc-scm8s"
Sep  7 20:41:24.638: INFO: Deleting PersistentVolumeClaim "pvc-scm8s"
STEP: waiting for claim's PV "pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc" to be deleted
Sep  7 20:41:24.667: INFO: Waiting up to 10m0s for PersistentVolume pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc to get deleted
Sep  7 20:41:24.697: INFO: PersistentVolume pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc found and phase=Failed (29.469356ms)
Sep  7 20:41:29.726: INFO: PersistentVolume pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc found and phase=Failed (5.058512056s)
Sep  7 20:41:34.757: INFO: PersistentVolume pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc found and phase=Failed (10.089910691s)
Sep  7 20:41:39.787: INFO: PersistentVolume pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc found and phase=Failed (15.119537825s)
Sep  7 20:41:44.817: INFO: PersistentVolume pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc was removed
Sep  7 20:41:44.817: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8582 to be removed
Sep  7 20:41:44.844: INFO: Claim "azuredisk-8582" in namespace "pvc-scm8s" doesn't exist in the system
Sep  7 20:41:44.844: INFO: deleting StorageClass azuredisk-8582-kubernetes.io-azure-disk-dynamic-sc-d4skd
Sep  7 20:41:44.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-8582" for this suite.
... skipping 150 lines ...
STEP: checking the PV
Sep  7 20:42:57.595: INFO: deleting PVC "azuredisk-7051"/"pvc-cz9mk"
Sep  7 20:42:57.595: INFO: Deleting PersistentVolumeClaim "pvc-cz9mk"
STEP: waiting for claim's PV "pvc-bcd6299f-8994-49eb-8dea-eec839dbc110" to be deleted
Sep  7 20:42:57.624: INFO: Waiting up to 10m0s for PersistentVolume pvc-bcd6299f-8994-49eb-8dea-eec839dbc110 to get deleted
Sep  7 20:42:57.653: INFO: PersistentVolume pvc-bcd6299f-8994-49eb-8dea-eec839dbc110 found and phase=Released (28.690124ms)
Sep  7 20:43:02.685: INFO: PersistentVolume pvc-bcd6299f-8994-49eb-8dea-eec839dbc110 found and phase=Failed (5.060958981s)
Sep  7 20:43:07.718: INFO: PersistentVolume pvc-bcd6299f-8994-49eb-8dea-eec839dbc110 found and phase=Failed (10.094008653s)
Sep  7 20:43:12.749: INFO: PersistentVolume pvc-bcd6299f-8994-49eb-8dea-eec839dbc110 found and phase=Failed (15.125228016s)
Sep  7 20:43:17.785: INFO: PersistentVolume pvc-bcd6299f-8994-49eb-8dea-eec839dbc110 was removed
Sep  7 20:43:17.785: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-7051 to be removed
Sep  7 20:43:17.812: INFO: Claim "azuredisk-7051" in namespace "pvc-cz9mk" doesn't exist in the system
Sep  7 20:43:17.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-7051" for this suite.

... skipping 218 lines ...

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
Pre-Provisioned [single-az] 
  should fail when maxShares is invalid [disk.csi.azure.com][windows]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:164
STEP: Creating a kubernetes client
Sep  7 20:44:47.859: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azuredisk
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
... skipping 3 lines ...

S [SKIPPING] [0.260 seconds]
Pre-Provisioned
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:37
  [single-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:69
    should fail when maxShares is invalid [disk.csi.azure.com][windows] [It]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:164

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
... skipping 248 lines ...
I0907 20:16:41.221928       1 tlsconfig.go:178] loaded client CA [1/"client-ca-bundle::/etc/kubernetes/pki/ca.crt,request-header::/etc/kubernetes/pki/front-proxy-ca.crt"]: "kubernetes" [] issuer="<self>" (2022-09-07 20:09:41 +0000 UTC to 2032-09-04 20:14:41 +0000 UTC (now=2022-09-07 20:16:41.221917223 +0000 UTC))
I0907 20:16:41.222228       1 tlsconfig.go:200] loaded serving cert ["Generated self signed cert"]: "localhost@1662581800" [serving] validServingFor=[127.0.0.1,127.0.0.1,localhost] issuer="localhost-ca@1662581800" (2022-09-07 19:16:39 +0000 UTC to 2023-09-07 19:16:39 +0000 UTC (now=2022-09-07 20:16:41.222210727 +0000 UTC))
I0907 20:16:41.222489       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1662581801" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1662581801" (2022-09-07 19:16:40 +0000 UTC to 2023-09-07 19:16:40 +0000 UTC (now=2022-09-07 20:16:41.22247383 +0000 UTC))
I0907 20:16:41.222546       1 secure_serving.go:202] Serving securely on 127.0.0.1:10257
I0907 20:16:41.222600       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0907 20:16:41.223196       1 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-controller-manager...
E0907 20:16:43.950964       1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0907 20:16:43.951239       1 leaderelection.go:248] failed to acquire lease kube-system/kube-controller-manager
I0907 20:16:47.951341       1 leaderelection.go:253] successfully acquired lease kube-system/kube-controller-manager
I0907 20:16:47.952019       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-kddm5f-control-plane-pds8m_27c3851b-bc56-4f65-a75f-18dcb7cf857f became leader"
I0907 20:16:48.190122       1 request.go:600] Waited for 52.6252ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/admissionregistration.k8s.io/v1?timeout=32s
I0907 20:16:48.233392       1 request.go:600] Waited for 95.885895ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s
I0907 20:16:48.284007       1 request.go:600] Waited for 146.485437ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/apiextensions.k8s.io/v1?timeout=32s
I0907 20:16:48.333514       1 request.go:600] Waited for 195.958604ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
... skipping 40 lines ...
I0907 20:16:48.687997       1 reflector.go:219] Starting reflector *v1.Node (23h57m13.32206641s) from k8s.io/client-go/informers/factory.go:134
I0907 20:16:48.688013       1 reflector.go:255] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0907 20:16:48.688159       1 reflector.go:219] Starting reflector *v1.Secret (23h57m13.32206641s) from k8s.io/client-go/informers/factory.go:134
I0907 20:16:48.688295       1 reflector.go:255] Listing and watching *v1.Secret from k8s.io/client-go/informers/factory.go:134
I0907 20:16:48.688437       1 reflector.go:219] Starting reflector *v1.ServiceAccount (23h57m13.32206641s) from k8s.io/client-go/informers/factory.go:134
I0907 20:16:48.688481       1 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:134
W0907 20:16:48.716737       1 azure_config.go:52] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0907 20:16:48.716772       1 controllermanager.go:559] Starting "nodeipam"
W0907 20:16:48.716783       1 controllermanager.go:566] Skipping "nodeipam"
I0907 20:16:48.716790       1 controllermanager.go:559] Starting "endpointslice"
I0907 20:16:48.723144       1 controllermanager.go:574] Started "endpointslice"
I0907 20:16:48.723169       1 controllermanager.go:559] Starting "replicaset"
I0907 20:16:48.723478       1 endpointslice_controller.go:256] Starting endpoint slice controller
... skipping 56 lines ...
I0907 20:16:49.094374       1 plugins.go:639] Loaded volume plugin "kubernetes.io/portworx-volume"
I0907 20:16:49.094401       1 plugins.go:639] Loaded volume plugin "kubernetes.io/scaleio"
I0907 20:16:49.094413       1 plugins.go:639] Loaded volume plugin "kubernetes.io/storageos"
I0907 20:16:49.094432       1 plugins.go:639] Loaded volume plugin "kubernetes.io/fc"
I0907 20:16:49.094448       1 plugins.go:639] Loaded volume plugin "kubernetes.io/iscsi"
I0907 20:16:49.094458       1 plugins.go:639] Loaded volume plugin "kubernetes.io/rbd"
I0907 20:16:49.094518       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0907 20:16:49.094534       1 plugins.go:639] Loaded volume plugin "kubernetes.io/csi"
I0907 20:16:49.094715       1 controllermanager.go:574] Started "attachdetach"
I0907 20:16:49.094734       1 controllermanager.go:559] Starting "pv-protection"
I0907 20:16:49.094789       1 attach_detach_controller.go:328] Starting attach detach controller
I0907 20:16:49.094802       1 shared_informer.go:240] Waiting for caches to sync for attach detach
I0907 20:16:49.094866       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-kddm5f-control-plane-pds8m"
W0907 20:16:49.094926       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-kddm5f-control-plane-pds8m" does not exist
I0907 20:16:49.242364       1 controllermanager.go:574] Started "pv-protection"
I0907 20:16:49.242757       1 controllermanager.go:559] Starting "service"
I0907 20:16:49.242722       1 pv_protection_controller.go:83] Starting PV protection controller
I0907 20:16:49.243132       1 shared_informer.go:240] Waiting for caches to sync for PV protection
I0907 20:16:49.392136       1 controllermanager.go:574] Started "service"
I0907 20:16:49.392438       1 controllermanager.go:559] Starting "resourcequota"
... skipping 144 lines ...
I0907 20:16:51.594515       1 plugins.go:639] Loaded volume plugin "kubernetes.io/azure-file"
I0907 20:16:51.594522       1 plugins.go:639] Loaded volume plugin "kubernetes.io/flocker"
I0907 20:16:51.594538       1 plugins.go:639] Loaded volume plugin "kubernetes.io/portworx-volume"
I0907 20:16:51.594551       1 plugins.go:639] Loaded volume plugin "kubernetes.io/scaleio"
I0907 20:16:51.594561       1 plugins.go:639] Loaded volume plugin "kubernetes.io/local-volume"
I0907 20:16:51.594569       1 plugins.go:639] Loaded volume plugin "kubernetes.io/storageos"
I0907 20:16:51.594600       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0907 20:16:51.594618       1 plugins.go:639] Loaded volume plugin "kubernetes.io/csi"
I0907 20:16:51.594671       1 controllermanager.go:574] Started "persistentvolume-binder"
I0907 20:16:51.594680       1 controllermanager.go:559] Starting "endpointslicemirroring"
I0907 20:16:51.594731       1 pv_controller_base.go:308] Starting persistent volume controller
I0907 20:16:51.594739       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
I0907 20:16:51.742862       1 controllermanager.go:574] Started "endpointslicemirroring"
... skipping 376 lines ...
I0907 20:16:53.798834       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:2, del:0, key:"kube-system/coredns-558bd4d5db", timestamp:time.Time{wall:0xc0be5ced6f9d2540, ext:14087208084, loc:(*time.Location)(0x731ea80)}}
I0907 20:16:53.798990       1 replica_set.go:559] "Too few replicas" replicaSet="kube-system/coredns-558bd4d5db" need=2 creating=2
I0907 20:16:53.803749       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/coredns"
I0907 20:16:53.804220       1 deployment_util.go:808] Deployment "coredns" timed out (false) [last progress check: 2022-09-07 20:16:53.797558167 +0000 UTC m=+14.085938411 - now: 2022-09-07 20:16:53.804211228 +0000 UTC m=+14.092591572]
I0907 20:16:53.808261       1 graph_builder.go:279] garbage controller monitor not yet synced: flowcontrol.apiserver.k8s.io/v1beta1, Resource=flowschemas
I0907 20:16:53.817663       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="472.272226ms"
I0907 20:16:53.817699       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0907 20:16:53.817755       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-07 20:16:53.817731045 +0000 UTC m=+14.106111389"
I0907 20:16:53.818856       1 deployment_util.go:808] Deployment "coredns" timed out (false) [last progress check: 2022-09-07 20:16:53 +0000 UTC - now: 2022-09-07 20:16:53.818849822 +0000 UTC m=+14.107230066]
I0907 20:16:53.826371       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/coredns"
I0907 20:16:53.826571       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="8.824816ms"
I0907 20:16:53.826613       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-07 20:16:53.82659426 +0000 UTC m=+14.114974604"
I0907 20:16:53.827202       1 deployment_util.go:808] Deployment "coredns" timed out (false) [last progress check: 2022-09-07 20:16:53 +0000 UTC - now: 2022-09-07 20:16:53.827195747 +0000 UTC m=+14.115575991]
... skipping 169 lines ...
I0907 20:16:55.753813       1 replica_set.go:439] Pod calico-kube-controllers-969cf87c4-9lk6l updated, objectMeta {Name:calico-kube-controllers-969cf87c4-9lk6l GenerateName:calico-kube-controllers-969cf87c4- Namespace:kube-system SelfLink: UID:028c7705-cbda-4ae1-9764-7cea8134f188 ResourceVersion:513 Generation:0 CreationTimestamp:2022-09-07 20:16:55 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:969cf87c4] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-969cf87c4 UID:fe770983-18f3-4cfb-87c9-9b4dbafbb574 Controller:0xc001268af7 BlockOwnerDeletion:0xc001268af8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 20:16:55 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe770983-18f3-4cfb-87c9-9b4dbafbb574\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{".":{},"f:kubernetes.io/os":{}},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}}]} -> {Name:calico-kube-controllers-969cf87c4-9lk6l GenerateName:calico-kube-controllers-969cf87c4- Namespace:kube-system SelfLink: UID:028c7705-cbda-4ae1-9764-7cea8134f188 ResourceVersion:517 Generation:0 CreationTimestamp:2022-09-07 20:16:55 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:969cf87c4] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-969cf87c4 UID:fe770983-18f3-4cfb-87c9-9b4dbafbb574 Controller:0xc001269b57 BlockOwnerDeletion:0xc001269b58}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 20:16:55 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe770983-18f3-4cfb-87c9-9b4dbafbb574\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{".":{},"f:kubernetes.io/os":{}},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-07 20:16:55 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}}]}.
I0907 20:16:55.754082       1 disruption.go:427] updatePod called on pod "calico-kube-controllers-969cf87c4-9lk6l"
I0907 20:16:55.754388       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-kube-controllers-969cf87c4-9lk6l, PodDisruptionBudget controller will avoid syncing.
I0907 20:16:55.754517       1 disruption.go:430] No matching pdb for pod "calico-kube-controllers-969cf87c4-9lk6l"
I0907 20:16:55.754776       1 pvc_protection_controller.go:402] "Enqueuing PVCs for Pod" pod="kube-system/calico-kube-controllers-969cf87c4-9lk6l" podUID=028c7705-cbda-4ae1-9764-7cea8134f188
I0907 20:16:55.755175       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="29.629229ms"
I0907 20:16:55.755965       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0907 20:16:55.756220       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-07 20:16:55.756194368 +0000 UTC m=+16.044574612"
I0907 20:16:55.756911       1 deployment_util.go:808] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-07 20:16:55 +0000 UTC - now: 2022-09-07 20:16:55.756904257 +0000 UTC m=+16.045284601]
I0907 20:16:55.757420       1 replica_set.go:649] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-969cf87c4" (27.498363ms)
I0907 20:16:55.757624       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-kube-controllers-969cf87c4", timestamp:time.Time{wall:0xc0be5cedeb8af439, ext:16018907121, loc:(*time.Location)(0x731ea80)}}
I0907 20:16:55.757874       1 replica_set_utils.go:59] Updating status for : kube-system/calico-kube-controllers-969cf87c4, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0907 20:16:55.762564       1 disruption.go:384] add DB "calico-kube-controllers"
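Aside (not part of the log): the "Error syncing deployment ... the object has been modified" entry above is the standard optimistic-concurrency conflict; the controller simply re-syncs against the latest resourceVersion. A minimal client-go sketch of the same retry-on-conflict pattern is below; the kubeconfig path and the annotation being set are assumptions for illustration only.

// Hedged sketch: retry an update on resourceVersion conflicts.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	// Assumption: a kubeconfig for the workload cluster at ./kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Re-fetch and re-apply the mutation on every attempt so the update
	// always targets the latest resourceVersion.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		d, err := cs.AppsV1().Deployments("kube-system").Get(context.TODO(), "calico-kube-controllers", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if d.Annotations == nil {
			d.Annotations = map[string]string{}
		}
		d.Annotations["example.io/touched"] = "true" // hypothetical annotation
		_, err = cs.AppsV1().Deployments("kube-system").Update(context.TODO(), d, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
}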
... skipping 381 lines ...
I0907 20:17:27.872551       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be5cf5f4006708, ext:48160821852, loc:(*time.Location)(0x731ea80)}}
I0907 20:17:27.872612       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be5cf5f402f61d, ext:48160989653, loc:(*time.Location)(0x731ea80)}}
I0907 20:17:27.872622       1 daemon_controller.go:968] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0907 20:17:27.872677       1 daemon_controller.go:1030] Pods to delete for daemon set calico-node: [], deleting 0
I0907 20:17:27.872695       1 daemon_controller.go:1103] Updating daemon set status
I0907 20:17:27.872745       1 daemon_controller.go:1163] Finished syncing daemon set "kube-system/calico-node" (4.76852ms)
I0907 20:17:28.318963       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-kddm5f-control-plane-pds8m transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-07 20:17:06 +0000 UTC,LastTransitionTime:2022-09-07 20:16:30 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-07 20:17:26 +0000 UTC,LastTransitionTime:2022-09-07 20:17:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0907 20:17:28.319082       1 node_lifecycle_controller.go:1047] Node capz-kddm5f-control-plane-pds8m ReadyCondition updated. Updating timestamp.
I0907 20:17:28.319149       1 node_lifecycle_controller.go:893] Node capz-kddm5f-control-plane-pds8m is healthy again, removing all taints
I0907 20:17:28.319229       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
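Aside (not part of the log): once the kubelet reports Ready, the node lifecycle controller flips the ReadyCondition and removes the node.kubernetes.io/not-ready taint, as logged above. A small hedged sketch for reading that state back with client-go follows; the node name is taken from the log, the kubeconfig path is an assumption.

// Hedged sketch: inspect a node's Ready condition and remaining taints.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "capz-kddm5f-control-plane-pds8m", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s reason=%s\n", c.Status, c.Reason)
		}
	}
	// After the transition above, node.kubernetes.io/not-ready should be gone.
	fmt.Println("taints:", node.Spec.Taints)
}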
I0907 20:17:31.170098       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="161.302µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:41808" resp=200
I0907 20:17:33.186670       1 disruption.go:427] updatePod called on pod "calico-node-b9g7f"
I0907 20:17:33.186934       1 daemon_controller.go:571] Pod calico-node-b9g7f updated.
... skipping 253 lines ...
I0907 20:18:29.000035       1 certificate_controller.go:173] Finished syncing certificate request "csr-g2rht" (1µs)
I0907 20:18:28.999742       1 certificate_controller.go:87] Updating certificate request csr-g2rht
I0907 20:18:29.000096       1 certificate_controller.go:173] Finished syncing certificate request "csr-g2rht" (700ns)
I0907 20:18:28.999817       1 certificate_controller.go:173] Finished syncing certificate request "csr-g2rht" (1.3µs)
I0907 20:18:31.170751       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="95.601µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:53344" resp=200
I0907 20:18:32.957444       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-kddm5f-mp-0000000"
W0907 20:18:32.957489       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-kddm5f-mp-0000000" does not exist
I0907 20:18:32.957514       1 controller.go:693] Ignoring node capz-kddm5f-mp-0000000 with Ready condition status False
I0907 20:18:32.957528       1 controller.go:272] Triggering nodeSync
I0907 20:18:32.957538       1 controller.go:291] nodeSync has been triggered
I0907 20:18:32.958170       1 controller.go:776] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0907 20:18:32.958198       1 controller.go:790] Finished updateLoadBalancerHosts
I0907 20:18:32.958214       1 controller.go:731] It took 5.66e-05 seconds to finish nodeSyncInternal
... skipping 125 lines ...
I0907 20:18:33.856735       1 daemon_controller.go:1030] Pods to delete for daemon set kube-proxy: [], deleting 0
I0907 20:18:33.856830       1 daemon_controller.go:1103] Updating daemon set status
I0907 20:18:33.856897       1 daemon_controller.go:1163] Finished syncing daemon set "kube-system/kube-proxy" (1.439608ms)
I0907 20:18:34.262097       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-kddm5f-mp-0000001}
I0907 20:18:34.264803       1 taint_manager.go:440] "Updating known taints on node" node="capz-kddm5f-mp-0000001" taints=[]
I0907 20:18:34.262674       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-kddm5f-mp-0000001"
W0907 20:18:34.265426       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-kddm5f-mp-0000001" does not exist
I0907 20:18:34.262719       1 controller.go:693] Ignoring node capz-kddm5f-mp-0000000 with Ready condition status False
I0907 20:18:34.264261       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be5d06484b5a74, ext:113427536328, loc:(*time.Location)(0x731ea80)}}
I0907 20:18:34.264767       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0be5d06730ebbec, ext:114144983872, loc:(*time.Location)(0x731ea80)}}
I0907 20:18:34.266033       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0be5d068fdb3c43, ext:114554406295, loc:(*time.Location)(0x731ea80)}}
I0907 20:18:34.266189       1 daemon_controller.go:968] Nodes needing daemon pods for daemon set kube-proxy: [capz-kddm5f-mp-0000001], creating 1
I0907 20:18:34.266820       1 controller.go:693] Ignoring node capz-kddm5f-mp-0000001 with Ready condition status False
... skipping 335 lines ...
I0907 20:18:53.146451       1 daemon_controller.go:1103] Updating daemon set status
I0907 20:18:53.146650       1 daemon_controller.go:1163] Finished syncing daemon set "kube-system/calico-node" (3.415723ms)
I0907 20:18:53.157794       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-kddm5f-mp-0000000"
I0907 20:18:53.159514       1 controller_utils.go:221] Made sure that Node capz-kddm5f-mp-0000000 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}] Taint
I0907 20:18:53.167355       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:18:53.183652       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:18:53.333573       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-kddm5f-mp-0000000 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-07 20:18:43 +0000 UTC,LastTransitionTime:2022-09-07 20:18:32 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-07 20:18:53 +0000 UTC,LastTransitionTime:2022-09-07 20:18:53 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0907 20:18:53.333686       1 node_lifecycle_controller.go:1047] Node capz-kddm5f-mp-0000000 ReadyCondition updated. Updating timestamp.
I0907 20:18:53.343950       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-kddm5f-mp-0000000"
I0907 20:18:53.344112       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-kddm5f-mp-0000000}
I0907 20:18:53.344133       1 taint_manager.go:440] "Updating known taints on node" node="capz-kddm5f-mp-0000000" taints=[]
I0907 20:18:53.344157       1 taint_manager.go:461] "All taints were removed from the node. Cancelling all evictions..." node="capz-kddm5f-mp-0000000"
I0907 20:18:53.344579       1 node_lifecycle_controller.go:893] Node capz-kddm5f-mp-0000000 is healthy again, removing all taints
... skipping 11 lines ...
I0907 20:18:54.485690       1 controller.go:790] Finished updateLoadBalancerHosts
I0907 20:18:54.484767       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-kddm5f-mp-0000001"
I0907 20:18:54.485988       1 controller.go:748] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0907 20:18:54.486150       1 controller.go:731] It took 0.000723805 seconds to finish nodeSyncInternal
I0907 20:18:54.496573       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-kddm5f-mp-0000001"
I0907 20:18:54.496852       1 controller_utils.go:221] Made sure that Node capz-kddm5f-mp-0000001 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}] Taint
I0907 20:18:58.346368       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-kddm5f-mp-0000001 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-07 20:18:44 +0000 UTC,LastTransitionTime:2022-09-07 20:18:34 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-07 20:18:54 +0000 UTC,LastTransitionTime:2022-09-07 20:18:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0907 20:18:58.346479       1 node_lifecycle_controller.go:1047] Node capz-kddm5f-mp-0000001 ReadyCondition updated. Updating timestamp.
I0907 20:18:58.370370       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-kddm5f-mp-0000001"
I0907 20:18:58.370502       1 node_lifecycle_controller.go:893] Node capz-kddm5f-mp-0000001 is healthy again, removing all taints
I0907 20:18:58.370948       1 node_lifecycle_controller.go:1214] Controller detected that zone eastus::1 is now in state Normal.
I0907 20:18:58.370734       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-kddm5f-mp-0000001}
I0907 20:18:58.371288       1 taint_manager.go:440] "Updating known taints on node" node="capz-kddm5f-mp-0000001" taints=[]
... skipping 446 lines ...
I0907 20:22:30.470843       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-104af85c-daca-456c-bc65-0518882990ac]: claim azuredisk-8081/pvc-h6ws8 not found
I0907 20:22:30.470986       1 pv_controller.go:1108] reclaimVolume[pvc-104af85c-daca-456c-bc65-0518882990ac]: policy is Delete
I0907 20:22:30.471008       1 pv_controller.go:1753] scheduleOperation[delete-pvc-104af85c-daca-456c-bc65-0518882990ac[c3cadc06-ec52-4d3a-bd3f-2c46f2a4cc1c]]
I0907 20:22:30.471055       1 pv_controller.go:1764] operation "delete-pvc-104af85c-daca-456c-bc65-0518882990ac[c3cadc06-ec52-4d3a-bd3f-2c46f2a4cc1c]" is already running, skipping
I0907 20:22:30.472060       1 pv_controller.go:1341] isVolumeReleased[pvc-104af85c-daca-456c-bc65-0518882990ac]: volume is released
I0907 20:22:30.472080       1 pv_controller.go:1405] doDeleteVolume [pvc-104af85c-daca-456c-bc65-0518882990ac]
I0907 20:22:30.505874       1 pv_controller.go:1260] deletion of volume "pvc-104af85c-daca-456c-bc65-0518882990ac" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-104af85c-daca-456c-bc65-0518882990ac) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_0), could not be deleted
I0907 20:22:30.505919       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-104af85c-daca-456c-bc65-0518882990ac]: set phase Failed
I0907 20:22:30.505930       1 pv_controller.go:858] updating PersistentVolume[pvc-104af85c-daca-456c-bc65-0518882990ac]: set phase Failed
I0907 20:22:30.510615       1 pv_protection_controller.go:205] Got event on PV pvc-104af85c-daca-456c-bc65-0518882990ac
I0907 20:22:30.510651       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-104af85c-daca-456c-bc65-0518882990ac" with version 1299
I0907 20:22:30.511135       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-104af85c-daca-456c-bc65-0518882990ac]: phase: Failed, bound to: "azuredisk-8081/pvc-h6ws8 (uid: 104af85c-daca-456c-bc65-0518882990ac)", boundByController: true
I0907 20:22:30.511551       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-104af85c-daca-456c-bc65-0518882990ac]: volume is bound to claim azuredisk-8081/pvc-h6ws8
I0907 20:22:30.511574       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-104af85c-daca-456c-bc65-0518882990ac]: claim azuredisk-8081/pvc-h6ws8 not found
I0907 20:22:30.511584       1 pv_controller.go:1108] reclaimVolume[pvc-104af85c-daca-456c-bc65-0518882990ac]: policy is Delete
I0907 20:22:30.511604       1 pv_controller.go:1753] scheduleOperation[delete-pvc-104af85c-daca-456c-bc65-0518882990ac[c3cadc06-ec52-4d3a-bd3f-2c46f2a4cc1c]]
I0907 20:22:30.511613       1 pv_controller.go:1764] operation "delete-pvc-104af85c-daca-456c-bc65-0518882990ac[c3cadc06-ec52-4d3a-bd3f-2c46f2a4cc1c]" is already running, skipping
I0907 20:22:30.511495       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-104af85c-daca-456c-bc65-0518882990ac" with version 1299
I0907 20:22:30.511718       1 pv_controller.go:879] volume "pvc-104af85c-daca-456c-bc65-0518882990ac" entered phase "Failed"
I0907 20:22:30.511781       1 pv_controller.go:901] volume "pvc-104af85c-daca-456c-bc65-0518882990ac" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-104af85c-daca-456c-bc65-0518882990ac) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_0), could not be deleted
E0907 20:22:30.511926       1 goroutinemap.go:150] Operation for "delete-pvc-104af85c-daca-456c-bc65-0518882990ac[c3cadc06-ec52-4d3a-bd3f-2c46f2a4cc1c]" failed. No retries permitted until 2022-09-07 20:22:31.011890212 +0000 UTC m=+351.300270456 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-104af85c-daca-456c-bc65-0518882990ac) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_0), could not be deleted"
I0907 20:22:30.512027       1 event.go:291] "Event occurred" object="pvc-104af85c-daca-456c-bc65-0518882990ac" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-104af85c-daca-456c-bc65-0518882990ac) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_0), could not be deleted"
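Aside (not part of the log): the delete operation above fails while the disk is still attached and is retried with a doubling durationBeforeRetry (500ms, 1s, 2s, ...). A minimal sketch of that backoff shape using the generic wait.ExponentialBackoff helper is below; deleteDisk is a hypothetical stand-in, not the controller's actual Azure call.

// Hedged sketch: retry a failing delete with exponentially increasing delay.
package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// deleteDisk is a hypothetical stand-in that fails until the disk is detached.
func deleteDisk(attempt int) error {
	if attempt < 3 {
		return errors.New("disk already attached to node, could not be deleted")
	}
	return nil
}

func main() {
	attempt := 0
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond, // first durationBeforeRetry seen in the log
		Factor:   2.0,                    // delay doubles after each failure
		Steps:    5,
	}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		if derr := deleteDisk(attempt); derr != nil {
			fmt.Printf("attempt %d failed: %v\n", attempt, derr)
			return false, nil // not done; retry after the next backoff interval
		}
		return true, nil
	})
	fmt.Println("done, err =", err)
}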
I0907 20:22:31.170674       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="93.1µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:53528" resp=200
I0907 20:22:32.298905       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Pod total 90 items received
I0907 20:22:33.356723       1 gc_controller.go:161] GC'ing orphaned
I0907 20:22:33.356767       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:22:33.389338       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-kddm5f-mp-0000000"
... skipping 8 lines ...
I0907 20:22:33.526453       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-104af85c-daca-456c-bc65-0518882990ac from node "capz-kddm5f-mp-0000000"
I0907 20:22:33.526743       1 azure_controller_vmss.go:145] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-104af85c-daca-456c-bc65-0518882990ac"
I0907 20:22:33.526817       1 azure_controller_vmss.go:175] azureDisk - update(capz-kddm5f): vm(capz-kddm5f-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-104af85c-daca-456c-bc65-0518882990ac)
I0907 20:22:38.194669       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:22:38.422357       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:22:38.422453       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-104af85c-daca-456c-bc65-0518882990ac" with version 1299
I0907 20:22:38.422506       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-104af85c-daca-456c-bc65-0518882990ac]: phase: Failed, bound to: "azuredisk-8081/pvc-h6ws8 (uid: 104af85c-daca-456c-bc65-0518882990ac)", boundByController: true
I0907 20:22:38.422558       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-104af85c-daca-456c-bc65-0518882990ac]: volume is bound to claim azuredisk-8081/pvc-h6ws8
I0907 20:22:38.422584       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-104af85c-daca-456c-bc65-0518882990ac]: claim azuredisk-8081/pvc-h6ws8 not found
I0907 20:22:38.422596       1 pv_controller.go:1108] reclaimVolume[pvc-104af85c-daca-456c-bc65-0518882990ac]: policy is Delete
I0907 20:22:38.422624       1 pv_controller.go:1753] scheduleOperation[delete-pvc-104af85c-daca-456c-bc65-0518882990ac[c3cadc06-ec52-4d3a-bd3f-2c46f2a4cc1c]]
I0907 20:22:38.422661       1 pv_controller.go:1232] deleteVolumeOperation [pvc-104af85c-daca-456c-bc65-0518882990ac] started
I0907 20:22:38.430670       1 pv_controller.go:1341] isVolumeReleased[pvc-104af85c-daca-456c-bc65-0518882990ac]: volume is released
I0907 20:22:38.430698       1 pv_controller.go:1405] doDeleteVolume [pvc-104af85c-daca-456c-bc65-0518882990ac]
I0907 20:22:38.430756       1 pv_controller.go:1260] deletion of volume "pvc-104af85c-daca-456c-bc65-0518882990ac" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-104af85c-daca-456c-bc65-0518882990ac) since it's in attaching or detaching state
I0907 20:22:38.430772       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-104af85c-daca-456c-bc65-0518882990ac]: set phase Failed
I0907 20:22:38.430786       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-104af85c-daca-456c-bc65-0518882990ac]: phase Failed already set
E0907 20:22:38.430833       1 goroutinemap.go:150] Operation for "delete-pvc-104af85c-daca-456c-bc65-0518882990ac[c3cadc06-ec52-4d3a-bd3f-2c46f2a4cc1c]" failed. No retries permitted until 2022-09-07 20:22:39.430796909 +0000 UTC m=+359.719177253 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-104af85c-daca-456c-bc65-0518882990ac) since it's in attaching or detaching state"
I0907 20:22:39.185056       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Event total 133 items received
I0907 20:22:41.169905       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="99.701µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:35210" resp=200
I0907 20:22:43.548424       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 17 items received
I0907 20:22:48.801996       1 azure_controller_vmss.go:187] azureDisk - update(capz-kddm5f): vm(capz-kddm5f-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-104af85c-daca-456c-bc65-0518882990ac) returned with <nil>
I0907 20:22:48.802068       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-104af85c-daca-456c-bc65-0518882990ac) succeeded
I0907 20:22:48.802082       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-104af85c-daca-456c-bc65-0518882990ac was detached from node:capz-kddm5f-mp-0000000
... skipping 3 lines ...
I0907 20:22:53.172479       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:22:53.195746       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:22:53.357838       1 gc_controller.go:161] GC'ing orphaned
I0907 20:22:53.358047       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:22:53.422899       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:22:53.423024       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-104af85c-daca-456c-bc65-0518882990ac" with version 1299
I0907 20:22:53.423124       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-104af85c-daca-456c-bc65-0518882990ac]: phase: Failed, bound to: "azuredisk-8081/pvc-h6ws8 (uid: 104af85c-daca-456c-bc65-0518882990ac)", boundByController: true
I0907 20:22:53.423206       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-104af85c-daca-456c-bc65-0518882990ac]: volume is bound to claim azuredisk-8081/pvc-h6ws8
I0907 20:22:53.423266       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-104af85c-daca-456c-bc65-0518882990ac]: claim azuredisk-8081/pvc-h6ws8 not found
I0907 20:22:53.423296       1 pv_controller.go:1108] reclaimVolume[pvc-104af85c-daca-456c-bc65-0518882990ac]: policy is Delete
I0907 20:22:53.423327       1 pv_controller.go:1753] scheduleOperation[delete-pvc-104af85c-daca-456c-bc65-0518882990ac[c3cadc06-ec52-4d3a-bd3f-2c46f2a4cc1c]]
I0907 20:22:53.423538       1 pv_controller.go:1232] deleteVolumeOperation [pvc-104af85c-daca-456c-bc65-0518882990ac] started
I0907 20:22:53.437401       1 pv_controller.go:1341] isVolumeReleased[pvc-104af85c-daca-456c-bc65-0518882990ac]: volume is released
... skipping 7 lines ...
I0907 20:22:58.649628       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-104af85c-daca-456c-bc65-0518882990ac
I0907 20:22:58.649679       1 pv_controller.go:1436] volume "pvc-104af85c-daca-456c-bc65-0518882990ac" deleted
I0907 20:22:58.649696       1 pv_controller.go:1284] deleteVolumeOperation [pvc-104af85c-daca-456c-bc65-0518882990ac]: success
I0907 20:22:58.657603       1 pv_protection_controller.go:205] Got event on PV pvc-104af85c-daca-456c-bc65-0518882990ac
I0907 20:22:58.657688       1 pv_protection_controller.go:125] Processing PV pvc-104af85c-daca-456c-bc65-0518882990ac
I0907 20:22:58.658261       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-104af85c-daca-456c-bc65-0518882990ac" with version 1343
I0907 20:22:58.658402       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-104af85c-daca-456c-bc65-0518882990ac]: phase: Failed, bound to: "azuredisk-8081/pvc-h6ws8 (uid: 104af85c-daca-456c-bc65-0518882990ac)", boundByController: true
I0907 20:22:58.658524       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-104af85c-daca-456c-bc65-0518882990ac]: volume is bound to claim azuredisk-8081/pvc-h6ws8
I0907 20:22:58.658545       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-104af85c-daca-456c-bc65-0518882990ac]: claim azuredisk-8081/pvc-h6ws8 not found
I0907 20:22:58.658554       1 pv_controller.go:1108] reclaimVolume[pvc-104af85c-daca-456c-bc65-0518882990ac]: policy is Delete
I0907 20:22:58.658572       1 pv_controller.go:1753] scheduleOperation[delete-pvc-104af85c-daca-456c-bc65-0518882990ac[c3cadc06-ec52-4d3a-bd3f-2c46f2a4cc1c]]
I0907 20:22:58.658580       1 pv_controller.go:1764] operation "delete-pvc-104af85c-daca-456c-bc65-0518882990ac[c3cadc06-ec52-4d3a-bd3f-2c46f2a4cc1c]" is already running, skipping
I0907 20:22:58.670571       1 pv_controller_base.go:235] volume "pvc-104af85c-daca-456c-bc65-0518882990ac" deleted
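Aside (not part of the log): with reclaim policy Delete, the PV above is only removed after the disk detaches and the managed disk deletion succeeds. A hedged sketch for confirming the PV object is eventually gone is below; the PV name comes from the log, while the kubeconfig path and poll interval/timeout are assumptions.

// Hedged sketch: poll until a released PV has been fully deleted.
package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pvName := "pvc-104af85c-daca-456c-bc65-0518882990ac"
	err = wait.PollImmediate(5*time.Second, 5*time.Minute, func() (bool, error) {
		_, getErr := cs.CoreV1().PersistentVolumes().Get(context.TODO(), pvName, metav1.GetOptions{})
		if apierrors.IsNotFound(getErr) {
			return true, nil // PV object is gone; Delete reclaim completed
		}
		return false, getErr
	})
	fmt.Println("PV removed:", err == nil)
}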
... skipping 307 lines ...
I0907 20:23:34.414095       1 pv_controller.go:1108] reclaimVolume[pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]: policy is Delete
I0907 20:23:34.414109       1 pv_controller.go:1753] scheduleOperation[delete-pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8[8d4c5393-c087-4403-ac9b-529e052e1cdf]]
I0907 20:23:34.414116       1 pv_controller.go:1764] operation "delete-pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8[8d4c5393-c087-4403-ac9b-529e052e1cdf]" is already running, skipping
I0907 20:23:34.414003       1 pv_controller.go:1232] deleteVolumeOperation [pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8] started
I0907 20:23:34.415819       1 pv_controller.go:1341] isVolumeReleased[pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]: volume is released
I0907 20:23:34.415837       1 pv_controller.go:1405] doDeleteVolume [pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]
I0907 20:23:34.442322       1 pv_controller.go:1260] deletion of volume "pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_1), could not be deleted
I0907 20:23:34.442351       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]: set phase Failed
I0907 20:23:34.442360       1 pv_controller.go:858] updating PersistentVolume[pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]: set phase Failed
I0907 20:23:34.446434       1 pv_protection_controller.go:205] Got event on PV pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8
I0907 20:23:34.446617       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8" with version 1463
I0907 20:23:34.446704       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]: phase: Failed, bound to: "azuredisk-5466/pvc-rcrkd (uid: 1ffd5c8a-4ac1-47f0-a098-54821585c8d8)", boundByController: true
I0907 20:23:34.446734       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]: volume is bound to claim azuredisk-5466/pvc-rcrkd
I0907 20:23:34.446769       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]: claim azuredisk-5466/pvc-rcrkd not found
I0907 20:23:34.446781       1 pv_controller.go:1108] reclaimVolume[pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]: policy is Delete
I0907 20:23:34.446797       1 pv_controller.go:1753] scheduleOperation[delete-pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8[8d4c5393-c087-4403-ac9b-529e052e1cdf]]
I0907 20:23:34.446806       1 pv_controller.go:1764] operation "delete-pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8[8d4c5393-c087-4403-ac9b-529e052e1cdf]" is already running, skipping
I0907 20:23:34.446904       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8" with version 1463
I0907 20:23:34.446929       1 pv_controller.go:879] volume "pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8" entered phase "Failed"
I0907 20:23:34.446938       1 pv_controller.go:901] volume "pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_1), could not be deleted
E0907 20:23:34.446994       1 goroutinemap.go:150] Operation for "delete-pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8[8d4c5393-c087-4403-ac9b-529e052e1cdf]" failed. No retries permitted until 2022-09-07 20:23:34.946961953 +0000 UTC m=+415.235342197 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_1), could not be deleted"
I0907 20:23:34.447247       1 event.go:291] "Event occurred" object="pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_1), could not be deleted"
I0907 20:23:38.197868       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:23:38.425145       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:23:38.425231       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8" with version 1463
I0907 20:23:38.425465       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]: phase: Failed, bound to: "azuredisk-5466/pvc-rcrkd (uid: 1ffd5c8a-4ac1-47f0-a098-54821585c8d8)", boundByController: true
I0907 20:23:38.425532       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]: volume is bound to claim azuredisk-5466/pvc-rcrkd
I0907 20:23:38.425562       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]: claim azuredisk-5466/pvc-rcrkd not found
I0907 20:23:38.425577       1 pv_controller.go:1108] reclaimVolume[pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]: policy is Delete
I0907 20:23:38.425600       1 pv_controller.go:1753] scheduleOperation[delete-pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8[8d4c5393-c087-4403-ac9b-529e052e1cdf]]
I0907 20:23:38.425653       1 pv_controller.go:1232] deleteVolumeOperation [pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8] started
I0907 20:23:38.433747       1 pv_controller.go:1341] isVolumeReleased[pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]: volume is released
I0907 20:23:38.433770       1 pv_controller.go:1405] doDeleteVolume [pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]
I0907 20:23:38.464619       1 pv_controller.go:1260] deletion of volume "pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_1), could not be deleted
I0907 20:23:38.464651       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]: set phase Failed
I0907 20:23:38.464662       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]: phase Failed already set
E0907 20:23:38.464729       1 goroutinemap.go:150] Operation for "delete-pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8[8d4c5393-c087-4403-ac9b-529e052e1cdf]" failed. No retries permitted until 2022-09-07 20:23:39.464672141 +0000 UTC m=+419.753052485 (durationBeforeRetry 1s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_1), could not be deleted"
I0907 20:23:39.498997       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.PriorityLevelConfiguration total 0 items received
I0907 20:23:41.169831       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="98.601µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:53498" resp=200
I0907 20:23:42.356535       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.EndpointSlice total 14 items received
I0907 20:23:43.183125       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Role total 0 items received
I0907 20:23:44.737395       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-kddm5f-mp-0000001"
I0907 20:23:44.738242       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8 to the node "capz-kddm5f-mp-0000001" mounted false
... skipping 12 lines ...
I0907 20:23:53.174554       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:23:53.198176       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:23:53.359884       1 gc_controller.go:161] GC'ing orphaned
I0907 20:23:53.359923       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:23:53.425250       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:23:53.425474       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8" with version 1463
I0907 20:23:53.425570       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]: phase: Failed, bound to: "azuredisk-5466/pvc-rcrkd (uid: 1ffd5c8a-4ac1-47f0-a098-54821585c8d8)", boundByController: true
I0907 20:23:53.425671       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]: volume is bound to claim azuredisk-5466/pvc-rcrkd
I0907 20:23:53.425744       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]: claim azuredisk-5466/pvc-rcrkd not found
I0907 20:23:53.425759       1 pv_controller.go:1108] reclaimVolume[pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]: policy is Delete
I0907 20:23:53.425826       1 pv_controller.go:1753] scheduleOperation[delete-pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8[8d4c5393-c087-4403-ac9b-529e052e1cdf]]
I0907 20:23:53.425905       1 pv_controller.go:1232] deleteVolumeOperation [pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8] started
I0907 20:23:53.439814       1 pv_controller.go:1341] isVolumeReleased[pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]: volume is released
I0907 20:23:53.439840       1 pv_controller.go:1405] doDeleteVolume [pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]
I0907 20:23:53.439909       1 pv_controller.go:1260] deletion of volume "pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8) since it's in attaching or detaching state
I0907 20:23:53.439991       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]: set phase Failed
I0907 20:23:53.440031       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]: phase Failed already set
E0907 20:23:53.440150       1 goroutinemap.go:150] Operation for "delete-pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8[8d4c5393-c087-4403-ac9b-529e052e1cdf]" failed. No retries permitted until 2022-09-07 20:23:55.440116523 +0000 UTC m=+435.728497068 (durationBeforeRetry 2s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8) since it's in attaching or detaching state"
I0907 20:23:53.754398       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0907 20:23:54.169929       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodTemplate total 0 items received
I0907 20:23:54.175028       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Lease total 529 items received
I0907 20:23:54.213707       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 8 items received
I0907 20:23:59.692498       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ServiceAccount total 101 items received
I0907 20:24:00.135674       1 azure_controller_vmss.go:187] azureDisk - update(capz-kddm5f): vm(capz-kddm5f-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8) returned with <nil>
... skipping 2 lines ...
I0907 20:24:00.136312       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8") on node "capz-kddm5f-mp-0000001" 
I0907 20:24:01.169409       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="96.601µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:54542" resp=200
I0907 20:24:03.202895       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ResourceQuota total 0 items received
I0907 20:24:08.198333       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:24:08.426040       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:24:08.426138       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8" with version 1463
I0907 20:24:08.426188       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]: phase: Failed, bound to: "azuredisk-5466/pvc-rcrkd (uid: 1ffd5c8a-4ac1-47f0-a098-54821585c8d8)", boundByController: true
I0907 20:24:08.426243       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]: volume is bound to claim azuredisk-5466/pvc-rcrkd
I0907 20:24:08.426269       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]: claim azuredisk-5466/pvc-rcrkd not found
I0907 20:24:08.426281       1 pv_controller.go:1108] reclaimVolume[pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]: policy is Delete
I0907 20:24:08.426307       1 pv_controller.go:1753] scheduleOperation[delete-pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8[8d4c5393-c087-4403-ac9b-529e052e1cdf]]
I0907 20:24:08.426362       1 pv_controller.go:1232] deleteVolumeOperation [pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8] started
I0907 20:24:08.434451       1 pv_controller.go:1341] isVolumeReleased[pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]: volume is released
... skipping 4 lines ...
I0907 20:24:13.674354       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8
I0907 20:24:13.674400       1 pv_controller.go:1436] volume "pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8" deleted
I0907 20:24:13.674415       1 pv_controller.go:1284] deleteVolumeOperation [pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]: success
I0907 20:24:13.687653       1 pv_protection_controller.go:205] Got event on PV pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8
I0907 20:24:13.687692       1 pv_protection_controller.go:125] Processing PV pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8
I0907 20:24:13.688082       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8" with version 1523
I0907 20:24:13.688124       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]: phase: Failed, bound to: "azuredisk-5466/pvc-rcrkd (uid: 1ffd5c8a-4ac1-47f0-a098-54821585c8d8)", boundByController: true
I0907 20:24:13.688155       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]: volume is bound to claim azuredisk-5466/pvc-rcrkd
I0907 20:24:13.688172       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]: claim azuredisk-5466/pvc-rcrkd not found
I0907 20:24:13.688181       1 pv_controller.go:1108] reclaimVolume[pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8]: policy is Delete
I0907 20:24:13.688200       1 pv_controller.go:1753] scheduleOperation[delete-pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8[8d4c5393-c087-4403-ac9b-529e052e1cdf]]
I0907 20:24:13.688232       1 pv_controller.go:1232] deleteVolumeOperation [pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8] started
I0907 20:24:13.691275       1 pv_controller.go:1244] Volume "pvc-1ffd5c8a-4ac1-47f0-a098-54821585c8d8" is already being deleted
... skipping 244 lines ...
I0907 20:24:32.654103       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]: claim azuredisk-2790/pvc-h5d7l not found
I0907 20:24:32.654115       1 pv_controller.go:1108] reclaimVolume[pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]: policy is Delete
I0907 20:24:32.654131       1 pv_controller.go:1753] scheduleOperation[delete-pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e[20150d88-bb82-4e48-9aca-9771aeb44d2b]]
I0907 20:24:32.654139       1 pv_controller.go:1764] operation "delete-pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e[20150d88-bb82-4e48-9aca-9771aeb44d2b]" is already running, skipping
I0907 20:24:32.655791       1 pv_controller.go:1341] isVolumeReleased[pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]: volume is released
I0907 20:24:32.655808       1 pv_controller.go:1405] doDeleteVolume [pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]
I0907 20:24:32.682627       1 pv_controller.go:1260] deletion of volume "pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_1), could not be deleted
I0907 20:24:32.682663       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]: set phase Failed
I0907 20:24:32.682674       1 pv_controller.go:858] updating PersistentVolume[pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]: set phase Failed
I0907 20:24:32.687157       1 pv_protection_controller.go:205] Got event on PV pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e
I0907 20:24:32.687870       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e" with version 1607
I0907 20:24:32.688088       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]: phase: Failed, bound to: "azuredisk-2790/pvc-h5d7l (uid: 5f72afb8-f905-43c5-8b90-fa7236cc2a1e)", boundByController: true
I0907 20:24:32.688344       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]: volume is bound to claim azuredisk-2790/pvc-h5d7l
I0907 20:24:32.688372       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]: claim azuredisk-2790/pvc-h5d7l not found
I0907 20:24:32.688489       1 pv_controller.go:1108] reclaimVolume[pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]: policy is Delete
I0907 20:24:32.688579       1 pv_controller.go:1753] scheduleOperation[delete-pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e[20150d88-bb82-4e48-9aca-9771aeb44d2b]]
I0907 20:24:32.688681       1 pv_controller.go:1764] operation "delete-pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e[20150d88-bb82-4e48-9aca-9771aeb44d2b]" is already running, skipping
I0907 20:24:32.688975       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e" with version 1607
I0907 20:24:32.689002       1 pv_controller.go:879] volume "pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e" entered phase "Failed"
I0907 20:24:32.689012       1 pv_controller.go:901] volume "pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_1), could not be deleted
E0907 20:24:32.689120       1 goroutinemap.go:150] Operation for "delete-pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e[20150d88-bb82-4e48-9aca-9771aeb44d2b]" failed. No retries permitted until 2022-09-07 20:24:33.189039485 +0000 UTC m=+473.477419829 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_1), could not be deleted"
I0907 20:24:32.689466       1 event.go:291] "Event occurred" object="pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_1), could not be deleted"
I0907 20:24:33.361416       1 gc_controller.go:161] GC'ing orphaned
I0907 20:24:33.361464       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:24:34.791946       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-kddm5f-mp-0000001"
I0907 20:24:34.791987       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e to the node "capz-kddm5f-mp-0000001" mounted false
I0907 20:24:34.901159       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-kddm5f-mp-0000001"
... skipping 7 lines ...
I0907 20:24:35.029292       1 azure_controller_vmss.go:175] azureDisk - update(capz-kddm5f): vm(capz-kddm5f-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e)
I0907 20:24:35.751680       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ValidatingWebhookConfiguration total 0 items received
I0907 20:24:37.179691       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CSINode total 9 items received
I0907 20:24:38.199578       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:24:38.427438       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:24:38.427519       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e" with version 1607
I0907 20:24:38.427569       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]: phase: Failed, bound to: "azuredisk-2790/pvc-h5d7l (uid: 5f72afb8-f905-43c5-8b90-fa7236cc2a1e)", boundByController: true
I0907 20:24:38.427612       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]: volume is bound to claim azuredisk-2790/pvc-h5d7l
I0907 20:24:38.427636       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]: claim azuredisk-2790/pvc-h5d7l not found
I0907 20:24:38.427648       1 pv_controller.go:1108] reclaimVolume[pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]: policy is Delete
I0907 20:24:38.427669       1 pv_controller.go:1753] scheduleOperation[delete-pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e[20150d88-bb82-4e48-9aca-9771aeb44d2b]]
I0907 20:24:38.427704       1 pv_controller.go:1232] deleteVolumeOperation [pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e] started
I0907 20:24:38.434476       1 pv_controller.go:1341] isVolumeReleased[pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]: volume is released
I0907 20:24:38.434502       1 pv_controller.go:1405] doDeleteVolume [pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]
I0907 20:24:38.434546       1 pv_controller.go:1260] deletion of volume "pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e) since it's in attaching or detaching state
I0907 20:24:38.434561       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]: set phase Failed
I0907 20:24:38.434571       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]: phase Failed already set
E0907 20:24:38.434611       1 goroutinemap.go:150] Operation for "delete-pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e[20150d88-bb82-4e48-9aca-9771aeb44d2b]" failed. No retries permitted until 2022-09-07 20:24:39.43458373 +0000 UTC m=+479.722964074 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e) since it's in attaching or detaching state"
I0907 20:24:38.447275       1 node_lifecycle_controller.go:1047] Node capz-kddm5f-mp-0000001 ReadyCondition updated. Updating timestamp.
I0907 20:24:41.169736       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="104.401µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:56438" resp=200
I0907 20:24:44.172180       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.CSIStorageCapacity total 0 items received
I0907 20:24:51.169248       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="91.3µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:48142" resp=200
I0907 20:24:53.176150       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:24:53.200728       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:24:53.361755       1 gc_controller.go:161] GC'ing orphaned
I0907 20:24:53.361807       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:24:53.428409       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:24:53.428695       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e" with version 1607
I0907 20:24:53.428833       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]: phase: Failed, bound to: "azuredisk-2790/pvc-h5d7l (uid: 5f72afb8-f905-43c5-8b90-fa7236cc2a1e)", boundByController: true
I0907 20:24:53.428943       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]: volume is bound to claim azuredisk-2790/pvc-h5d7l
I0907 20:24:53.429037       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]: claim azuredisk-2790/pvc-h5d7l not found
I0907 20:24:53.429057       1 pv_controller.go:1108] reclaimVolume[pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]: policy is Delete
I0907 20:24:53.429081       1 pv_controller.go:1753] scheduleOperation[delete-pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e[20150d88-bb82-4e48-9aca-9771aeb44d2b]]
I0907 20:24:53.429132       1 pv_controller.go:1232] deleteVolumeOperation [pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e] started
I0907 20:24:53.442942       1 pv_controller.go:1341] isVolumeReleased[pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]: volume is released
I0907 20:24:53.442964       1 pv_controller.go:1405] doDeleteVolume [pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]
I0907 20:24:53.443010       1 pv_controller.go:1260] deletion of volume "pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e) since it's in attaching or detaching state
I0907 20:24:53.443024       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]: set phase Failed
I0907 20:24:53.443036       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]: phase Failed already set
E0907 20:24:53.443074       1 goroutinemap.go:150] Operation for "delete-pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e[20150d88-bb82-4e48-9aca-9771aeb44d2b]" failed. No retries permitted until 2022-09-07 20:24:55.443046104 +0000 UTC m=+495.731426448 (durationBeforeRetry 2s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e) since it's in attaching or detaching state"
I0907 20:24:53.784983       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0907 20:24:53.799737       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.RuntimeClass total 0 items received
I0907 20:24:54.600829       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRoleBinding total 11 items received
I0907 20:24:55.311498       1 azure_controller_vmss.go:187] azureDisk - update(capz-kddm5f): vm(capz-kddm5f-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e) returned with <nil>
I0907 20:24:55.311564       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e) succeeded
I0907 20:24:55.311613       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e was detached from node:capz-kddm5f-mp-0000001
... skipping 3 lines ...
I0907 20:25:01.169227       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="83.2µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:43760" resp=200
I0907 20:25:02.168274       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Job total 0 items received
I0907 20:25:07.249627       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0907 20:25:08.200828       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:25:08.429591       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:25:08.429679       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e" with version 1607
I0907 20:25:08.429760       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]: phase: Failed, bound to: "azuredisk-2790/pvc-h5d7l (uid: 5f72afb8-f905-43c5-8b90-fa7236cc2a1e)", boundByController: true
I0907 20:25:08.429807       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]: volume is bound to claim azuredisk-2790/pvc-h5d7l
I0907 20:25:08.429830       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]: claim azuredisk-2790/pvc-h5d7l not found
I0907 20:25:08.429840       1 pv_controller.go:1108] reclaimVolume[pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]: policy is Delete
I0907 20:25:08.429863       1 pv_controller.go:1753] scheduleOperation[delete-pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e[20150d88-bb82-4e48-9aca-9771aeb44d2b]]
I0907 20:25:08.429898       1 pv_controller.go:1232] deleteVolumeOperation [pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e] started
I0907 20:25:08.436394       1 pv_controller.go:1341] isVolumeReleased[pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]: volume is released
... skipping 8 lines ...
I0907 20:25:13.362708       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:25:13.729122       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e
I0907 20:25:13.729168       1 pv_controller.go:1436] volume "pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e" deleted
I0907 20:25:13.729183       1 pv_controller.go:1284] deleteVolumeOperation [pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]: success
I0907 20:25:13.737307       1 pv_protection_controller.go:205] Got event on PV pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e
I0907 20:25:13.737432       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e" with version 1667
I0907 20:25:13.737615       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]: phase: Failed, bound to: "azuredisk-2790/pvc-h5d7l (uid: 5f72afb8-f905-43c5-8b90-fa7236cc2a1e)", boundByController: true
I0907 20:25:13.737805       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]: volume is bound to claim azuredisk-2790/pvc-h5d7l
I0907 20:25:13.737832       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]: claim azuredisk-2790/pvc-h5d7l not found
I0907 20:25:13.737863       1 pv_controller.go:1108] reclaimVolume[pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e]: policy is Delete
I0907 20:25:13.737914       1 pv_controller.go:1753] scheduleOperation[delete-pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e[20150d88-bb82-4e48-9aca-9771aeb44d2b]]
I0907 20:25:13.738057       1 pv_controller.go:1232] deleteVolumeOperation [pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e] started
I0907 20:25:13.738361       1 pv_protection_controller.go:125] Processing PV pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e
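The cycle above for pvc-5f72afb8-f905-43c5-8b90-fa7236cc2a1e shows the PV controller's delete path racing the attach/detach controller: each deleteVolumeOperation fails while the Azure managed disk is still attached or mid-detach, goroutinemap blocks the next retry behind an exponentially growing durationBeforeRetry (1s, then 2s here; 500ms appears as the starting value for later volumes), and once the disk is detached from capz-kddm5f-mp-0000001 the next resync deletes the managed disk and reports success. A minimal Go sketch of that retry shape, using a stand-in deleteDisk function rather than the controller's real code:

package main

import (
	"errors"
	"fmt"
	"time"
)

func main() {
	attempts := 0
	// deleteDisk stands in for the real Azure managed-disk deletion; it keeps
	// failing while the disk is still attached or mid-detach.
	deleteDisk := func() error {
		attempts++
		if attempts < 3 {
			return errors.New("disk is in attaching or detaching state")
		}
		return nil
	}

	backoff := 500 * time.Millisecond // initial durationBeforeRetry
	for {
		err := deleteDisk()
		if err == nil {
			fmt.Println("delete succeeded")
			return
		}
		fmt.Printf("delete failed: %v; no retries permitted for %v\n", err, backoff)
		time.Sleep(backoff)
		backoff *= 2 // exponential backoff: 500ms -> 1s -> 2s, matching the durations in the log
	}
}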
... skipping 124 lines ...
I0907 20:25:21.897114       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-kddm5f-dynamic-pvc-97cda27c-8713-4160-be1d-2f60669b4e89. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-97cda27c-8713-4160-be1d-2f60669b4e89" to node "capz-kddm5f-mp-0000001".
I0907 20:25:21.939023       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-97cda27c-8713-4160-be1d-2f60669b4e89" lun 0 to node "capz-kddm5f-mp-0000001".
I0907 20:25:21.939091       1 azure_controller_vmss.go:101] azureDisk - update(capz-kddm5f): vm(capz-kddm5f-mp-0000001) - attach disk(capz-kddm5f-dynamic-pvc-97cda27c-8713-4160-be1d-2f60669b4e89, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-97cda27c-8713-4160-be1d-2f60669b4e89) with DiskEncryptionSetID()
I0907 20:25:22.173322       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CSIDriver total 0 items received
I0907 20:25:23.092366       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-2790
I0907 20:25:23.134341       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-2790, name default-token-pzqhp, uid 18853997-8cea-4cd9-9c2a-b7ae3bc63699, event type delete
E0907 20:25:23.162878       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2790/default: secrets "default-token-7stbs" is forbidden: unable to create new content in namespace azuredisk-2790 because it is being terminated
I0907 20:25:23.177257       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:25:23.178967       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2790, name azuredisk-volume-tester-p8lt7.1712adc189e3001e, uid 0d9cbfeb-6a16-48cf-acde-bd8886955260, event type delete
I0907 20:25:23.200003       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2790, name azuredisk-volume-tester-p8lt7.1712adc2cf9a680d, uid 43910cac-f993-4502-8196-eeebd9fbf2ec, event type delete
I0907 20:25:23.201282       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:25:23.204410       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2790, name azuredisk-volume-tester-p8lt7.1712adc3c32db853, uid c03d4415-3095-4fe1-8b9e-213dbeb4968c, event type delete
I0907 20:25:23.208336       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2790, name azuredisk-volume-tester-p8lt7.1712adc3c32e38d7, uid 9530c8ef-b431-4591-854f-1218c122332b, event type delete
... skipping 159 lines ...
I0907 20:25:43.394908       1 pv_controller.go:1108] reclaimVolume[pvc-97cda27c-8713-4160-be1d-2f60669b4e89]: policy is Delete
I0907 20:25:43.394971       1 pv_controller.go:1753] scheduleOperation[delete-pvc-97cda27c-8713-4160-be1d-2f60669b4e89[4c742071-ba1b-4546-ba86-0f4efe95b912]]
I0907 20:25:43.395018       1 pv_controller.go:1764] operation "delete-pvc-97cda27c-8713-4160-be1d-2f60669b4e89[4c742071-ba1b-4546-ba86-0f4efe95b912]" is already running, skipping
I0907 20:25:43.395151       1 pv_controller.go:1232] deleteVolumeOperation [pvc-97cda27c-8713-4160-be1d-2f60669b4e89] started
I0907 20:25:43.397078       1 pv_controller.go:1341] isVolumeReleased[pvc-97cda27c-8713-4160-be1d-2f60669b4e89]: volume is released
I0907 20:25:43.397095       1 pv_controller.go:1405] doDeleteVolume [pvc-97cda27c-8713-4160-be1d-2f60669b4e89]
I0907 20:25:43.431084       1 pv_controller.go:1260] deletion of volume "pvc-97cda27c-8713-4160-be1d-2f60669b4e89" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-97cda27c-8713-4160-be1d-2f60669b4e89) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_1), could not be deleted
I0907 20:25:43.431124       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-97cda27c-8713-4160-be1d-2f60669b4e89]: set phase Failed
I0907 20:25:43.431134       1 pv_controller.go:858] updating PersistentVolume[pvc-97cda27c-8713-4160-be1d-2f60669b4e89]: set phase Failed
I0907 20:25:43.435894       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-97cda27c-8713-4160-be1d-2f60669b4e89" with version 1767
I0907 20:25:43.436411       1 pv_controller.go:879] volume "pvc-97cda27c-8713-4160-be1d-2f60669b4e89" entered phase "Failed"
I0907 20:25:43.436616       1 pv_controller.go:901] volume "pvc-97cda27c-8713-4160-be1d-2f60669b4e89" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-97cda27c-8713-4160-be1d-2f60669b4e89) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_1), could not be deleted
I0907 20:25:43.436323       1 pv_protection_controller.go:205] Got event on PV pvc-97cda27c-8713-4160-be1d-2f60669b4e89
I0907 20:25:43.436355       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-97cda27c-8713-4160-be1d-2f60669b4e89" with version 1767
I0907 20:25:43.436965       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-97cda27c-8713-4160-be1d-2f60669b4e89]: phase: Failed, bound to: "azuredisk-5356/pvc-w7jkm (uid: 97cda27c-8713-4160-be1d-2f60669b4e89)", boundByController: true
I0907 20:25:43.437023       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-97cda27c-8713-4160-be1d-2f60669b4e89]: volume is bound to claim azuredisk-5356/pvc-w7jkm
I0907 20:25:43.437097       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-97cda27c-8713-4160-be1d-2f60669b4e89]: claim azuredisk-5356/pvc-w7jkm not found
I0907 20:25:43.437108       1 pv_controller.go:1108] reclaimVolume[pvc-97cda27c-8713-4160-be1d-2f60669b4e89]: policy is Delete
I0907 20:25:43.437160       1 pv_controller.go:1753] scheduleOperation[delete-pvc-97cda27c-8713-4160-be1d-2f60669b4e89[4c742071-ba1b-4546-ba86-0f4efe95b912]]
I0907 20:25:43.437173       1 pv_controller.go:1764] operation "delete-pvc-97cda27c-8713-4160-be1d-2f60669b4e89[4c742071-ba1b-4546-ba86-0f4efe95b912]" is already running, skipping
I0907 20:25:43.437555       1 event.go:291] "Event occurred" object="pvc-97cda27c-8713-4160-be1d-2f60669b4e89" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-97cda27c-8713-4160-be1d-2f60669b4e89) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_1), could not be deleted"
E0907 20:25:43.437745       1 goroutinemap.go:150] Operation for "delete-pvc-97cda27c-8713-4160-be1d-2f60669b4e89[4c742071-ba1b-4546-ba86-0f4efe95b912]" failed. No retries permitted until 2022-09-07 20:25:43.937199297 +0000 UTC m=+544.225579541 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-97cda27c-8713-4160-be1d-2f60669b4e89) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_1), could not be deleted"
I0907 20:25:44.869759       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-kddm5f-mp-0000001"
I0907 20:25:44.869812       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-97cda27c-8713-4160-be1d-2f60669b4e89 to the node "capz-kddm5f-mp-0000001" mounted false
I0907 20:25:44.958479       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-kddm5f-mp-0000001"
I0907 20:25:44.958527       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-97cda27c-8713-4160-be1d-2f60669b4e89 to the node "capz-kddm5f-mp-0000001" mounted false
I0907 20:25:44.961523       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-kddm5f-mp-0000001" succeeded. VolumesAttached: []
I0907 20:25:44.961925       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-97cda27c-8713-4160-be1d-2f60669b4e89" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-97cda27c-8713-4160-be1d-2f60669b4e89") on node "capz-kddm5f-mp-0000001" 
... skipping 11 lines ...
I0907 20:25:53.178400       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:25:53.201988       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:25:53.364031       1 gc_controller.go:161] GC'ing orphaned
I0907 20:25:53.364081       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:25:53.431016       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:25:53.431295       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-97cda27c-8713-4160-be1d-2f60669b4e89" with version 1767
I0907 20:25:53.431358       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-97cda27c-8713-4160-be1d-2f60669b4e89]: phase: Failed, bound to: "azuredisk-5356/pvc-w7jkm (uid: 97cda27c-8713-4160-be1d-2f60669b4e89)", boundByController: true
I0907 20:25:53.431412       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-97cda27c-8713-4160-be1d-2f60669b4e89]: volume is bound to claim azuredisk-5356/pvc-w7jkm
I0907 20:25:53.431435       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-97cda27c-8713-4160-be1d-2f60669b4e89]: claim azuredisk-5356/pvc-w7jkm not found
I0907 20:25:53.431450       1 pv_controller.go:1108] reclaimVolume[pvc-97cda27c-8713-4160-be1d-2f60669b4e89]: policy is Delete
I0907 20:25:53.431474       1 pv_controller.go:1753] scheduleOperation[delete-pvc-97cda27c-8713-4160-be1d-2f60669b4e89[4c742071-ba1b-4546-ba86-0f4efe95b912]]
I0907 20:25:53.431530       1 pv_controller.go:1232] deleteVolumeOperation [pvc-97cda27c-8713-4160-be1d-2f60669b4e89] started
I0907 20:25:53.445673       1 pv_controller.go:1341] isVolumeReleased[pvc-97cda27c-8713-4160-be1d-2f60669b4e89]: volume is released
I0907 20:25:53.445699       1 pv_controller.go:1405] doDeleteVolume [pvc-97cda27c-8713-4160-be1d-2f60669b4e89]
I0907 20:25:53.445911       1 pv_controller.go:1260] deletion of volume "pvc-97cda27c-8713-4160-be1d-2f60669b4e89" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-97cda27c-8713-4160-be1d-2f60669b4e89) since it's in attaching or detaching state
I0907 20:25:53.445933       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-97cda27c-8713-4160-be1d-2f60669b4e89]: set phase Failed
I0907 20:25:53.445965       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-97cda27c-8713-4160-be1d-2f60669b4e89]: phase Failed already set
E0907 20:25:53.446087       1 goroutinemap.go:150] Operation for "delete-pvc-97cda27c-8713-4160-be1d-2f60669b4e89[4c742071-ba1b-4546-ba86-0f4efe95b912]" failed. No retries permitted until 2022-09-07 20:25:54.446054398 +0000 UTC m=+554.734434642 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-97cda27c-8713-4160-be1d-2f60669b4e89) since it's in attaching or detaching state"
I0907 20:25:53.819847       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0907 20:26:00.382317       1 azure_controller_vmss.go:187] azureDisk - update(capz-kddm5f): vm(capz-kddm5f-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-97cda27c-8713-4160-be1d-2f60669b4e89) returned with <nil>
I0907 20:26:00.382389       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-97cda27c-8713-4160-be1d-2f60669b4e89) succeeded
I0907 20:26:00.382403       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-97cda27c-8713-4160-be1d-2f60669b4e89 was detached from node:capz-kddm5f-mp-0000001
I0907 20:26:00.384800       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-97cda27c-8713-4160-be1d-2f60669b4e89" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-97cda27c-8713-4160-be1d-2f60669b4e89") on node "capz-kddm5f-mp-0000001" 
I0907 20:26:01.169601       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="108.701µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:51660" resp=200
I0907 20:26:02.736608       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 24 items received
I0907 20:26:07.184942       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StatefulSet total 0 items received
I0907 20:26:08.202521       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:26:08.432091       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:26:08.432181       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-97cda27c-8713-4160-be1d-2f60669b4e89" with version 1767
I0907 20:26:08.432402       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-97cda27c-8713-4160-be1d-2f60669b4e89]: phase: Failed, bound to: "azuredisk-5356/pvc-w7jkm (uid: 97cda27c-8713-4160-be1d-2f60669b4e89)", boundByController: true
I0907 20:26:08.432481       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-97cda27c-8713-4160-be1d-2f60669b4e89]: volume is bound to claim azuredisk-5356/pvc-w7jkm
I0907 20:26:08.432501       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-97cda27c-8713-4160-be1d-2f60669b4e89]: claim azuredisk-5356/pvc-w7jkm not found
I0907 20:26:08.432548       1 pv_controller.go:1108] reclaimVolume[pvc-97cda27c-8713-4160-be1d-2f60669b4e89]: policy is Delete
I0907 20:26:08.432662       1 pv_controller.go:1753] scheduleOperation[delete-pvc-97cda27c-8713-4160-be1d-2f60669b4e89[4c742071-ba1b-4546-ba86-0f4efe95b912]]
I0907 20:26:08.432760       1 pv_controller.go:1232] deleteVolumeOperation [pvc-97cda27c-8713-4160-be1d-2f60669b4e89] started
I0907 20:26:08.439776       1 pv_controller.go:1341] isVolumeReleased[pvc-97cda27c-8713-4160-be1d-2f60669b4e89]: volume is released
... skipping 7 lines ...
I0907 20:26:13.658262       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-97cda27c-8713-4160-be1d-2f60669b4e89
I0907 20:26:13.658358       1 pv_controller.go:1436] volume "pvc-97cda27c-8713-4160-be1d-2f60669b4e89" deleted
I0907 20:26:13.658442       1 pv_controller.go:1284] deleteVolumeOperation [pvc-97cda27c-8713-4160-be1d-2f60669b4e89]: success
I0907 20:26:13.678904       1 pv_protection_controller.go:205] Got event on PV pvc-97cda27c-8713-4160-be1d-2f60669b4e89
I0907 20:26:13.678938       1 pv_protection_controller.go:125] Processing PV pvc-97cda27c-8713-4160-be1d-2f60669b4e89
I0907 20:26:13.679374       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-97cda27c-8713-4160-be1d-2f60669b4e89" with version 1813
I0907 20:26:13.679421       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-97cda27c-8713-4160-be1d-2f60669b4e89]: phase: Failed, bound to: "azuredisk-5356/pvc-w7jkm (uid: 97cda27c-8713-4160-be1d-2f60669b4e89)", boundByController: true
I0907 20:26:13.679450       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-97cda27c-8713-4160-be1d-2f60669b4e89]: volume is bound to claim azuredisk-5356/pvc-w7jkm
I0907 20:26:13.679468       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-97cda27c-8713-4160-be1d-2f60669b4e89]: claim azuredisk-5356/pvc-w7jkm not found
I0907 20:26:13.679477       1 pv_controller.go:1108] reclaimVolume[pvc-97cda27c-8713-4160-be1d-2f60669b4e89]: policy is Delete
I0907 20:26:13.679496       1 pv_controller.go:1753] scheduleOperation[delete-pvc-97cda27c-8713-4160-be1d-2f60669b4e89[4c742071-ba1b-4546-ba86-0f4efe95b912]]
I0907 20:26:13.679503       1 pv_controller.go:1764] operation "delete-pvc-97cda27c-8713-4160-be1d-2f60669b4e89[4c742071-ba1b-4546-ba86-0f4efe95b912]" is already running, skipping
I0907 20:26:13.687142       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-97cda27c-8713-4160-be1d-2f60669b4e89
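For pvc-97cda27c-8713-4160-be1d-2f60669b4e89 the same pattern repeats with two distinct transient errors: first "already attached to node(...)" while the VMSS instance still holds the disk, then "in attaching or detaching state" once detachment is in flight, and finally "deleted a managed disk" after the detach completes. A hypothetical helper (not part of this test suite) that summarizes such excerpts by counting per-PV delete failures from stdin and checking whether each disk was eventually deleted:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	pvcRe     = regexp.MustCompile(`pvc-[0-9a-f-]{36}`)
	failRe    = regexp.MustCompile(`deletion of volume .* failed`)
	deletedRe = regexp.MustCompile(`deleted a managed disk`)
)

func main() {
	fails := map[string]int{}
	deleted := map[string]bool{}

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // allow long controller-manager log lines
	for sc.Scan() {
		line := sc.Text()
		pv := pvcRe.FindString(line)
		if pv == "" {
			continue
		}
		switch {
		case failRe.MatchString(line):
			fails[pv]++ // a deferred delete attempt for this PV
		case deletedRe.MatchString(line):
			deleted[pv] = true // the managed disk was finally removed
		}
	}
	for pv, n := range fails {
		fmt.Printf("%s: %d failed delete attempts, eventually deleted: %v\n", pv, n, deleted[pv])
	}
}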
... skipping 910 lines ...
I0907 20:27:54.410522       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57]: claim azuredisk-5194/pvc-vqg9p not found
I0907 20:27:54.410533       1 pv_controller.go:1108] reclaimVolume[pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57]: policy is Delete
I0907 20:27:54.410548       1 pv_controller.go:1753] scheduleOperation[delete-pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57[f56e79c6-a956-4b3b-8bc6-e451bb21b922]]
I0907 20:27:54.410574       1 pv_controller.go:1764] operation "delete-pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57[f56e79c6-a956-4b3b-8bc6-e451bb21b922]" is already running, skipping
I0907 20:27:54.412731       1 pv_controller.go:1341] isVolumeReleased[pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57]: volume is released
I0907 20:27:54.412750       1 pv_controller.go:1405] doDeleteVolume [pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57]
I0907 20:27:54.446928       1 pv_controller.go:1260] deletion of volume "pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_1), could not be deleted
I0907 20:27:54.447229       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57]: set phase Failed
I0907 20:27:54.447250       1 pv_controller.go:858] updating PersistentVolume[pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57]: set phase Failed
I0907 20:27:54.454587       1 pv_protection_controller.go:205] Got event on PV pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57
I0907 20:27:54.454587       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57" with version 2060
I0907 20:27:54.454632       1 pv_controller.go:879] volume "pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57" entered phase "Failed"
I0907 20:27:54.454642       1 pv_controller.go:901] volume "pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_1), could not be deleted
E0907 20:27:54.454701       1 goroutinemap.go:150] Operation for "delete-pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57[f56e79c6-a956-4b3b-8bc6-e451bb21b922]" failed. No retries permitted until 2022-09-07 20:27:54.954669444 +0000 UTC m=+675.243049788 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_1), could not be deleted"
I0907 20:27:54.454983       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57" with version 2060
I0907 20:27:54.455208       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57]: phase: Failed, bound to: "azuredisk-5194/pvc-vqg9p (uid: ae657f0b-3a6e-4f65-b9df-88be3326fc57)", boundByController: true
I0907 20:27:54.455419       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57]: volume is bound to claim azuredisk-5194/pvc-vqg9p
I0907 20:27:54.455573       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57]: claim azuredisk-5194/pvc-vqg9p not found
I0907 20:27:54.455732       1 pv_controller.go:1108] reclaimVolume[pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57]: policy is Delete
I0907 20:27:54.455863       1 pv_controller.go:1753] scheduleOperation[delete-pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57[f56e79c6-a956-4b3b-8bc6-e451bb21b922]]
I0907 20:27:54.455982       1 pv_controller.go:1766] operation "delete-pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57[f56e79c6-a956-4b3b-8bc6-e451bb21b922]" postponed due to exponential backoff
I0907 20:27:54.455099       1 event.go:291] "Event occurred" object="pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_1), could not be deleted"
... skipping 52 lines ...
I0907 20:28:08.438104       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: volume is bound to claim azuredisk-5194/pvc-c5tvr
I0907 20:28:08.438117       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: claim azuredisk-5194/pvc-c5tvr found: phase: Bound, bound to: "pvc-70feb334-5699-4b12-9393-0f7e2055411f", bindCompleted: true, boundByController: true
I0907 20:28:08.438135       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: all is bound
I0907 20:28:08.438142       1 pv_controller.go:858] updating PersistentVolume[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: set phase Bound
I0907 20:28:08.438151       1 pv_controller.go:861] updating PersistentVolume[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: phase Bound already set
I0907 20:28:08.438164       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57" with version 2060
I0907 20:28:08.438191       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57]: phase: Failed, bound to: "azuredisk-5194/pvc-vqg9p (uid: ae657f0b-3a6e-4f65-b9df-88be3326fc57)", boundByController: true
I0907 20:28:08.438209       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57]: volume is bound to claim azuredisk-5194/pvc-vqg9p
I0907 20:28:08.438220       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57]: claim azuredisk-5194/pvc-vqg9p not found
I0907 20:28:08.438226       1 pv_controller.go:1108] reclaimVolume[pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57]: policy is Delete
I0907 20:28:08.438241       1 pv_controller.go:1753] scheduleOperation[delete-pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57[f56e79c6-a956-4b3b-8bc6-e451bb21b922]]
I0907 20:28:08.438259       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-cab43c89-0996-4b0c-8fe7-d38292953101" with version 1840
I0907 20:28:08.438274       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: phase: Bound, bound to: "azuredisk-5194/pvc-bzbds (uid: cab43c89-0996-4b0c-8fe7-d38292953101)", boundByController: true
... skipping 2 lines ...
I0907 20:28:08.438302       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: all is bound
I0907 20:28:08.438306       1 pv_controller.go:858] updating PersistentVolume[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: set phase Bound
I0907 20:28:08.438311       1 pv_controller.go:861] updating PersistentVolume[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: phase Bound already set
I0907 20:28:08.438336       1 pv_controller.go:1232] deleteVolumeOperation [pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57] started
I0907 20:28:08.444723       1 pv_controller.go:1341] isVolumeReleased[pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57]: volume is released
I0907 20:28:08.444744       1 pv_controller.go:1405] doDeleteVolume [pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57]
I0907 20:28:08.444822       1 pv_controller.go:1260] deletion of volume "pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57) since it's in attaching or detaching state
I0907 20:28:08.444840       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57]: set phase Failed
I0907 20:28:08.444872       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57]: phase Failed already set
E0907 20:28:08.444926       1 goroutinemap.go:150] Operation for "delete-pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57[f56e79c6-a956-4b3b-8bc6-e451bb21b922]" failed. No retries permitted until 2022-09-07 20:28:09.444889243 +0000 UTC m=+689.733269587 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57) since it's in attaching or detaching state"
I0907 20:28:10.302944       1 azure_controller_vmss.go:187] azureDisk - update(capz-kddm5f): vm(capz-kddm5f-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57) returned with <nil>
I0907 20:28:10.303013       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57) succeeded
I0907 20:28:10.303025       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57 was detached from node:capz-kddm5f-mp-0000001
I0907 20:28:10.303235       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57") on node "capz-kddm5f-mp-0000001" 
I0907 20:28:11.169110       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="84.3µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:50412" resp=200
I0907 20:28:13.369260       1 gc_controller.go:161] GC'ing orphaned
... skipping 46 lines ...
I0907 20:28:23.438819       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: volume is bound to claim azuredisk-5194/pvc-c5tvr
I0907 20:28:23.438832       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: claim azuredisk-5194/pvc-c5tvr found: phase: Bound, bound to: "pvc-70feb334-5699-4b12-9393-0f7e2055411f", bindCompleted: true, boundByController: true
I0907 20:28:23.438845       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: all is bound
I0907 20:28:23.438852       1 pv_controller.go:858] updating PersistentVolume[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: set phase Bound
I0907 20:28:23.438876       1 pv_controller.go:861] updating PersistentVolume[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: phase Bound already set
I0907 20:28:23.438894       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57" with version 2060
I0907 20:28:23.438911       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57]: phase: Failed, bound to: "azuredisk-5194/pvc-vqg9p (uid: ae657f0b-3a6e-4f65-b9df-88be3326fc57)", boundByController: true
I0907 20:28:23.438932       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57]: volume is bound to claim azuredisk-5194/pvc-vqg9p
I0907 20:28:23.438950       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57]: claim azuredisk-5194/pvc-vqg9p not found
I0907 20:28:23.438958       1 pv_controller.go:1108] reclaimVolume[pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57]: policy is Delete
I0907 20:28:23.438980       1 pv_controller.go:1753] scheduleOperation[delete-pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57[f56e79c6-a956-4b3b-8bc6-e451bb21b922]]
I0907 20:28:23.439016       1 pv_controller.go:1232] deleteVolumeOperation [pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57] started
I0907 20:28:23.451388       1 pv_controller.go:1341] isVolumeReleased[pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57]: volume is released
... skipping 2 lines ...
I0907 20:28:28.769074       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57
I0907 20:28:28.769125       1 pv_controller.go:1436] volume "pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57" deleted
I0907 20:28:28.769142       1 pv_controller.go:1284] deleteVolumeOperation [pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57]: success
I0907 20:28:28.781313       1 pv_protection_controller.go:205] Got event on PV pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57
I0907 20:28:28.781360       1 pv_protection_controller.go:125] Processing PV pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57
I0907 20:28:28.782099       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57" with version 2113
I0907 20:28:28.782179       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57]: phase: Failed, bound to: "azuredisk-5194/pvc-vqg9p (uid: ae657f0b-3a6e-4f65-b9df-88be3326fc57)", boundByController: true
I0907 20:28:28.782275       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57]: volume is bound to claim azuredisk-5194/pvc-vqg9p
I0907 20:28:28.782326       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57]: claim azuredisk-5194/pvc-vqg9p not found
I0907 20:28:28.782390       1 pv_controller.go:1108] reclaimVolume[pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57]: policy is Delete
I0907 20:28:28.782456       1 pv_controller.go:1753] scheduleOperation[delete-pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57[f56e79c6-a956-4b3b-8bc6-e451bb21b922]]
I0907 20:28:28.782531       1 pv_controller.go:1232] deleteVolumeOperation [pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57] started
I0907 20:28:28.787401       1 pv_controller.go:1244] Volume "pvc-ae657f0b-3a6e-4f65-b9df-88be3326fc57" is already being deleted
... skipping 251 lines ...
I0907 20:29:14.416125       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: claim azuredisk-5194/pvc-c5tvr not found
I0907 20:29:14.416137       1 pv_controller.go:1108] reclaimVolume[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: policy is Delete
I0907 20:29:14.416150       1 pv_controller.go:1753] scheduleOperation[delete-pvc-70feb334-5699-4b12-9393-0f7e2055411f[065f845e-3ed1-4687-8143-f6636a48d9b9]]
I0907 20:29:14.416158       1 pv_controller.go:1764] operation "delete-pvc-70feb334-5699-4b12-9393-0f7e2055411f[065f845e-3ed1-4687-8143-f6636a48d9b9]" is already running, skipping
I0907 20:29:14.420635       1 pv_controller.go:1341] isVolumeReleased[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: volume is released
I0907 20:29:14.420652       1 pv_controller.go:1405] doDeleteVolume [pvc-70feb334-5699-4b12-9393-0f7e2055411f]
I0907 20:29:14.491868       1 pv_controller.go:1260] deletion of volume "pvc-70feb334-5699-4b12-9393-0f7e2055411f" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-70feb334-5699-4b12-9393-0f7e2055411f) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_1), could not be deleted
I0907 20:29:14.491983       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: set phase Failed
I0907 20:29:14.492058       1 pv_controller.go:858] updating PersistentVolume[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: set phase Failed
I0907 20:29:14.500716       1 pv_protection_controller.go:205] Got event on PV pvc-70feb334-5699-4b12-9393-0f7e2055411f
I0907 20:29:14.500972       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-70feb334-5699-4b12-9393-0f7e2055411f" with version 2193
I0907 20:29:14.501282       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: phase: Failed, bound to: "azuredisk-5194/pvc-c5tvr (uid: 70feb334-5699-4b12-9393-0f7e2055411f)", boundByController: true
I0907 20:29:14.501442       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: volume is bound to claim azuredisk-5194/pvc-c5tvr
I0907 20:29:14.501534       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: claim azuredisk-5194/pvc-c5tvr not found
I0907 20:29:14.501549       1 pv_controller.go:1108] reclaimVolume[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: policy is Delete
I0907 20:29:14.501588       1 pv_controller.go:1753] scheduleOperation[delete-pvc-70feb334-5699-4b12-9393-0f7e2055411f[065f845e-3ed1-4687-8143-f6636a48d9b9]]
I0907 20:29:14.501597       1 pv_controller.go:1764] operation "delete-pvc-70feb334-5699-4b12-9393-0f7e2055411f[065f845e-3ed1-4687-8143-f6636a48d9b9]" is already running, skipping
I0907 20:29:14.501208       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-70feb334-5699-4b12-9393-0f7e2055411f" with version 2193
I0907 20:29:14.501623       1 pv_controller.go:879] volume "pvc-70feb334-5699-4b12-9393-0f7e2055411f" entered phase "Failed"
I0907 20:29:14.501968       1 pv_controller.go:901] volume "pvc-70feb334-5699-4b12-9393-0f7e2055411f" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-70feb334-5699-4b12-9393-0f7e2055411f) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_1), could not be deleted
E0907 20:29:14.502052       1 goroutinemap.go:150] Operation for "delete-pvc-70feb334-5699-4b12-9393-0f7e2055411f[065f845e-3ed1-4687-8143-f6636a48d9b9]" failed. No retries permitted until 2022-09-07 20:29:15.002010517 +0000 UTC m=+755.290390861 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-70feb334-5699-4b12-9393-0f7e2055411f) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_1), could not be deleted"
I0907 20:29:14.502266       1 event.go:291] "Event occurred" object="pvc-70feb334-5699-4b12-9393-0f7e2055411f" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-70feb334-5699-4b12-9393-0f7e2055411f) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_1), could not be deleted"
I0907 20:29:15.067129       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-kddm5f-mp-0000001"
I0907 20:29:15.067184       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-70feb334-5699-4b12-9393-0f7e2055411f to the node "capz-kddm5f-mp-0000001" mounted false
I0907 20:29:15.130145       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-kddm5f-mp-0000001"
I0907 20:29:15.130187       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-70feb334-5699-4b12-9393-0f7e2055411f to the node "capz-kddm5f-mp-0000001" mounted false
I0907 20:29:15.130983       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-kddm5f-mp-0000001" succeeded. VolumesAttached: []
... skipping 23 lines ...
I0907 20:29:23.441833       1 pv_controller.go:861] updating PersistentVolume[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: phase Bound already set
I0907 20:29:23.441854       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-70feb334-5699-4b12-9393-0f7e2055411f" with version 2193
I0907 20:29:23.441682       1 pv_controller.go:922] updating PersistentVolume[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: already bound to "azuredisk-5194/pvc-bzbds"
I0907 20:29:23.441926       1 pv_controller.go:858] updating PersistentVolume[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: set phase Bound
I0907 20:29:23.442009       1 pv_controller.go:861] updating PersistentVolume[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: phase Bound already set
I0907 20:29:23.442042       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-5194/pvc-bzbds]: binding to "pvc-cab43c89-0996-4b0c-8fe7-d38292953101"
I0907 20:29:23.442030       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: phase: Failed, bound to: "azuredisk-5194/pvc-c5tvr (uid: 70feb334-5699-4b12-9393-0f7e2055411f)", boundByController: true
I0907 20:29:23.442143       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: volume is bound to claim azuredisk-5194/pvc-c5tvr
I0907 20:29:23.442156       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: claim azuredisk-5194/pvc-c5tvr not found
I0907 20:29:23.442174       1 pv_controller.go:1108] reclaimVolume[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: policy is Delete
I0907 20:29:23.442109       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-5194/pvc-bzbds]: already bound to "pvc-cab43c89-0996-4b0c-8fe7-d38292953101"
I0907 20:29:23.442194       1 pv_controller.go:1753] scheduleOperation[delete-pvc-70feb334-5699-4b12-9393-0f7e2055411f[065f845e-3ed1-4687-8143-f6636a48d9b9]]
I0907 20:29:23.442202       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-5194/pvc-bzbds] status: set phase Bound
I0907 20:29:23.442228       1 pv_controller.go:1232] deleteVolumeOperation [pvc-70feb334-5699-4b12-9393-0f7e2055411f] started
I0907 20:29:23.442229       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-5194/pvc-bzbds] status: phase Bound already set
I0907 20:29:23.442319       1 pv_controller.go:1038] volume "pvc-cab43c89-0996-4b0c-8fe7-d38292953101" bound to claim "azuredisk-5194/pvc-bzbds"
I0907 20:29:23.442338       1 pv_controller.go:1039] volume "pvc-cab43c89-0996-4b0c-8fe7-d38292953101" status after binding: phase: Bound, bound to: "azuredisk-5194/pvc-bzbds (uid: cab43c89-0996-4b0c-8fe7-d38292953101)", boundByController: true
I0907 20:29:23.442352       1 pv_controller.go:1040] claim "azuredisk-5194/pvc-bzbds" status after binding: phase: Bound, bound to: "pvc-cab43c89-0996-4b0c-8fe7-d38292953101", bindCompleted: true, boundByController: true
I0907 20:29:23.454440       1 pv_controller.go:1341] isVolumeReleased[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: volume is released
I0907 20:29:23.454459       1 pv_controller.go:1405] doDeleteVolume [pvc-70feb334-5699-4b12-9393-0f7e2055411f]
I0907 20:29:23.454509       1 pv_controller.go:1260] deletion of volume "pvc-70feb334-5699-4b12-9393-0f7e2055411f" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-70feb334-5699-4b12-9393-0f7e2055411f) since it's in attaching or detaching state
I0907 20:29:23.454522       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: set phase Failed
I0907 20:29:23.454530       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: phase Failed already set
E0907 20:29:23.454568       1 goroutinemap.go:150] Operation for "delete-pvc-70feb334-5699-4b12-9393-0f7e2055411f[065f845e-3ed1-4687-8143-f6636a48d9b9]" failed. No retries permitted until 2022-09-07 20:29:24.454541361 +0000 UTC m=+764.742921705 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-70feb334-5699-4b12-9393-0f7e2055411f) since it's in attaching or detaching state"
I0907 20:29:24.008954       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0907 20:29:30.491437       1 azure_controller_vmss.go:187] azureDisk - update(capz-kddm5f): vm(capz-kddm5f-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-70feb334-5699-4b12-9393-0f7e2055411f) returned with <nil>
I0907 20:29:30.491512       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-70feb334-5699-4b12-9393-0f7e2055411f) succeeded
I0907 20:29:30.491527       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-70feb334-5699-4b12-9393-0f7e2055411f was detached from node:capz-kddm5f-mp-0000001
I0907 20:29:30.491559       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-70feb334-5699-4b12-9393-0f7e2055411f" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-70feb334-5699-4b12-9393-0f7e2055411f") on node "capz-kddm5f-mp-0000001" 
I0907 20:29:31.169725       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="104.7µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:59222" resp=200
... skipping 16 lines ...
I0907 20:29:38.441870       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-5194/pvc-bzbds] status: set phase Bound
I0907 20:29:38.441978       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-5194/pvc-bzbds] status: phase Bound already set
I0907 20:29:38.442034       1 pv_controller.go:1038] volume "pvc-cab43c89-0996-4b0c-8fe7-d38292953101" bound to claim "azuredisk-5194/pvc-bzbds"
I0907 20:29:38.442063       1 pv_controller.go:1039] volume "pvc-cab43c89-0996-4b0c-8fe7-d38292953101" status after binding: phase: Bound, bound to: "azuredisk-5194/pvc-bzbds (uid: cab43c89-0996-4b0c-8fe7-d38292953101)", boundByController: true
I0907 20:29:38.442117       1 pv_controller.go:1040] claim "azuredisk-5194/pvc-bzbds" status after binding: phase: Bound, bound to: "pvc-cab43c89-0996-4b0c-8fe7-d38292953101", bindCompleted: true, boundByController: true
I0907 20:29:38.442168       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-70feb334-5699-4b12-9393-0f7e2055411f" with version 2193
I0907 20:29:38.442236       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: phase: Failed, bound to: "azuredisk-5194/pvc-c5tvr (uid: 70feb334-5699-4b12-9393-0f7e2055411f)", boundByController: true
I0907 20:29:38.442308       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: volume is bound to claim azuredisk-5194/pvc-c5tvr
I0907 20:29:38.442336       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: claim azuredisk-5194/pvc-c5tvr not found
I0907 20:29:38.442381       1 pv_controller.go:1108] reclaimVolume[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: policy is Delete
I0907 20:29:38.442422       1 pv_controller.go:1753] scheduleOperation[delete-pvc-70feb334-5699-4b12-9393-0f7e2055411f[065f845e-3ed1-4687-8143-f6636a48d9b9]]
I0907 20:29:38.442492       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-cab43c89-0996-4b0c-8fe7-d38292953101" with version 1840
I0907 20:29:38.442522       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: phase: Bound, bound to: "azuredisk-5194/pvc-bzbds (uid: cab43c89-0996-4b0c-8fe7-d38292953101)", boundByController: true
... skipping 11 lines ...
I0907 20:29:43.739118       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-70feb334-5699-4b12-9393-0f7e2055411f
I0907 20:29:43.739165       1 pv_controller.go:1436] volume "pvc-70feb334-5699-4b12-9393-0f7e2055411f" deleted
I0907 20:29:43.739180       1 pv_controller.go:1284] deleteVolumeOperation [pvc-70feb334-5699-4b12-9393-0f7e2055411f]: success
I0907 20:29:43.752952       1 pv_protection_controller.go:205] Got event on PV pvc-70feb334-5699-4b12-9393-0f7e2055411f
I0907 20:29:43.752992       1 pv_protection_controller.go:125] Processing PV pvc-70feb334-5699-4b12-9393-0f7e2055411f
I0907 20:29:43.753310       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-70feb334-5699-4b12-9393-0f7e2055411f" with version 2237
I0907 20:29:43.753349       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: phase: Failed, bound to: "azuredisk-5194/pvc-c5tvr (uid: 70feb334-5699-4b12-9393-0f7e2055411f)", boundByController: true
I0907 20:29:43.753372       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: volume is bound to claim azuredisk-5194/pvc-c5tvr
I0907 20:29:43.753387       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: claim azuredisk-5194/pvc-c5tvr not found
I0907 20:29:43.753397       1 pv_controller.go:1108] reclaimVolume[pvc-70feb334-5699-4b12-9393-0f7e2055411f]: policy is Delete
I0907 20:29:43.753417       1 pv_controller.go:1753] scheduleOperation[delete-pvc-70feb334-5699-4b12-9393-0f7e2055411f[065f845e-3ed1-4687-8143-f6636a48d9b9]]
I0907 20:29:43.753438       1 pv_controller.go:1232] deleteVolumeOperation [pvc-70feb334-5699-4b12-9393-0f7e2055411f] started
I0907 20:29:43.758059       1 pv_controller.go:1244] Volume "pvc-70feb334-5699-4b12-9393-0f7e2055411f" is already being deleted
... skipping 155 lines ...
I0907 20:30:23.128246       1 pv_controller.go:1753] scheduleOperation[delete-pvc-cab43c89-0996-4b0c-8fe7-d38292953101[24b8b8fe-24bf-4aae-93ff-3d185f3326fe]]
I0907 20:30:23.128257       1 pv_controller.go:1764] operation "delete-pvc-cab43c89-0996-4b0c-8fe7-d38292953101[24b8b8fe-24bf-4aae-93ff-3d185f3326fe]" is already running, skipping
I0907 20:30:23.128287       1 pv_controller.go:1232] deleteVolumeOperation [pvc-cab43c89-0996-4b0c-8fe7-d38292953101] started
I0907 20:30:23.127927       1 pv_protection_controller.go:205] Got event on PV pvc-cab43c89-0996-4b0c-8fe7-d38292953101
I0907 20:30:23.131418       1 pv_controller.go:1341] isVolumeReleased[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: volume is released
I0907 20:30:23.131688       1 pv_controller.go:1405] doDeleteVolume [pvc-cab43c89-0996-4b0c-8fe7-d38292953101]
I0907 20:30:23.161798       1 pv_controller.go:1260] deletion of volume "pvc-cab43c89-0996-4b0c-8fe7-d38292953101" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-cab43c89-0996-4b0c-8fe7-d38292953101) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_0), could not be deleted
I0907 20:30:23.161830       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: set phase Failed
I0907 20:30:23.161843       1 pv_controller.go:858] updating PersistentVolume[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: set phase Failed
I0907 20:30:23.168630       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-cab43c89-0996-4b0c-8fe7-d38292953101" with version 2307
I0907 20:30:23.168682       1 pv_controller.go:879] volume "pvc-cab43c89-0996-4b0c-8fe7-d38292953101" entered phase "Failed"
I0907 20:30:23.168695       1 pv_controller.go:901] volume "pvc-cab43c89-0996-4b0c-8fe7-d38292953101" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-cab43c89-0996-4b0c-8fe7-d38292953101) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_0), could not be deleted
E0907 20:30:23.168756       1 goroutinemap.go:150] Operation for "delete-pvc-cab43c89-0996-4b0c-8fe7-d38292953101[24b8b8fe-24bf-4aae-93ff-3d185f3326fe]" failed. No retries permitted until 2022-09-07 20:30:23.668722882 +0000 UTC m=+823.957103126 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-cab43c89-0996-4b0c-8fe7-d38292953101) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_0), could not be deleted"
I0907 20:30:23.169101       1 event.go:291] "Event occurred" object="pvc-cab43c89-0996-4b0c-8fe7-d38292953101" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-cab43c89-0996-4b0c-8fe7-d38292953101) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_0), could not be deleted"
I0907 20:30:23.169284       1 pv_protection_controller.go:205] Got event on PV pvc-cab43c89-0996-4b0c-8fe7-d38292953101
I0907 20:30:23.169321       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-cab43c89-0996-4b0c-8fe7-d38292953101" with version 2307
I0907 20:30:23.169349       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: phase: Failed, bound to: "azuredisk-5194/pvc-bzbds (uid: cab43c89-0996-4b0c-8fe7-d38292953101)", boundByController: true
I0907 20:30:23.169381       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: volume is bound to claim azuredisk-5194/pvc-bzbds
I0907 20:30:23.169400       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: claim azuredisk-5194/pvc-bzbds not found
I0907 20:30:23.169411       1 pv_controller.go:1108] reclaimVolume[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: policy is Delete
I0907 20:30:23.169431       1 pv_controller.go:1753] scheduleOperation[delete-pvc-cab43c89-0996-4b0c-8fe7-d38292953101[24b8b8fe-24bf-4aae-93ff-3d185f3326fe]]
I0907 20:30:23.169443       1 pv_controller.go:1766] operation "delete-pvc-cab43c89-0996-4b0c-8fe7-d38292953101[24b8b8fe-24bf-4aae-93ff-3d185f3326fe]" postponed due to exponential backoff
I0907 20:30:23.184870       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:30:23.214657       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:30:23.444246       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:30:23.444596       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-cab43c89-0996-4b0c-8fe7-d38292953101" with version 2307
I0907 20:30:23.444655       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: phase: Failed, bound to: "azuredisk-5194/pvc-bzbds (uid: cab43c89-0996-4b0c-8fe7-d38292953101)", boundByController: true
I0907 20:30:23.444709       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: volume is bound to claim azuredisk-5194/pvc-bzbds
I0907 20:30:23.444732       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: claim azuredisk-5194/pvc-bzbds not found
I0907 20:30:23.444746       1 pv_controller.go:1108] reclaimVolume[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: policy is Delete
I0907 20:30:23.444771       1 pv_controller.go:1753] scheduleOperation[delete-pvc-cab43c89-0996-4b0c-8fe7-d38292953101[24b8b8fe-24bf-4aae-93ff-3d185f3326fe]]
I0907 20:30:23.444783       1 pv_controller.go:1766] operation "delete-pvc-cab43c89-0996-4b0c-8fe7-d38292953101[24b8b8fe-24bf-4aae-93ff-3d185f3326fe]" postponed due to exponential backoff
I0907 20:30:23.857222       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-kddm5f-mp-0000000"
... skipping 12 lines ...
I0907 20:30:31.169772       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="91.1µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:42074" resp=200
I0907 20:30:33.374138       1 gc_controller.go:161] GC'ing orphaned
I0907 20:30:33.374285       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:30:38.215795       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:30:38.445051       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:30:38.445290       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-cab43c89-0996-4b0c-8fe7-d38292953101" with version 2307
I0907 20:30:38.445404       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: phase: Failed, bound to: "azuredisk-5194/pvc-bzbds (uid: cab43c89-0996-4b0c-8fe7-d38292953101)", boundByController: true
I0907 20:30:38.445452       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: volume is bound to claim azuredisk-5194/pvc-bzbds
I0907 20:30:38.445476       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: claim azuredisk-5194/pvc-bzbds not found
I0907 20:30:38.445490       1 pv_controller.go:1108] reclaimVolume[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: policy is Delete
I0907 20:30:38.445512       1 pv_controller.go:1753] scheduleOperation[delete-pvc-cab43c89-0996-4b0c-8fe7-d38292953101[24b8b8fe-24bf-4aae-93ff-3d185f3326fe]]
I0907 20:30:38.445547       1 pv_controller.go:1232] deleteVolumeOperation [pvc-cab43c89-0996-4b0c-8fe7-d38292953101] started
I0907 20:30:38.449621       1 pv_controller.go:1341] isVolumeReleased[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: volume is released
I0907 20:30:38.449642       1 pv_controller.go:1405] doDeleteVolume [pvc-cab43c89-0996-4b0c-8fe7-d38292953101]
I0907 20:30:38.449823       1 pv_controller.go:1260] deletion of volume "pvc-cab43c89-0996-4b0c-8fe7-d38292953101" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-cab43c89-0996-4b0c-8fe7-d38292953101) since it's in attaching or detaching state
I0907 20:30:38.449844       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: set phase Failed
I0907 20:30:38.449855       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: phase Failed already set
E0907 20:30:38.449982       1 goroutinemap.go:150] Operation for "delete-pvc-cab43c89-0996-4b0c-8fe7-d38292953101[24b8b8fe-24bf-4aae-93ff-3d185f3326fe]" failed. No retries permitted until 2022-09-07 20:30:39.44993191 +0000 UTC m=+839.738312154 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-cab43c89-0996-4b0c-8fe7-d38292953101) since it's in attaching or detaching state"
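The two retry errors above show the same delete operation being postponed first for 500ms and then for 1s; later in this log the delay for another volume reaches 2s and then 4s. A minimal Go sketch of that kind of doubling backoff follows; the 500ms starting delay, the two-minute cap, and the deleteDisk stub are illustrative assumptions, not the controller's actual constants or API.

package main

import (
	"fmt"
	"time"
)

// calls lets the stub fail twice so the backoff progression is visible.
var calls int

// deleteDisk stands in for the real cloud call; it is an assumed stub.
func deleteDisk() error {
	calls++
	if calls < 3 {
		return fmt.Errorf("disk still attached to node")
	}
	return nil
}

func main() {
	delay := 500 * time.Millisecond // assumed initial delay
	const maxDelay = 2 * time.Minute // assumed cap

	for attempt := 1; ; attempt++ {
		if err := deleteDisk(); err != nil {
			fmt.Printf("attempt %d failed: %v; no retries permitted for %v\n", attempt, err, delay)
			time.Sleep(delay)
			// Double the delay after each failure, up to the cap.
			delay *= 2
			if delay > maxDelay {
				delay = maxDelay
			}
			continue
		}
		fmt.Println("delete succeeded")
		return
	}
}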
I0907 20:30:39.208647       1 azure_controller_vmss.go:187] azureDisk - update(capz-kddm5f): vm(capz-kddm5f-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-cab43c89-0996-4b0c-8fe7-d38292953101) returned with <nil>
I0907 20:30:39.208726       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-cab43c89-0996-4b0c-8fe7-d38292953101) succeeded
I0907 20:30:39.208742       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-cab43c89-0996-4b0c-8fe7-d38292953101 was detached from node:capz-kddm5f-mp-0000000
I0907 20:30:39.208769       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-cab43c89-0996-4b0c-8fe7-d38292953101" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-cab43c89-0996-4b0c-8fe7-d38292953101") on node "capz-kddm5f-mp-0000000" 
I0907 20:30:41.169210       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="84.201µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:47376" resp=200
I0907 20:30:41.175678       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CSIDriver total 0 items received
... skipping 5 lines ...
I0907 20:30:53.186458       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Role total 0 items received
I0907 20:30:53.216048       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:30:53.375342       1 gc_controller.go:161] GC'ing orphaned
I0907 20:30:53.375392       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:30:53.445581       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:30:53.446374       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-cab43c89-0996-4b0c-8fe7-d38292953101" with version 2307
I0907 20:30:53.446503       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: phase: Failed, bound to: "azuredisk-5194/pvc-bzbds (uid: cab43c89-0996-4b0c-8fe7-d38292953101)", boundByController: true
I0907 20:30:53.446553       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: volume is bound to claim azuredisk-5194/pvc-bzbds
I0907 20:30:53.446581       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: claim azuredisk-5194/pvc-bzbds not found
I0907 20:30:53.446593       1 pv_controller.go:1108] reclaimVolume[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: policy is Delete
I0907 20:30:53.446617       1 pv_controller.go:1753] scheduleOperation[delete-pvc-cab43c89-0996-4b0c-8fe7-d38292953101[24b8b8fe-24bf-4aae-93ff-3d185f3326fe]]
I0907 20:30:53.446663       1 pv_controller.go:1232] deleteVolumeOperation [pvc-cab43c89-0996-4b0c-8fe7-d38292953101] started
I0907 20:30:53.458633       1 pv_controller.go:1341] isVolumeReleased[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: volume is released
... skipping 2 lines ...
I0907 20:30:58.707080       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-cab43c89-0996-4b0c-8fe7-d38292953101
I0907 20:30:58.707129       1 pv_controller.go:1436] volume "pvc-cab43c89-0996-4b0c-8fe7-d38292953101" deleted
I0907 20:30:58.707141       1 pv_controller.go:1284] deleteVolumeOperation [pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: success
I0907 20:30:58.713744       1 pv_protection_controller.go:205] Got event on PV pvc-cab43c89-0996-4b0c-8fe7-d38292953101
I0907 20:30:58.713790       1 pv_protection_controller.go:125] Processing PV pvc-cab43c89-0996-4b0c-8fe7-d38292953101
I0907 20:30:58.714107       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-cab43c89-0996-4b0c-8fe7-d38292953101" with version 2362
I0907 20:30:58.714183       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: phase: Failed, bound to: "azuredisk-5194/pvc-bzbds (uid: cab43c89-0996-4b0c-8fe7-d38292953101)", boundByController: true
I0907 20:30:58.714322       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: volume is bound to claim azuredisk-5194/pvc-bzbds
I0907 20:30:58.714351       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: claim azuredisk-5194/pvc-bzbds not found
I0907 20:30:58.714361       1 pv_controller.go:1108] reclaimVolume[pvc-cab43c89-0996-4b0c-8fe7-d38292953101]: policy is Delete
I0907 20:30:58.714406       1 pv_controller.go:1753] scheduleOperation[delete-pvc-cab43c89-0996-4b0c-8fe7-d38292953101[24b8b8fe-24bf-4aae-93ff-3d185f3326fe]]
I0907 20:30:58.714582       1 pv_controller.go:1232] deleteVolumeOperation [pvc-cab43c89-0996-4b0c-8fe7-d38292953101] started
I0907 20:30:58.719124       1 pv_controller.go:1244] Volume "pvc-cab43c89-0996-4b0c-8fe7-d38292953101" is already being deleted
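The sequence above (delete fails while the disk is in an attaching or detaching state, the detach completes, and the next delete attempt succeeds) amounts to polling the disk's attachment state before retrying the delete. A small illustrative Go sketch of that pattern follows; diskState and deleteManagedDisk are stand-in stubs under assumed names, not the cloud provider's real API.

package main

import (
	"errors"
	"fmt"
	"time"
)

// state pretends the detach finishes after one poll.
var state = "detaching"

// diskState is an assumed stub for querying the disk's attachment state.
func diskState() string {
	if state == "detaching" {
		state = "unattached"
		return "detaching"
	}
	return state
}

// deleteManagedDisk refuses to delete while the disk is mid-attach/detach,
// mirroring the "since it's in attaching or detaching state" error above.
func deleteManagedDisk() error {
	if s := diskState(); s == "attaching" || s == "detaching" {
		return errors.New("disk is in attaching or detaching state")
	}
	return nil
}

func main() {
	deadline := time.Now().Add(30 * time.Second) // assumed overall timeout
	for time.Now().Before(deadline) {
		if err := deleteManagedDisk(); err != nil {
			fmt.Println("delete postponed:", err)
			time.Sleep(time.Second)
			continue
		}
		fmt.Println("managed disk deleted")
		return
	}
	fmt.Println("timed out waiting for detach")
}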
... skipping 44 lines ...
I0907 20:31:00.622854       1 pv_controller.go:1501] provisionClaimOperation [azuredisk-1353/pvc-mdl9v]: plugin name: kubernetes.io/azure-disk, provisioner name: kubernetes.io/azure-disk
I0907 20:31:00.623126       1 pvc_protection_controller.go:353] "Got event on PVC" pvc="azuredisk-1353/pvc-mdl9v"
I0907 20:31:00.626555       1 replica_set.go:649] Finished syncing ReplicaSet "azuredisk-1353/azuredisk-volume-tester-kcxqr-7c65c875bd" (28.428786ms)
I0907 20:31:00.626773       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-1353/azuredisk-volume-tester-kcxqr-7c65c875bd", timestamp:time.Time{wall:0xc0be5dc123a89318, ext:860886630508, loc:(*time.Location)(0x731ea80)}}
I0907 20:31:00.627033       1 replica_set_utils.go:59] Updating status for : azuredisk-1353/azuredisk-volume-tester-kcxqr-7c65c875bd, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0907 20:31:00.627426       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-kcxqr" duration="35.775633ms"
I0907 20:31:00.627585       1 deployment_controller.go:490] "Error syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-kcxqr" err="Operation cannot be fulfilled on deployments.apps \"azuredisk-volume-tester-kcxqr\": the object has been modified; please apply your changes to the latest version and try again"
I0907 20:31:00.627761       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-kcxqr" startTime="2022-09-07 20:31:00.627731256 +0000 UTC m=+860.916111500"
I0907 20:31:00.628370       1 deployment_util.go:808] Deployment "azuredisk-volume-tester-kcxqr" timed out (false) [last progress check: 2022-09-07 20:31:00 +0000 UTC - now: 2022-09-07 20:31:00.628362661 +0000 UTC m=+860.916742905]
I0907 20:31:00.632628       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-1353/pvc-mdl9v" with version 2385
I0907 20:31:00.632907       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-1353/pvc-mdl9v]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0907 20:31:00.633413       1 pv_controller.go:350] synchronizing unbound PersistentVolumeClaim[azuredisk-1353/pvc-mdl9v]: no volume found
I0907 20:31:00.633698       1 pv_controller.go:1446] provisionClaim[azuredisk-1353/pvc-mdl9v]: started
... skipping 273 lines ...
I0907 20:31:23.558592       1 disruption.go:427] updatePod called on pod "azuredisk-volume-tester-kcxqr-7c65c875bd-4pnwp"
I0907 20:31:23.558649       1 disruption.go:490] No PodDisruptionBudgets found for pod azuredisk-volume-tester-kcxqr-7c65c875bd-4pnwp, PodDisruptionBudget controller will avoid syncing.
I0907 20:31:23.558675       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-kcxqr-7c65c875bd-4pnwp"
I0907 20:31:23.558959       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-1353/azuredisk-volume-tester-kcxqr-7c65c875bd", timestamp:time.Time{wall:0xc0be5dc6ddd45f75, ext:883788837677, loc:(*time.Location)(0x731ea80)}}
I0907 20:31:23.559032       1 controller_utils.go:972] Ignoring inactive pod azuredisk-1353/azuredisk-volume-tester-kcxqr-7c65c875bd-wtlnr in state Running, deletion time 2022-09-07 20:31:53 +0000 UTC
I0907 20:31:23.559109       1 replica_set.go:649] Finished syncing ReplicaSet "azuredisk-1353/azuredisk-volume-tester-kcxqr-7c65c875bd" (1.114907ms)
W0907 20:31:23.565156       1 reconciler.go:385] Multi-Attach error for volume "pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9") from node "capz-kddm5f-mp-0000001" Volume is already used by pods azuredisk-1353/azuredisk-volume-tester-kcxqr-7c65c875bd-wtlnr on node capz-kddm5f-mp-0000000
I0907 20:31:23.565366       1 event.go:291] "Event occurred" object="azuredisk-1353/azuredisk-volume-tester-kcxqr-7c65c875bd-4pnwp" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9\" Volume is already used by pod(s) azuredisk-volume-tester-kcxqr-7c65c875bd-wtlnr"
I0907 20:31:24.078776       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0907 20:31:25.172191       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-kddm5f-mp-0000001"
I0907 20:31:28.521747       1 node_lifecycle_controller.go:1047] Node capz-kddm5f-mp-0000001 ReadyCondition updated. Updating timestamp.
I0907 20:31:29.550091       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.MutatingWebhookConfiguration total 0 items received
I0907 20:31:31.170122       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="88.9µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:56492" resp=200
I0907 20:31:33.377066       1 gc_controller.go:161] GC'ing orphaned
... skipping 599 lines ...
I0907 20:34:24.412867       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9]: claim azuredisk-1353/pvc-mdl9v not found
I0907 20:34:24.412883       1 pv_controller.go:1108] reclaimVolume[pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9]: policy is Delete
I0907 20:34:24.412900       1 pv_controller.go:1753] scheduleOperation[delete-pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9[7ae8a1a0-cba9-495d-84a7-b7ae97dd70cf]]
I0907 20:34:24.412907       1 pv_controller.go:1764] operation "delete-pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9[7ae8a1a0-cba9-495d-84a7-b7ae97dd70cf]" is already running, skipping
I0907 20:34:24.414396       1 pv_controller.go:1341] isVolumeReleased[pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9]: volume is released
I0907 20:34:24.414413       1 pv_controller.go:1405] doDeleteVolume [pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9]
I0907 20:34:24.414457       1 pv_controller.go:1260] deletion of volume "pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9) since it's in attaching or detaching state
I0907 20:34:24.414594       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9]: set phase Failed
I0907 20:34:24.414609       1 pv_controller.go:858] updating PersistentVolume[pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9]: set phase Failed
I0907 20:34:24.417543       1 pv_protection_controller.go:205] Got event on PV pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9
I0907 20:34:24.417722       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9" with version 2779
I0907 20:34:24.418024       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9]: phase: Failed, bound to: "azuredisk-1353/pvc-mdl9v (uid: ae170131-36ab-443b-89b0-6ab8bb3f6bb9)", boundByController: true
I0907 20:34:24.418058       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9]: volume is bound to claim azuredisk-1353/pvc-mdl9v
I0907 20:34:24.418112       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9]: claim azuredisk-1353/pvc-mdl9v not found
I0907 20:34:24.418128       1 pv_controller.go:1108] reclaimVolume[pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9]: policy is Delete
I0907 20:34:24.418143       1 pv_controller.go:1753] scheduleOperation[delete-pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9[7ae8a1a0-cba9-495d-84a7-b7ae97dd70cf]]
I0907 20:34:24.418159       1 pv_controller.go:1764] operation "delete-pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9[7ae8a1a0-cba9-495d-84a7-b7ae97dd70cf]" is already running, skipping
I0907 20:34:24.417888       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9" with version 2779
I0907 20:34:24.418183       1 pv_controller.go:879] volume "pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9" entered phase "Failed"
I0907 20:34:24.418197       1 pv_controller.go:901] volume "pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9) since it's in attaching or detaching state
E0907 20:34:24.418249       1 goroutinemap.go:150] Operation for "delete-pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9[7ae8a1a0-cba9-495d-84a7-b7ae97dd70cf]" failed. No retries permitted until 2022-09-07 20:34:24.918220524 +0000 UTC m=+1065.206600768 (durationBeforeRetry 500ms). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9) since it's in attaching or detaching state"
I0907 20:34:24.418510       1 event.go:291] "Event occurred" object="pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9) since it's in attaching or detaching state"
I0907 20:34:25.668310       1 azure_controller_vmss.go:187] azureDisk - update(capz-kddm5f): vm(capz-kddm5f-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9) returned with <nil>
I0907 20:34:25.668383       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9) succeeded
I0907 20:34:25.668395       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9 was detached from node:capz-kddm5f-mp-0000001
I0907 20:34:25.668423       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9") on node "capz-kddm5f-mp-0000001" 
I0907 20:34:27.485822       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-2hmyo9" (9.9µs)
I0907 20:34:31.169145       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="95.401µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:49740" resp=200
I0907 20:34:33.388849       1 gc_controller.go:161] GC'ing orphaned
I0907 20:34:33.388889       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:34:34.252279       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0907 20:34:38.224999       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:34:38.453079       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:34:38.453160       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9" with version 2779
I0907 20:34:38.453205       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9]: phase: Failed, bound to: "azuredisk-1353/pvc-mdl9v (uid: ae170131-36ab-443b-89b0-6ab8bb3f6bb9)", boundByController: true
I0907 20:34:38.453287       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9]: volume is bound to claim azuredisk-1353/pvc-mdl9v
I0907 20:34:38.453312       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9]: claim azuredisk-1353/pvc-mdl9v not found
I0907 20:34:38.453322       1 pv_controller.go:1108] reclaimVolume[pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9]: policy is Delete
I0907 20:34:38.453365       1 pv_controller.go:1753] scheduleOperation[delete-pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9[7ae8a1a0-cba9-495d-84a7-b7ae97dd70cf]]
I0907 20:34:38.453407       1 pv_controller.go:1232] deleteVolumeOperation [pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9] started
I0907 20:34:38.461527       1 pv_controller.go:1341] isVolumeReleased[pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9]: volume is released
I0907 20:34:38.461550       1 pv_controller.go:1405] doDeleteVolume [pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9]
I0907 20:34:41.169892       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="91.401µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:60142" resp=200
I0907 20:34:43.738349       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9
I0907 20:34:43.738414       1 pv_controller.go:1436] volume "pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9" deleted
I0907 20:34:43.738433       1 pv_controller.go:1284] deleteVolumeOperation [pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9]: success
I0907 20:34:43.750644       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9" with version 2808
I0907 20:34:43.750714       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9]: phase: Failed, bound to: "azuredisk-1353/pvc-mdl9v (uid: ae170131-36ab-443b-89b0-6ab8bb3f6bb9)", boundByController: true
I0907 20:34:43.750750       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9]: volume is bound to claim azuredisk-1353/pvc-mdl9v
I0907 20:34:43.750785       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9]: claim azuredisk-1353/pvc-mdl9v not found
I0907 20:34:43.750800       1 pv_controller.go:1108] reclaimVolume[pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9]: policy is Delete
I0907 20:34:43.750820       1 pv_controller.go:1753] scheduleOperation[delete-pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9[7ae8a1a0-cba9-495d-84a7-b7ae97dd70cf]]
I0907 20:34:43.750861       1 pv_protection_controller.go:205] Got event on PV pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9
I0907 20:34:43.750886       1 pv_protection_controller.go:125] Processing PV pvc-ae170131-36ab-443b-89b0-6ab8bb3f6bb9
... skipping 153 lines ...
I0907 20:34:53.239383       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-kcxqr-7c65c875bd.1712ae1eff761c71, uid 21ec462b-e1c4-45f2-a339-59498b4af6cc, event type delete
I0907 20:34:53.242491       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-kcxqr-7c65c875bd.1712ae2454ac1950, uid e3bfb067-f6c4-4834-b44c-05dd588a9e90, event type delete
I0907 20:34:53.249553       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-kcxqr.1712ae1efeaa8b36, uid 316d89d1-7900-43da-861d-8111febcf7cb, event type delete
I0907 20:34:53.252307       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name pvc-mdl9v.1712ae1efc833abf, uid 4192b020-60a4-46b1-9e40-4893bda5c955, event type delete
I0907 20:34:53.258660       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name pvc-mdl9v.1712ae1f8ecd1b18, uid 35fa961f-2cbe-416c-9a2d-de1d467154e0, event type delete
I0907 20:34:53.285434       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1353, name default-token-br6k8, uid 85946dad-4728-420b-996f-cb613c93f9c0, event type delete
E0907 20:34:53.303083       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1353/default: secrets "default-token-qh5l2" is forbidden: unable to create new content in namespace azuredisk-1353 because it is being terminated
I0907 20:34:53.346775       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1353/default), service account deleted, removing tokens
I0907 20:34:53.347079       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1353, name default, uid a2eb69e7-07e9-4ed8-857a-66bb06813d4a, event type delete
I0907 20:34:53.347137       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1353" (4.1µs)
I0907 20:34:53.366158       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1353" (3.2µs)
I0907 20:34:53.367590       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-1353, estimate: 0, errors: <nil>
I0907 20:34:53.376903       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-1353" (355.318022ms)
... skipping 391 lines ...
I0907 20:35:08.459581       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-59/pvc-rjgxm] status: set phase Bound
I0907 20:35:08.459666       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-59/pvc-rjgxm] status: phase Bound already set
I0907 20:35:08.459728       1 pv_controller.go:1038] volume "pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d" bound to claim "azuredisk-59/pvc-rjgxm"
I0907 20:35:08.459808       1 pv_controller.go:1039] volume "pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d" status after binding: phase: Bound, bound to: "azuredisk-59/pvc-rjgxm (uid: 17d50d61-ec81-4d49-901a-ac18e7f5875d)", boundByController: true
I0907 20:35:08.459871       1 pv_controller.go:1040] claim "azuredisk-59/pvc-rjgxm" status after binding: phase: Bound, bound to: "pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d", bindCompleted: true, boundByController: true
I0907 20:35:08.494344       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-8266, name default-token-6ljzv, uid fcc7fb2c-bfe1-4a9e-bff8-44deb58b2726, event type delete
E0907 20:35:08.507456       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8266/default: secrets "default-token-gmvxm" is forbidden: unable to create new content in namespace azuredisk-8266 because it is being terminated
I0907 20:35:08.557920       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-8266, name default, uid 33befd75-f81d-4c4f-a542-cc3ecd1a70dd, event type delete
I0907 20:35:08.557968       1 tokens_controller.go:252] syncServiceAccount(azuredisk-8266/default), service account deleted, removing tokens
I0907 20:35:08.557986       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8266" (2.1µs)
I0907 20:35:08.591594       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-8266, name kube-root-ca.crt, uid 236145d4-676d-4af3-9a51-3db03bc99b8b, event type delete
I0907 20:35:08.593606       1 publisher.go:181] Finished syncing namespace "azuredisk-8266" (1.658811ms)
I0907 20:35:08.621859       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8266" (4.7µs)
... skipping 10 lines ...
I0907 20:35:09.112851       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-4376, estimate: 0, errors: <nil>
I0907 20:35:09.121435       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-4376" (152.617961ms)
I0907 20:35:09.511789       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7996
I0907 20:35:09.560340       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-7996, name kube-root-ca.crt, uid 999b7644-be85-46b0-a0b6-58d7534b0285, event type delete
I0907 20:35:09.562558       1 publisher.go:181] Finished syncing namespace "azuredisk-7996" (2.193114ms)
I0907 20:35:09.578643       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-7996, name default-token-s2mnk, uid 563afd7f-2773-4c09-a124-e20fcf27e1dc, event type delete
E0907 20:35:09.590694       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7996/default: secrets "default-token-5q48f" is forbidden: unable to create new content in namespace azuredisk-7996 because it is being terminated
I0907 20:35:09.638598       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-7996, name default, uid 6687d653-85bd-4f32-a442-de86d0554a89, event type delete
I0907 20:35:09.638688       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7996" (5.7µs)
I0907 20:35:09.639735       1 tokens_controller.go:252] syncServiceAccount(azuredisk-7996/default), service account deleted, removing tokens
I0907 20:35:09.654215       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7996" (2.2µs)
I0907 20:35:09.655736       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-7996, estimate: 0, errors: <nil>
I0907 20:35:09.663419       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-7996" (155.150177ms)
... skipping 441 lines ...
I0907 20:36:14.467336       1 pv_controller.go:1108] reclaimVolume[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: policy is Delete
I0907 20:36:14.467398       1 pv_controller.go:1753] scheduleOperation[delete-pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d[8924a86a-e26e-443a-979a-a5392488c562]]
I0907 20:36:14.467474       1 pv_controller.go:1764] operation "delete-pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d[8924a86a-e26e-443a-979a-a5392488c562]" is already running, skipping
I0907 20:36:14.467567       1 pv_controller.go:1232] deleteVolumeOperation [pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d] started
I0907 20:36:14.471295       1 pv_controller.go:1341] isVolumeReleased[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: volume is released
I0907 20:36:14.471483       1 pv_controller.go:1405] doDeleteVolume [pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]
I0907 20:36:14.517364       1 pv_controller.go:1260] deletion of volume "pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_0), could not be deleted
I0907 20:36:14.517389       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: set phase Failed
I0907 20:36:14.517400       1 pv_controller.go:858] updating PersistentVolume[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: set phase Failed
I0907 20:36:14.521808       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d" with version 3083
I0907 20:36:14.521845       1 pv_controller.go:879] volume "pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d" entered phase "Failed"
I0907 20:36:14.521855       1 pv_controller.go:901] volume "pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_0), could not be deleted
E0907 20:36:14.521915       1 goroutinemap.go:150] Operation for "delete-pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d[8924a86a-e26e-443a-979a-a5392488c562]" failed. No retries permitted until 2022-09-07 20:36:15.02188472 +0000 UTC m=+1175.310265064 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_0), could not be deleted"
I0907 20:36:14.522410       1 event.go:291] "Event occurred" object="pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_0), could not be deleted"
I0907 20:36:14.522877       1 pv_protection_controller.go:205] Got event on PV pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d
I0907 20:36:14.523029       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d" with version 3083
I0907 20:36:14.523171       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: phase: Failed, bound to: "azuredisk-59/pvc-rjgxm (uid: 17d50d61-ec81-4d49-901a-ac18e7f5875d)", boundByController: true
I0907 20:36:14.523300       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: volume is bound to claim azuredisk-59/pvc-rjgxm
I0907 20:36:14.523406       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: claim azuredisk-59/pvc-rjgxm not found
I0907 20:36:14.523501       1 pv_controller.go:1108] reclaimVolume[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: policy is Delete
I0907 20:36:14.523584       1 pv_controller.go:1753] scheduleOperation[delete-pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d[8924a86a-e26e-443a-979a-a5392488c562]]
I0907 20:36:14.523678       1 pv_controller.go:1766] operation "delete-pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d[8924a86a-e26e-443a-979a-a5392488c562]" postponed due to exponential backoff
I0907 20:36:18.303557       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Pod total 44 items received
... skipping 6 lines ...
I0907 20:36:23.461895       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a1c7bfc0-f421-4bbb-ad52-9dcefe83c225]: volume is bound to claim azuredisk-59/pvc-xrvwt
I0907 20:36:23.461998       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-a1c7bfc0-f421-4bbb-ad52-9dcefe83c225]: claim azuredisk-59/pvc-xrvwt found: phase: Bound, bound to: "pvc-a1c7bfc0-f421-4bbb-ad52-9dcefe83c225", bindCompleted: true, boundByController: true
I0907 20:36:23.462057       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-a1c7bfc0-f421-4bbb-ad52-9dcefe83c225]: all is bound
I0907 20:36:23.462098       1 pv_controller.go:858] updating PersistentVolume[pvc-a1c7bfc0-f421-4bbb-ad52-9dcefe83c225]: set phase Bound
I0907 20:36:23.462145       1 pv_controller.go:861] updating PersistentVolume[pvc-a1c7bfc0-f421-4bbb-ad52-9dcefe83c225]: phase Bound already set
I0907 20:36:23.462197       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d" with version 3083
I0907 20:36:23.462262       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: phase: Failed, bound to: "azuredisk-59/pvc-rjgxm (uid: 17d50d61-ec81-4d49-901a-ac18e7f5875d)", boundByController: true
I0907 20:36:23.462357       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: volume is bound to claim azuredisk-59/pvc-rjgxm
I0907 20:36:23.462435       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: claim azuredisk-59/pvc-rjgxm not found
I0907 20:36:23.462475       1 pv_controller.go:1108] reclaimVolume[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: policy is Delete
I0907 20:36:23.462539       1 pv_controller.go:1753] scheduleOperation[delete-pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d[8924a86a-e26e-443a-979a-a5392488c562]]
I0907 20:36:23.462603       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e6594975-52cb-4cf4-b4db-bf8983555230" with version 2940
I0907 20:36:23.462651       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e6594975-52cb-4cf4-b4db-bf8983555230]: phase: Bound, bound to: "azuredisk-59/pvc-mchwc (uid: e6594975-52cb-4cf4-b4db-bf8983555230)", boundByController: true
... skipping 34 lines ...
I0907 20:36:23.464735       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-59/pvc-mchwc] status: phase Bound already set
I0907 20:36:23.464773       1 pv_controller.go:1038] volume "pvc-e6594975-52cb-4cf4-b4db-bf8983555230" bound to claim "azuredisk-59/pvc-mchwc"
I0907 20:36:23.464812       1 pv_controller.go:1039] volume "pvc-e6594975-52cb-4cf4-b4db-bf8983555230" status after binding: phase: Bound, bound to: "azuredisk-59/pvc-mchwc (uid: e6594975-52cb-4cf4-b4db-bf8983555230)", boundByController: true
I0907 20:36:23.464860       1 pv_controller.go:1040] claim "azuredisk-59/pvc-mchwc" status after binding: phase: Bound, bound to: "pvc-e6594975-52cb-4cf4-b4db-bf8983555230", bindCompleted: true, boundByController: true
I0907 20:36:23.473432       1 pv_controller.go:1341] isVolumeReleased[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: volume is released
I0907 20:36:23.473634       1 pv_controller.go:1405] doDeleteVolume [pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]
I0907 20:36:23.518621       1 pv_controller.go:1260] deletion of volume "pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_0), could not be deleted
I0907 20:36:23.518729       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: set phase Failed
I0907 20:36:23.518769       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: phase Failed already set
E0907 20:36:23.518920       1 goroutinemap.go:150] Operation for "delete-pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d[8924a86a-e26e-443a-979a-a5392488c562]" failed. No retries permitted until 2022-09-07 20:36:24.518821643 +0000 UTC m=+1184.807201987 (durationBeforeRetry 1s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_0), could not be deleted"
I0907 20:36:24.233349       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-kddm5f-mp-0000000"
I0907 20:36:24.234500       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-e6594975-52cb-4cf4-b4db-bf8983555230 to the node "capz-kddm5f-mp-0000000" mounted false
I0907 20:36:24.234529       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d to the node "capz-kddm5f-mp-0000000" mounted false
I0907 20:36:24.234539       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-a1c7bfc0-f421-4bbb-ad52-9dcefe83c225 to the node "capz-kddm5f-mp-0000000" mounted false
I0907 20:36:24.256184       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":[{\"devicePath\":\"1\",\"name\":\"kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d\"},{\"devicePath\":\"2\",\"name\":\"kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-a1c7bfc0-f421-4bbb-ad52-9dcefe83c225\"}]}}" for node "capz-kddm5f-mp-0000000" succeeded. VolumesAttached: [{kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d 1} {kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-a1c7bfc0-f421-4bbb-ad52-9dcefe83c225 2}]
I0907 20:36:24.256576       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-e6594975-52cb-4cf4-b4db-bf8983555230" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-e6594975-52cb-4cf4-b4db-bf8983555230") on node "capz-kddm5f-mp-0000000" 
... skipping 65 lines ...
I0907 20:36:38.463641       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a1c7bfc0-f421-4bbb-ad52-9dcefe83c225]: volume is bound to claim azuredisk-59/pvc-xrvwt
I0907 20:36:38.463706       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-a1c7bfc0-f421-4bbb-ad52-9dcefe83c225]: claim azuredisk-59/pvc-xrvwt found: phase: Bound, bound to: "pvc-a1c7bfc0-f421-4bbb-ad52-9dcefe83c225", bindCompleted: true, boundByController: true
I0907 20:36:38.463729       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-a1c7bfc0-f421-4bbb-ad52-9dcefe83c225]: all is bound
I0907 20:36:38.463761       1 pv_controller.go:858] updating PersistentVolume[pvc-a1c7bfc0-f421-4bbb-ad52-9dcefe83c225]: set phase Bound
I0907 20:36:38.463771       1 pv_controller.go:861] updating PersistentVolume[pvc-a1c7bfc0-f421-4bbb-ad52-9dcefe83c225]: phase Bound already set
I0907 20:36:38.463806       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d" with version 3083
I0907 20:36:38.463853       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: phase: Failed, bound to: "azuredisk-59/pvc-rjgxm (uid: 17d50d61-ec81-4d49-901a-ac18e7f5875d)", boundByController: true
I0907 20:36:38.463887       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: volume is bound to claim azuredisk-59/pvc-rjgxm
I0907 20:36:38.463910       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: claim azuredisk-59/pvc-rjgxm not found
I0907 20:36:38.463925       1 pv_controller.go:1108] reclaimVolume[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: policy is Delete
I0907 20:36:38.463947       1 pv_controller.go:1753] scheduleOperation[delete-pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d[8924a86a-e26e-443a-979a-a5392488c562]]
I0907 20:36:38.463979       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e6594975-52cb-4cf4-b4db-bf8983555230" with version 2940
I0907 20:36:38.464002       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e6594975-52cb-4cf4-b4db-bf8983555230]: phase: Bound, bound to: "azuredisk-59/pvc-mchwc (uid: e6594975-52cb-4cf4-b4db-bf8983555230)", boundByController: true
... skipping 2 lines ...
I0907 20:36:38.464068       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-e6594975-52cb-4cf4-b4db-bf8983555230]: all is bound
I0907 20:36:38.464076       1 pv_controller.go:858] updating PersistentVolume[pvc-e6594975-52cb-4cf4-b4db-bf8983555230]: set phase Bound
I0907 20:36:38.464086       1 pv_controller.go:861] updating PersistentVolume[pvc-e6594975-52cb-4cf4-b4db-bf8983555230]: phase Bound already set
I0907 20:36:38.464122       1 pv_controller.go:1232] deleteVolumeOperation [pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d] started
I0907 20:36:38.473491       1 pv_controller.go:1341] isVolumeReleased[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: volume is released
I0907 20:36:38.473513       1 pv_controller.go:1405] doDeleteVolume [pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]
I0907 20:36:38.500705       1 pv_controller.go:1260] deletion of volume "pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_0), could not be deleted
I0907 20:36:38.500740       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: set phase Failed
I0907 20:36:38.500751       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: phase Failed already set
E0907 20:36:38.500792       1 goroutinemap.go:150] Operation for "delete-pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d[8924a86a-e26e-443a-979a-a5392488c562]" failed. No retries permitted until 2022-09-07 20:36:40.500761829 +0000 UTC m=+1200.789142173 (durationBeforeRetry 2s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_0), could not be deleted"
I0907 20:36:39.565093       1 azure_controller_vmss.go:187] azureDisk - update(capz-kddm5f): vm(capz-kddm5f-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-e6594975-52cb-4cf4-b4db-bf8983555230) returned with <nil>
I0907 20:36:39.565182       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-e6594975-52cb-4cf4-b4db-bf8983555230) succeeded
I0907 20:36:39.565198       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-e6594975-52cb-4cf4-b4db-bf8983555230 was detached from node:capz-kddm5f-mp-0000000
I0907 20:36:39.565466       1 azure_vmss.go:186] Couldn't find VMSS VM with nodeName capz-kddm5f-mp-0000000, refreshing the cache
I0907 20:36:39.565700       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-e6594975-52cb-4cf4-b4db-bf8983555230" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-e6594975-52cb-4cf4-b4db-bf8983555230") on node "capz-kddm5f-mp-0000000" 
I0907 20:36:39.634764       1 azure_controller_vmss.go:145] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d"
... skipping 52 lines ...
I0907 20:36:53.464838       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a1c7bfc0-f421-4bbb-ad52-9dcefe83c225]: volume is bound to claim azuredisk-59/pvc-xrvwt
I0907 20:36:53.464855       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-a1c7bfc0-f421-4bbb-ad52-9dcefe83c225]: claim azuredisk-59/pvc-xrvwt found: phase: Bound, bound to: "pvc-a1c7bfc0-f421-4bbb-ad52-9dcefe83c225", bindCompleted: true, boundByController: true
I0907 20:36:53.464870       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-a1c7bfc0-f421-4bbb-ad52-9dcefe83c225]: all is bound
I0907 20:36:53.464882       1 pv_controller.go:858] updating PersistentVolume[pvc-a1c7bfc0-f421-4bbb-ad52-9dcefe83c225]: set phase Bound
I0907 20:36:53.464891       1 pv_controller.go:861] updating PersistentVolume[pvc-a1c7bfc0-f421-4bbb-ad52-9dcefe83c225]: phase Bound already set
I0907 20:36:53.465017       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d" with version 3083
I0907 20:36:53.465039       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: phase: Failed, bound to: "azuredisk-59/pvc-rjgxm (uid: 17d50d61-ec81-4d49-901a-ac18e7f5875d)", boundByController: true
I0907 20:36:53.465083       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: volume is bound to claim azuredisk-59/pvc-rjgxm
I0907 20:36:53.465104       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: claim azuredisk-59/pvc-rjgxm not found
I0907 20:36:53.465113       1 pv_controller.go:1108] reclaimVolume[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: policy is Delete
I0907 20:36:53.465133       1 pv_controller.go:1753] scheduleOperation[delete-pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d[8924a86a-e26e-443a-979a-a5392488c562]]
I0907 20:36:53.465172       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e6594975-52cb-4cf4-b4db-bf8983555230" with version 2940
I0907 20:36:53.465191       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e6594975-52cb-4cf4-b4db-bf8983555230]: phase: Bound, bound to: "azuredisk-59/pvc-mchwc (uid: e6594975-52cb-4cf4-b4db-bf8983555230)", boundByController: true
... skipping 2 lines ...
I0907 20:36:53.465261       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-e6594975-52cb-4cf4-b4db-bf8983555230]: all is bound
I0907 20:36:53.465268       1 pv_controller.go:858] updating PersistentVolume[pvc-e6594975-52cb-4cf4-b4db-bf8983555230]: set phase Bound
I0907 20:36:53.465278       1 pv_controller.go:861] updating PersistentVolume[pvc-e6594975-52cb-4cf4-b4db-bf8983555230]: phase Bound already set
I0907 20:36:53.465309       1 pv_controller.go:1232] deleteVolumeOperation [pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d] started
I0907 20:36:53.473500       1 pv_controller.go:1341] isVolumeReleased[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: volume is released
I0907 20:36:53.473526       1 pv_controller.go:1405] doDeleteVolume [pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]
I0907 20:36:53.473576       1 pv_controller.go:1260] deletion of volume "pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d) since it's in attaching or detaching state
I0907 20:36:53.473596       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: set phase Failed
I0907 20:36:53.473608       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: phase Failed already set
E0907 20:36:53.473673       1 goroutinemap.go:150] Operation for "delete-pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d[8924a86a-e26e-443a-979a-a5392488c562]" failed. No retries permitted until 2022-09-07 20:36:57.473619539 +0000 UTC m=+1217.761999783 (durationBeforeRetry 4s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d) since it's in attaching or detaching state"
I0907 20:36:54.329077       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0907 20:36:54.913225       1 azure_controller_vmss.go:187] azureDisk - update(capz-kddm5f): vm(capz-kddm5f-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d) returned with <nil>
I0907 20:36:54.913327       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d) succeeded
I0907 20:36:54.913349       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d was detached from node:capz-kddm5f-mp-0000000
I0907 20:36:54.913376       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d") on node "capz-kddm5f-mp-0000000" 
I0907 20:36:54.913604       1 azure_vmss.go:186] Couldn't find VMSS VM with nodeName capz-kddm5f-mp-0000000, refreshing the cache
... skipping 9 lines ...
I0907 20:37:08.466309       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a1c7bfc0-f421-4bbb-ad52-9dcefe83c225]: volume is bound to claim azuredisk-59/pvc-xrvwt
I0907 20:37:08.466332       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-a1c7bfc0-f421-4bbb-ad52-9dcefe83c225]: claim azuredisk-59/pvc-xrvwt found: phase: Bound, bound to: "pvc-a1c7bfc0-f421-4bbb-ad52-9dcefe83c225", bindCompleted: true, boundByController: true
I0907 20:37:08.466353       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-a1c7bfc0-f421-4bbb-ad52-9dcefe83c225]: all is bound
I0907 20:37:08.466370       1 pv_controller.go:858] updating PersistentVolume[pvc-a1c7bfc0-f421-4bbb-ad52-9dcefe83c225]: set phase Bound
I0907 20:37:08.466381       1 pv_controller.go:861] updating PersistentVolume[pvc-a1c7bfc0-f421-4bbb-ad52-9dcefe83c225]: phase Bound already set
I0907 20:37:08.466400       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d" with version 3083
I0907 20:37:08.466422       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: phase: Failed, bound to: "azuredisk-59/pvc-rjgxm (uid: 17d50d61-ec81-4d49-901a-ac18e7f5875d)", boundByController: true
I0907 20:37:08.466444       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: volume is bound to claim azuredisk-59/pvc-rjgxm
I0907 20:37:08.466470       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: claim azuredisk-59/pvc-rjgxm not found
I0907 20:37:08.466482       1 pv_controller.go:1108] reclaimVolume[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: policy is Delete
I0907 20:37:08.466502       1 pv_controller.go:1753] scheduleOperation[delete-pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d[8924a86a-e26e-443a-979a-a5392488c562]]
I0907 20:37:08.466524       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e6594975-52cb-4cf4-b4db-bf8983555230" with version 2940
I0907 20:37:08.466602       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e6594975-52cb-4cf4-b4db-bf8983555230]: phase: Bound, bound to: "azuredisk-59/pvc-mchwc (uid: e6594975-52cb-4cf4-b4db-bf8983555230)", boundByController: true
... skipping 47 lines ...
I0907 20:37:13.684431       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d
I0907 20:37:13.684485       1 pv_controller.go:1436] volume "pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d" deleted
I0907 20:37:13.684503       1 pv_controller.go:1284] deleteVolumeOperation [pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: success
I0907 20:37:13.697824       1 pv_protection_controller.go:205] Got event on PV pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d
I0907 20:37:13.697867       1 pv_protection_controller.go:125] Processing PV pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d
I0907 20:37:13.698506       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d" with version 3173
I0907 20:37:13.698545       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: phase: Failed, bound to: "azuredisk-59/pvc-rjgxm (uid: 17d50d61-ec81-4d49-901a-ac18e7f5875d)", boundByController: true
I0907 20:37:13.698574       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: volume is bound to claim azuredisk-59/pvc-rjgxm
I0907 20:37:13.698593       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: claim azuredisk-59/pvc-rjgxm not found
I0907 20:37:13.698603       1 pv_controller.go:1108] reclaimVolume[pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d]: policy is Delete
I0907 20:37:13.698621       1 pv_controller.go:1753] scheduleOperation[delete-pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d[8924a86a-e26e-443a-979a-a5392488c562]]
I0907 20:37:13.698647       1 pv_controller.go:1232] deleteVolumeOperation [pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d] started
I0907 20:37:13.706378       1 pv_controller.go:1244] Volume "pvc-17d50d61-ec81-4d49-901a-ac18e7f5875d" is already being deleted
... skipping 381 lines ...
I0907 20:37:40.547041       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-59, name pvc-rjgxm.1712ae588934aaa7, uid eb9bcf24-4299-4800-98c3-e7ed8ba8cedc, event type delete
I0907 20:37:40.550504       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-59, name pvc-xrvwt.1712ae57e7f9ae73, uid 656e7be3-2e59-47ae-a2cb-2c3e6340d4a0, event type delete
I0907 20:37:40.555402       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-59, name pvc-xrvwt.1712ae5881bf6e3f, uid 1595105c-8b0c-40b2-b753-e0a2d5dab5a5, event type delete
I0907 20:37:40.569608       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-59, name default-token-qp5sc, uid 413f81c7-f22d-4276-9d41-5253f93c6912, event type delete
I0907 20:37:40.581244       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-59, name kube-root-ca.crt, uid f17623b3-8db5-413d-939a-9d4b98a1343f, event type delete
I0907 20:37:40.589595       1 publisher.go:181] Finished syncing namespace "azuredisk-59" (8.122452ms)
E0907 20:37:40.591902       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-59/default: secrets "default-token-768js" is forbidden: unable to create new content in namespace azuredisk-59 because it is being terminated
I0907 20:37:40.627407       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-59, name default, uid b14a7abf-ec48-46c7-a860-44a629ae1edd, event type delete
I0907 20:37:40.627407       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-59" (4.4µs)
I0907 20:37:40.627535       1 tokens_controller.go:252] syncServiceAccount(azuredisk-59/default), service account deleted, removing tokens
I0907 20:37:40.676072       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-59, estimate: 0, errors: <nil>
I0907 20:37:40.676795       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-59" (3.9µs)
I0907 20:37:40.686025       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-59" (244.130862ms)
... skipping 220 lines ...
I0907 20:38:14.986226       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799]: claim azuredisk-2546/pvc-8fdc7 not found
I0907 20:38:14.986275       1 pv_controller.go:1108] reclaimVolume[pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799]: policy is Delete
I0907 20:38:14.986332       1 pv_controller.go:1753] scheduleOperation[delete-pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799[3c2ff19f-2710-4673-9b1f-26388cc3306a]]
I0907 20:38:14.986368       1 pv_controller.go:1764] operation "delete-pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799[3c2ff19f-2710-4673-9b1f-26388cc3306a]" is already running, skipping
I0907 20:38:14.988348       1 pv_controller.go:1341] isVolumeReleased[pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799]: volume is released
I0907 20:38:14.988370       1 pv_controller.go:1405] doDeleteVolume [pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799]
I0907 20:38:15.015486       1 pv_controller.go:1260] deletion of volume "pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_1), could not be deleted
I0907 20:38:15.015514       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799]: set phase Failed
I0907 20:38:15.015523       1 pv_controller.go:858] updating PersistentVolume[pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799]: set phase Failed
I0907 20:38:15.019626       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799" with version 3347
I0907 20:38:15.019656       1 pv_controller.go:879] volume "pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799" entered phase "Failed"
I0907 20:38:15.019970       1 pv_protection_controller.go:205] Got event on PV pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799
I0907 20:38:15.020018       1 pv_controller.go:901] volume "pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_1), could not be deleted
I0907 20:38:15.020141       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799" with version 3347
I0907 20:38:15.020412       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799]: phase: Failed, bound to: "azuredisk-2546/pvc-8fdc7 (uid: 4464ea21-cf51-4f4d-8ce8-dcda3e836799)", boundByController: true
I0907 20:38:15.020469       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799]: volume is bound to claim azuredisk-2546/pvc-8fdc7
I0907 20:38:15.020489       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799]: claim azuredisk-2546/pvc-8fdc7 not found
I0907 20:38:15.020498       1 pv_controller.go:1108] reclaimVolume[pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799]: policy is Delete
I0907 20:38:15.020512       1 pv_controller.go:1753] scheduleOperation[delete-pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799[3c2ff19f-2710-4673-9b1f-26388cc3306a]]
E0907 20:38:15.020228       1 goroutinemap.go:150] Operation for "delete-pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799[3c2ff19f-2710-4673-9b1f-26388cc3306a]" failed. No retries permitted until 2022-09-07 20:38:15.520076238 +0000 UTC m=+1295.808456582 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_1), could not be deleted"
I0907 20:38:15.020647       1 pv_controller.go:1766] operation "delete-pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799[3c2ff19f-2710-4673-9b1f-26388cc3306a]" postponed due to exponential backoff
I0907 20:38:15.020354       1 event.go:291] "Event occurred" object="pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_1), could not be deleted"
I0907 20:38:15.529832       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-kddm5f-mp-0000001"
I0907 20:38:15.530017       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-90470455-732b-41ef-8ee1-9ff20411e027 to the node "capz-kddm5f-mp-0000001" mounted false
I0907 20:38:15.530036       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799 to the node "capz-kddm5f-mp-0000001" mounted true
I0907 20:38:15.580560       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-kddm5f-mp-0000001"
... skipping 12 lines ...
I0907 20:38:20.207181       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ResourceQuota total 6 items received
I0907 20:38:21.169308       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="97.2µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:37214" resp=200
I0907 20:38:23.195750       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:38:23.236621       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:38:23.469846       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:38:23.470063       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799" with version 3347
I0907 20:38:23.470157       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799]: phase: Failed, bound to: "azuredisk-2546/pvc-8fdc7 (uid: 4464ea21-cf51-4f4d-8ce8-dcda3e836799)", boundByController: true
I0907 20:38:23.470267       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799]: volume is bound to claim azuredisk-2546/pvc-8fdc7
I0907 20:38:23.470344       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799]: claim azuredisk-2546/pvc-8fdc7 not found
I0907 20:38:23.470398       1 pv_controller.go:1108] reclaimVolume[pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799]: policy is Delete
I0907 20:38:23.470455       1 pv_controller.go:1753] scheduleOperation[delete-pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799[3c2ff19f-2710-4673-9b1f-26388cc3306a]]
I0907 20:38:23.470522       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-90470455-732b-41ef-8ee1-9ff20411e027" with version 3249
I0907 20:38:23.470567       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-2546/pvc-2vqhh" with version 3251
... skipping 18 lines ...
I0907 20:38:23.475223       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-2546/pvc-2vqhh] status: phase Bound already set
I0907 20:38:23.475262       1 pv_controller.go:1038] volume "pvc-90470455-732b-41ef-8ee1-9ff20411e027" bound to claim "azuredisk-2546/pvc-2vqhh"
I0907 20:38:23.475284       1 pv_controller.go:1039] volume "pvc-90470455-732b-41ef-8ee1-9ff20411e027" status after binding: phase: Bound, bound to: "azuredisk-2546/pvc-2vqhh (uid: 90470455-732b-41ef-8ee1-9ff20411e027)", boundByController: true
I0907 20:38:23.475370       1 pv_controller.go:1040] claim "azuredisk-2546/pvc-2vqhh" status after binding: phase: Bound, bound to: "pvc-90470455-732b-41ef-8ee1-9ff20411e027", bindCompleted: true, boundByController: true
I0907 20:38:23.481685       1 pv_controller.go:1341] isVolumeReleased[pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799]: volume is released
I0907 20:38:23.481835       1 pv_controller.go:1405] doDeleteVolume [pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799]
I0907 20:38:23.522646       1 pv_controller.go:1260] deletion of volume "pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_1), could not be deleted
I0907 20:38:23.522680       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799]: set phase Failed
I0907 20:38:23.522692       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799]: phase Failed already set
E0907 20:38:23.522736       1 goroutinemap.go:150] Operation for "delete-pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799[3c2ff19f-2710-4673-9b1f-26388cc3306a]" failed. No retries permitted until 2022-09-07 20:38:24.522704379 +0000 UTC m=+1304.811084623 (durationBeforeRetry 1s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_1), could not be deleted"
I0907 20:38:24.385254       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0907 20:38:25.545512       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-kddm5f-mp-0000001"
I0907 20:38:25.545557       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799 to the node "capz-kddm5f-mp-0000001" mounted false
I0907 20:38:25.545569       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-90470455-732b-41ef-8ee1-9ff20411e027 to the node "capz-kddm5f-mp-0000001" mounted false
I0907 20:38:25.648279       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-kddm5f-mp-0000001"
I0907 20:38:25.648457       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-90470455-732b-41ef-8ee1-9ff20411e027 to the node "capz-kddm5f-mp-0000001" mounted false
... skipping 23 lines ...
I0907 20:38:38.237173       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:38:38.470454       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:38:38.470800       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799" with version 3347
I0907 20:38:38.470885       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-2546/pvc-2vqhh" with version 3251
I0907 20:38:38.470909       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-2546/pvc-2vqhh]: phase: Bound, bound to: "pvc-90470455-732b-41ef-8ee1-9ff20411e027", bindCompleted: true, boundByController: true
I0907 20:38:38.470953       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-2546/pvc-2vqhh]: volume "pvc-90470455-732b-41ef-8ee1-9ff20411e027" found: phase: Bound, bound to: "azuredisk-2546/pvc-2vqhh (uid: 90470455-732b-41ef-8ee1-9ff20411e027)", boundByController: true
I0907 20:38:38.470956       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799]: phase: Failed, bound to: "azuredisk-2546/pvc-8fdc7 (uid: 4464ea21-cf51-4f4d-8ce8-dcda3e836799)", boundByController: true
I0907 20:38:38.470963       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-2546/pvc-2vqhh]: claim is already correctly bound
I0907 20:38:38.470973       1 pv_controller.go:1012] binding volume "pvc-90470455-732b-41ef-8ee1-9ff20411e027" to claim "azuredisk-2546/pvc-2vqhh"
I0907 20:38:38.470984       1 pv_controller.go:910] updating PersistentVolume[pvc-90470455-732b-41ef-8ee1-9ff20411e027]: binding to "azuredisk-2546/pvc-2vqhh"
I0907 20:38:38.470995       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799]: volume is bound to claim azuredisk-2546/pvc-8fdc7
I0907 20:38:38.471005       1 pv_controller.go:922] updating PersistentVolume[pvc-90470455-732b-41ef-8ee1-9ff20411e027]: already bound to "azuredisk-2546/pvc-2vqhh"
I0907 20:38:38.471014       1 pv_controller.go:858] updating PersistentVolume[pvc-90470455-732b-41ef-8ee1-9ff20411e027]: set phase Bound
... skipping 15 lines ...
I0907 20:38:38.471160       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-90470455-732b-41ef-8ee1-9ff20411e027]: claim azuredisk-2546/pvc-2vqhh found: phase: Bound, bound to: "pvc-90470455-732b-41ef-8ee1-9ff20411e027", bindCompleted: true, boundByController: true
I0907 20:38:38.471173       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-90470455-732b-41ef-8ee1-9ff20411e027]: all is bound
I0907 20:38:38.471196       1 pv_controller.go:858] updating PersistentVolume[pvc-90470455-732b-41ef-8ee1-9ff20411e027]: set phase Bound
I0907 20:38:38.471203       1 pv_controller.go:861] updating PersistentVolume[pvc-90470455-732b-41ef-8ee1-9ff20411e027]: phase Bound already set
I0907 20:38:38.478923       1 pv_controller.go:1341] isVolumeReleased[pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799]: volume is released
I0907 20:38:38.478946       1 pv_controller.go:1405] doDeleteVolume [pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799]
I0907 20:38:38.478987       1 pv_controller.go:1260] deletion of volume "pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799) since it's in attaching or detaching state
I0907 20:38:38.479125       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799]: set phase Failed
I0907 20:38:38.479139       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799]: phase Failed already set
E0907 20:38:38.479205       1 goroutinemap.go:150] Operation for "delete-pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799[3c2ff19f-2710-4673-9b1f-26388cc3306a]" failed. No retries permitted until 2022-09-07 20:38:40.479154254 +0000 UTC m=+1320.767534498 (durationBeforeRetry 2s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799) since it's in attaching or detaching state"
I0907 20:38:41.169043       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="85.601µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:48790" resp=200
I0907 20:38:51.169770       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="137.201µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:42164" resp=200
I0907 20:38:51.530341       1 azure_controller_vmss.go:187] azureDisk - update(capz-kddm5f): vm(capz-kddm5f-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799) returned with <nil>
I0907 20:38:51.530413       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799) succeeded
I0907 20:38:51.530426       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799 was detached from node:capz-kddm5f-mp-0000001
I0907 20:38:51.530454       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799") on node "capz-kddm5f-mp-0000001" 
... skipping 16 lines ...
I0907 20:38:53.472288       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-2546/pvc-2vqhh] status: set phase Bound
I0907 20:38:53.472314       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-2546/pvc-2vqhh] status: phase Bound already set
I0907 20:38:53.472324       1 pv_controller.go:1038] volume "pvc-90470455-732b-41ef-8ee1-9ff20411e027" bound to claim "azuredisk-2546/pvc-2vqhh"
I0907 20:38:53.472338       1 pv_controller.go:1039] volume "pvc-90470455-732b-41ef-8ee1-9ff20411e027" status after binding: phase: Bound, bound to: "azuredisk-2546/pvc-2vqhh (uid: 90470455-732b-41ef-8ee1-9ff20411e027)", boundByController: true
I0907 20:38:53.472352       1 pv_controller.go:1040] claim "azuredisk-2546/pvc-2vqhh" status after binding: phase: Bound, bound to: "pvc-90470455-732b-41ef-8ee1-9ff20411e027", bindCompleted: true, boundByController: true
I0907 20:38:53.471953       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799" with version 3347
I0907 20:38:53.472410       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799]: phase: Failed, bound to: "azuredisk-2546/pvc-8fdc7 (uid: 4464ea21-cf51-4f4d-8ce8-dcda3e836799)", boundByController: true
I0907 20:38:53.472438       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799]: volume is bound to claim azuredisk-2546/pvc-8fdc7
I0907 20:38:53.472772       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799]: claim azuredisk-2546/pvc-8fdc7 not found
I0907 20:38:53.472795       1 pv_controller.go:1108] reclaimVolume[pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799]: policy is Delete
I0907 20:38:53.472836       1 pv_controller.go:1753] scheduleOperation[delete-pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799[3c2ff19f-2710-4673-9b1f-26388cc3306a]]
I0907 20:38:53.472872       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-90470455-732b-41ef-8ee1-9ff20411e027" with version 3249
I0907 20:38:53.472929       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-90470455-732b-41ef-8ee1-9ff20411e027]: phase: Bound, bound to: "azuredisk-2546/pvc-2vqhh (uid: 90470455-732b-41ef-8ee1-9ff20411e027)", boundByController: true
... skipping 10 lines ...
I0907 20:38:58.746685       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799
I0907 20:38:58.746732       1 pv_controller.go:1436] volume "pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799" deleted
I0907 20:38:58.746749       1 pv_controller.go:1284] deleteVolumeOperation [pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799]: success
I0907 20:38:58.757920       1 pv_protection_controller.go:205] Got event on PV pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799
I0907 20:38:58.758182       1 pv_protection_controller.go:125] Processing PV pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799
I0907 20:38:58.758314       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799" with version 3416
I0907 20:38:58.758498       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799]: phase: Failed, bound to: "azuredisk-2546/pvc-8fdc7 (uid: 4464ea21-cf51-4f4d-8ce8-dcda3e836799)", boundByController: true
I0907 20:38:58.758658       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799]: volume is bound to claim azuredisk-2546/pvc-8fdc7
I0907 20:38:58.758804       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799]: claim azuredisk-2546/pvc-8fdc7 not found
I0907 20:38:58.758925       1 pv_controller.go:1108] reclaimVolume[pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799]: policy is Delete
I0907 20:38:58.759104       1 pv_controller.go:1753] scheduleOperation[delete-pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799[3c2ff19f-2710-4673-9b1f-26388cc3306a]]
I0907 20:38:58.759206       1 pv_controller.go:1764] operation "delete-pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799[3c2ff19f-2710-4673-9b1f-26388cc3306a]" is already running, skipping
I0907 20:38:58.764186       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-4464ea21-cf51-4f4d-8ce8-dcda3e836799
... skipping 360 lines ...
I0907 20:39:15.744377       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-kddm5f-dynamic-pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc" to node "capz-kddm5f-mp-0000000".
I0907 20:39:15.774148       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-2546, name kube-root-ca.crt, uid f11eb504-cdbc-4425-b532-7b61b38e4147, event type delete
I0907 20:39:15.778998       1 publisher.go:181] Finished syncing namespace "azuredisk-2546" (4.621629ms)
I0907 20:39:15.794671       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222" lun 0 to node "capz-kddm5f-mp-0000000".
I0907 20:39:15.794911       1 azure_controller_vmss.go:101] azureDisk - update(capz-kddm5f): vm(capz-kddm5f-mp-0000000) - attach disk(capz-kddm5f-dynamic-pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222) with DiskEncryptionSetID()
I0907 20:39:15.817203       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-2546, name default-token-ckvbh, uid 8c7bd290-a96a-4b1f-8035-b20857ebf7c4, event type delete
E0907 20:39:15.829859       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2546/default: secrets "default-token-s8rf4" is forbidden: unable to create new content in namespace azuredisk-2546 because it is being terminated
I0907 20:39:15.864516       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-2546, name default, uid 14f440e2-4c5b-4926-b79d-5d17e07109b5, event type delete
I0907 20:39:15.864846       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2546" (4.5µs)
I0907 20:39:15.864955       1 tokens_controller.go:252] syncServiceAccount(azuredisk-2546/default), service account deleted, removing tokens
I0907 20:39:15.904759       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2546" (4.701µs)
I0907 20:39:15.905640       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-2546, estimate: 0, errors: <nil>
I0907 20:39:15.915108       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-2546" (267.660614ms)
I0907 20:39:16.284229       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-1598
I0907 20:39:16.323681       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1598, name default-token-b5shw, uid d8eb62ee-f11e-4950-8572-77b0e69c1fe1, event type delete
E0907 20:39:16.337263       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1598/default: secrets "default-token-v5f4f" is forbidden: unable to create new content in namespace azuredisk-1598 because it is being terminated
I0907 20:39:16.348947       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-1598, name kube-root-ca.crt, uid d811be80-0e4b-47f5-bf2b-de6dce34ea21, event type delete
I0907 20:39:16.350628       1 publisher.go:181] Finished syncing namespace "azuredisk-1598" (1.374309ms)
I0907 20:39:16.425870       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1598/default), service account deleted, removing tokens
I0907 20:39:16.426181       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1598" (4µs)
I0907 20:39:16.426121       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1598, name default, uid ae2fc104-c0aa-44fb-bd3a-cbf5fc1dfb18, event type delete
I0907 20:39:16.442475       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1598" (4.6µs)
... skipping 540 lines ...
I0907 20:40:23.943238       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222]: claim azuredisk-8582/pvc-klggx not found
I0907 20:40:23.943254       1 pv_controller.go:1108] reclaimVolume[pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222]: policy is Delete
I0907 20:40:23.943444       1 pv_controller.go:1753] scheduleOperation[delete-pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222[390e89e2-4f8c-453a-9015-41c22d8e6456]]
I0907 20:40:23.943459       1 pv_controller.go:1764] operation "delete-pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222[390e89e2-4f8c-453a-9015-41c22d8e6456]" is already running, skipping
I0907 20:40:23.950794       1 pv_controller.go:1341] isVolumeReleased[pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222]: volume is released
I0907 20:40:23.950815       1 pv_controller.go:1405] doDeleteVolume [pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222]
I0907 20:40:24.013503       1 pv_controller.go:1260] deletion of volume "pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_0), could not be deleted
I0907 20:40:24.013543       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222]: set phase Failed
I0907 20:40:24.013554       1 pv_controller.go:858] updating PersistentVolume[pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222]: set phase Failed
I0907 20:40:24.018072       1 pv_protection_controller.go:205] Got event on PV pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222
I0907 20:40:24.018348       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222" with version 3645
I0907 20:40:24.018516       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222]: phase: Failed, bound to: "azuredisk-8582/pvc-klggx (uid: d22f2378-76ab-405b-a8e4-bc2e7bacf222)", boundByController: true
I0907 20:40:24.018704       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222]: volume is bound to claim azuredisk-8582/pvc-klggx
I0907 20:40:24.018807       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222]: claim azuredisk-8582/pvc-klggx not found
I0907 20:40:24.018892       1 pv_controller.go:1108] reclaimVolume[pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222]: policy is Delete
I0907 20:40:24.018997       1 pv_controller.go:1753] scheduleOperation[delete-pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222[390e89e2-4f8c-453a-9015-41c22d8e6456]]
I0907 20:40:24.019204       1 pv_controller.go:1764] operation "delete-pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222[390e89e2-4f8c-453a-9015-41c22d8e6456]" is already running, skipping
I0907 20:40:24.019918       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222" with version 3645
I0907 20:40:24.019944       1 pv_controller.go:879] volume "pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222" entered phase "Failed"
I0907 20:40:24.019954       1 pv_controller.go:901] volume "pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_0), could not be deleted
E0907 20:40:24.020052       1 goroutinemap.go:150] Operation for "delete-pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222[390e89e2-4f8c-453a-9015-41c22d8e6456]" failed. No retries permitted until 2022-09-07 20:40:24.519984888 +0000 UTC m=+1424.808365132 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_0), could not be deleted"
I0907 20:40:24.020275       1 event.go:291] "Event occurred" object="pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/virtualMachineScaleSets/capz-kddm5f-mp-0/virtualMachines/capz-kddm5f-mp-0_0), could not be deleted"
I0907 20:40:24.462500       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0907 20:40:31.170035       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="118.301µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:45280" resp=200
I0907 20:40:31.475084       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 8 items received
I0907 20:40:33.401170       1 gc_controller.go:161] GC'ing orphaned
I0907 20:40:33.401491       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
... skipping 38 lines ...
I0907 20:40:38.476308       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222" with version 3645
I0907 20:40:38.476900       1 pv_controller.go:910] updating PersistentVolume[pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc]: binding to "azuredisk-8582/pvc-scm8s"
I0907 20:40:38.477003       1 pv_controller.go:922] updating PersistentVolume[pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc]: already bound to "azuredisk-8582/pvc-scm8s"
I0907 20:40:38.477107       1 pv_controller.go:858] updating PersistentVolume[pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc]: set phase Bound
I0907 20:40:38.477144       1 pv_controller.go:861] updating PersistentVolume[pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc]: phase Bound already set
I0907 20:40:38.477194       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-8582/pvc-scm8s]: binding to "pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc"
I0907 20:40:38.477004       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222]: phase: Failed, bound to: "azuredisk-8582/pvc-klggx (uid: d22f2378-76ab-405b-a8e4-bc2e7bacf222)", boundByController: true
I0907 20:40:38.477318       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-8582/pvc-scm8s]: already bound to "pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc"
I0907 20:40:38.477418       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-8582/pvc-scm8s] status: set phase Bound
I0907 20:40:38.477512       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-8582/pvc-scm8s] status: phase Bound already set
I0907 20:40:38.477628       1 pv_controller.go:1038] volume "pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc" bound to claim "azuredisk-8582/pvc-scm8s"
I0907 20:40:38.477683       1 pv_controller.go:1039] volume "pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc" status after binding: phase: Bound, bound to: "azuredisk-8582/pvc-scm8s (uid: 2ecf8ed0-5b70-448d-82de-74cf0e01d7cc)", boundByController: true
I0907 20:40:38.477735       1 pv_controller.go:1040] claim "azuredisk-8582/pvc-scm8s" status after binding: phase: Bound, bound to: "pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc", bindCompleted: true, boundByController: true
... skipping 31 lines ...
I0907 20:40:38.479440       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc]: all is bound
I0907 20:40:38.479467       1 pv_controller.go:858] updating PersistentVolume[pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc]: set phase Bound
I0907 20:40:38.479518       1 pv_controller.go:861] updating PersistentVolume[pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc]: phase Bound already set
I0907 20:40:38.479195       1 pv_controller.go:1232] deleteVolumeOperation [pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222] started
I0907 20:40:38.487995       1 pv_controller.go:1341] isVolumeReleased[pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222]: volume is released
I0907 20:40:38.488016       1 pv_controller.go:1405] doDeleteVolume [pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222]
I0907 20:40:38.488059       1 pv_controller.go:1260] deletion of volume "pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222) since it's in attaching or detaching state
I0907 20:40:38.488075       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222]: set phase Failed
I0907 20:40:38.488086       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222]: phase Failed already set
E0907 20:40:38.488143       1 goroutinemap.go:150] Operation for "delete-pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222[390e89e2-4f8c-453a-9015-41c22d8e6456]" failed. No retries permitted until 2022-09-07 20:40:39.488096391 +0000 UTC m=+1439.776476735 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222) since it's in attaching or detaching state"
I0907 20:40:38.621971       1 node_lifecycle_controller.go:1047] Node capz-kddm5f-mp-0000000 ReadyCondition updated. Updating timestamp.
I0907 20:40:41.169424       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="92.8µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:59090" resp=200
I0907 20:40:45.514770       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 10 items received
I0907 20:40:51.170172       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="91.3µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:46540" resp=200
I0907 20:40:51.452486       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.PodSecurityPolicy total 9 items received
I0907 20:40:53.198453       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 33 lines ...
I0907 20:40:53.477830       1 pv_controller.go:910] updating PersistentVolume[pvc-a2093480-e83c-4283-a781-ebe3fbb36ee2]: binding to "azuredisk-8582/pvc-7m29q"
I0907 20:40:53.477854       1 pv_controller.go:922] updating PersistentVolume[pvc-a2093480-e83c-4283-a781-ebe3fbb36ee2]: already bound to "azuredisk-8582/pvc-7m29q"
I0907 20:40:53.477861       1 pv_controller.go:858] updating PersistentVolume[pvc-a2093480-e83c-4283-a781-ebe3fbb36ee2]: set phase Bound
I0907 20:40:53.477888       1 pv_controller.go:861] updating PersistentVolume[pvc-a2093480-e83c-4283-a781-ebe3fbb36ee2]: phase Bound already set
I0907 20:40:53.477901       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-8582/pvc-7m29q]: binding to "pvc-a2093480-e83c-4283-a781-ebe3fbb36ee2"
I0907 20:40:53.477924       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-8582/pvc-7m29q]: already bound to "pvc-a2093480-e83c-4283-a781-ebe3fbb36ee2"
I0907 20:40:53.477987       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222]: phase: Failed, bound to: "azuredisk-8582/pvc-klggx (uid: d22f2378-76ab-405b-a8e4-bc2e7bacf222)", boundByController: true
I0907 20:40:53.478019       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222]: volume is bound to claim azuredisk-8582/pvc-klggx
I0907 20:40:53.478038       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222]: claim azuredisk-8582/pvc-klggx not found
I0907 20:40:53.478130       1 pv_controller.go:1108] reclaimVolume[pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222]: policy is Delete
I0907 20:40:53.478291       1 pv_controller.go:1753] scheduleOperation[delete-pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222[390e89e2-4f8c-453a-9015-41c22d8e6456]]
I0907 20:40:53.478319       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a2093480-e83c-4283-a781-ebe3fbb36ee2" with version 3488
I0907 20:40:53.478107       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-8582/pvc-7m29q] status: set phase Bound
... skipping 7 lines ...
I0907 20:40:53.478810       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-a2093480-e83c-4283-a781-ebe3fbb36ee2]: all is bound
I0907 20:40:53.478818       1 pv_controller.go:858] updating PersistentVolume[pvc-a2093480-e83c-4283-a781-ebe3fbb36ee2]: set phase Bound
I0907 20:40:53.478827       1 pv_controller.go:861] updating PersistentVolume[pvc-a2093480-e83c-4283-a781-ebe3fbb36ee2]: phase Bound already set
I0907 20:40:53.478600       1 pv_controller.go:1232] deleteVolumeOperation [pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222] started
I0907 20:40:53.485569       1 pv_controller.go:1341] isVolumeReleased[pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222]: volume is released
I0907 20:40:53.485590       1 pv_controller.go:1405] doDeleteVolume [pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222]
I0907 20:40:53.485652       1 pv_controller.go:1260] deletion of volume "pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222) since it's in attaching or detaching state
I0907 20:40:53.485688       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222]: set phase Failed
I0907 20:40:53.485698       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222]: phase Failed already set
E0907 20:40:53.485783       1 goroutinemap.go:150] Operation for "delete-pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222[390e89e2-4f8c-453a-9015-41c22d8e6456]" failed. No retries permitted until 2022-09-07 20:40:55.485751514 +0000 UTC m=+1455.774131858 (durationBeforeRetry 2s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222) since it's in attaching or detaching state"
I0907 20:40:54.481629       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0907 20:40:55.019594       1 azure_controller_vmss.go:187] azureDisk - update(capz-kddm5f): vm(capz-kddm5f-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222) returned with <nil>
I0907 20:40:55.019671       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222) succeeded
I0907 20:40:55.019685       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222 was detached from node:capz-kddm5f-mp-0000000
I0907 20:40:55.019869       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222") on node "capz-kddm5f-mp-0000000" 
I0907 20:40:55.019875       1 azure_vmss.go:186] Couldn't find VMSS VM with nodeName capz-kddm5f-mp-0000000, refreshing the cache
... skipping 22 lines ...
I0907 20:41:08.478623       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc]: volume is bound to claim azuredisk-8582/pvc-scm8s
I0907 20:41:08.478703       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc]: claim azuredisk-8582/pvc-scm8s found: phase: Bound, bound to: "pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc", bindCompleted: true, boundByController: true
I0907 20:41:08.478756       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc]: all is bound
I0907 20:41:08.478796       1 pv_controller.go:858] updating PersistentVolume[pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc]: set phase Bound
I0907 20:41:08.478829       1 pv_controller.go:861] updating PersistentVolume[pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc]: phase Bound already set
I0907 20:41:08.478869       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222" with version 3645
I0907 20:41:08.478943       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222]: phase: Failed, bound to: "azuredisk-8582/pvc-klggx (uid: d22f2378-76ab-405b-a8e4-bc2e7bacf222)", boundByController: true
I0907 20:41:08.479023       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222]: volume is bound to claim azuredisk-8582/pvc-klggx
I0907 20:41:08.478549       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-8582/pvc-scm8s]: binding to "pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc"
I0907 20:41:08.479079       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222]: claim azuredisk-8582/pvc-klggx not found
I0907 20:41:08.479135       1 pv_controller.go:1108] reclaimVolume[pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222]: policy is Delete
I0907 20:41:08.479202       1 pv_controller.go:1753] scheduleOperation[delete-pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222[390e89e2-4f8c-453a-9015-41c22d8e6456]]
I0907 20:41:08.479252       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a2093480-e83c-4283-a781-ebe3fbb36ee2" with version 3488
... skipping 42 lines ...
I0907 20:41:13.719277       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222
I0907 20:41:13.719324       1 pv_controller.go:1436] volume "pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222" deleted
I0907 20:41:13.719339       1 pv_controller.go:1284] deleteVolumeOperation [pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222]: success
I0907 20:41:13.729368       1 pv_protection_controller.go:205] Got event on PV pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222
I0907 20:41:13.729487       1 pv_protection_controller.go:125] Processing PV pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222
I0907 20:41:13.729435       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222" with version 3722
I0907 20:41:13.729544       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222]: phase: Failed, bound to: "azuredisk-8582/pvc-klggx (uid: d22f2378-76ab-405b-a8e4-bc2e7bacf222)", boundByController: true
I0907 20:41:13.729570       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222]: volume is bound to claim azuredisk-8582/pvc-klggx
I0907 20:41:13.729588       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222]: claim azuredisk-8582/pvc-klggx not found
I0907 20:41:13.729612       1 pv_controller.go:1108] reclaimVolume[pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222]: policy is Delete
I0907 20:41:13.729634       1 pv_controller.go:1753] scheduleOperation[delete-pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222[390e89e2-4f8c-453a-9015-41c22d8e6456]]
I0907 20:41:13.729658       1 pv_controller.go:1232] deleteVolumeOperation [pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222] started
I0907 20:41:13.734103       1 pv_controller.go:1244] Volume "pvc-d22f2378-76ab-405b-a8e4-bc2e7bacf222" is already being deleted
... skipping 138 lines ...
I0907 20:41:24.675174       1 pv_controller.go:1108] reclaimVolume[pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc]: policy is Delete
I0907 20:41:24.675187       1 pv_controller.go:1753] scheduleOperation[delete-pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc[6dee7e7f-3bab-4609-95fb-732744a1327f]]
I0907 20:41:24.675302       1 pv_controller.go:1764] operation "delete-pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc[6dee7e7f-3bab-4609-95fb-732744a1327f]" is already running, skipping
I0907 20:41:24.675414       1 pv_controller.go:1232] deleteVolumeOperation [pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc] started
I0907 20:41:24.677349       1 pv_controller.go:1341] isVolumeReleased[pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc]: volume is released
I0907 20:41:24.677371       1 pv_controller.go:1405] doDeleteVolume [pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc]
I0907 20:41:24.677455       1 pv_controller.go:1260] deletion of volume "pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc) since it's in attaching or detaching state
I0907 20:41:24.677470       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc]: set phase Failed
I0907 20:41:24.677528       1 pv_controller.go:858] updating PersistentVolume[pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc]: set phase Failed
I0907 20:41:24.681995       1 pv_protection_controller.go:205] Got event on PV pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc
I0907 20:41:24.682217       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc" with version 3748
I0907 20:41:24.682524       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc]: phase: Failed, bound to: "azuredisk-8582/pvc-scm8s (uid: 2ecf8ed0-5b70-448d-82de-74cf0e01d7cc)", boundByController: true
I0907 20:41:24.682566       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc]: volume is bound to claim azuredisk-8582/pvc-scm8s
I0907 20:41:24.682586       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc]: claim azuredisk-8582/pvc-scm8s not found
I0907 20:41:24.682598       1 pv_controller.go:1108] reclaimVolume[pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc]: policy is Delete
I0907 20:41:24.682611       1 pv_controller.go:1753] scheduleOperation[delete-pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc[6dee7e7f-3bab-4609-95fb-732744a1327f]]
I0907 20:41:24.682621       1 pv_controller.go:1764] operation "delete-pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc[6dee7e7f-3bab-4609-95fb-732744a1327f]" is already running, skipping
I0907 20:41:24.682637       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc" with version 3748
I0907 20:41:24.682650       1 pv_controller.go:879] volume "pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc" entered phase "Failed"
I0907 20:41:24.682659       1 pv_controller.go:901] volume "pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc) since it's in attaching or detaching state
E0907 20:41:24.682712       1 goroutinemap.go:150] Operation for "delete-pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc[6dee7e7f-3bab-4609-95fb-732744a1327f]" failed. No retries permitted until 2022-09-07 20:41:25.182681364 +0000 UTC m=+1485.471061608 (durationBeforeRetry 500ms). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc) since it's in attaching or detaching state"
I0907 20:41:24.682908       1 event.go:291] "Event occurred" object="pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc) since it's in attaching or detaching state"
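The controller-manager lines above trace one pass of the delete-and-retry cycle: the volume is released, doDeleteVolume fails with "failed to delete disk(...) since it's in attaching or detaching state", the PV is moved to phase Failed with a VolumeFailedDelete event, and the operation is blocked until the "No retries permitted until ... (durationBeforeRetry 500ms)" deadline passes. A minimal, self-contained sketch of that pattern follows; every type, function, and constant here is illustrative only and is not the kube-controller-manager goroutinemap or Azure cloud-provider API.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// ErrDiskDetaching is a hypothetical sentinel standing in for the provider's
// "disk is in attaching or detaching state" error seen in the log.
var ErrDiskDetaching = errors.New("disk is in attaching or detaching state")

// backoffEntry mirrors the bookkeeping behind "No retries permitted until ...
// (durationBeforeRetry 500ms)": a failed operation remembers when it may run again.
type backoffEntry struct {
	notBefore time.Time
	duration  time.Duration
}

// deleteWithBackoff retries the delete; on the busy-disk error it marks the
// volume Failed and doubles the retry delay (capped), which is the shape of the
// repeated cycle in the log. Any other outcome ends the loop.
func deleteWithBackoff(deleteDisk func() error, markFailed func(error)) error {
	b := backoffEntry{duration: 500 * time.Millisecond} // initial delay, as in the log
	const maxDelay = 2 * time.Minute                     // illustrative cap
	for {
		if time.Now().Before(b.notBefore) {
			time.Sleep(time.Until(b.notBefore)) // "postponed due to exponential backoff"
		}
		err := deleteDisk()
		if err == nil {
			return nil // "deleteVolumeOperation [...]: success"
		}
		if !errors.Is(err, ErrDiskDetaching) {
			return err
		}
		markFailed(err) // "volume ... entered phase Failed" + VolumeFailedDelete event
		b.notBefore = time.Now().Add(b.duration)
		fmt.Printf("no retries permitted until %s (durationBeforeRetry %s)\n",
			b.notBefore.Format(time.RFC3339), b.duration)
		if b.duration *= 2; b.duration > maxDelay {
			b.duration = maxDelay
		}
	}
}

func main() {
	// Usage sketch: the disk stays "attaching or detaching" for two attempts,
	// then the detach completes and the third delete succeeds.
	attempts := 0
	err := deleteWithBackoff(
		func() error {
			attempts++
			if attempts < 3 {
				return ErrDiskDetaching
			}
			return nil
		},
		func(e error) { fmt.Println("marking PV Failed:", e) },
	)
	fmt.Println("final result:", err, "after", attempts, "attempts")
}
```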
I0907 20:41:25.890165       1 azure_controller_vmss.go:187] azureDisk - update(capz-kddm5f): vm(capz-kddm5f-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc) returned with <nil>
I0907 20:41:25.890242       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc) succeeded
I0907 20:41:25.890255       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc was detached from node:capz-kddm5f-mp-0000000
I0907 20:41:25.890284       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc") on node "capz-kddm5f-mp-0000000" 
I0907 20:41:31.169239       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="84.601µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:40150" resp=200
I0907 20:41:33.403572       1 gc_controller.go:161] GC'ing orphaned
I0907 20:41:33.403615       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:41:34.690157       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 8 items received
I0907 20:41:38.243981       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:41:38.478724       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:41:38.478860       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc" with version 3748
I0907 20:41:38.478939       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc]: phase: Failed, bound to: "azuredisk-8582/pvc-scm8s (uid: 2ecf8ed0-5b70-448d-82de-74cf0e01d7cc)", boundByController: true
I0907 20:41:38.478998       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc]: volume is bound to claim azuredisk-8582/pvc-scm8s
I0907 20:41:38.479024       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc]: claim azuredisk-8582/pvc-scm8s not found
I0907 20:41:38.479041       1 pv_controller.go:1108] reclaimVolume[pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc]: policy is Delete
I0907 20:41:38.479065       1 pv_controller.go:1753] scheduleOperation[delete-pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc[6dee7e7f-3bab-4609-95fb-732744a1327f]]
I0907 20:41:38.479123       1 pv_controller.go:1232] deleteVolumeOperation [pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc] started
I0907 20:41:38.488043       1 pv_controller.go:1341] isVolumeReleased[pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc]: volume is released
... skipping 3 lines ...
I0907 20:41:43.697833       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc
I0907 20:41:43.697874       1 pv_controller.go:1436] volume "pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc" deleted
I0907 20:41:43.697889       1 pv_controller.go:1284] deleteVolumeOperation [pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc]: success
I0907 20:41:43.705785       1 pv_protection_controller.go:205] Got event on PV pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc
I0907 20:41:43.705823       1 pv_protection_controller.go:125] Processing PV pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc
I0907 20:41:43.705886       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc" with version 3779
I0907 20:41:43.706050       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc]: phase: Failed, bound to: "azuredisk-8582/pvc-scm8s (uid: 2ecf8ed0-5b70-448d-82de-74cf0e01d7cc)", boundByController: true
I0907 20:41:43.706100       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc]: volume is bound to claim azuredisk-8582/pvc-scm8s
I0907 20:41:43.706117       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc]: claim azuredisk-8582/pvc-scm8s not found
I0907 20:41:43.706126       1 pv_controller.go:1108] reclaimVolume[pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc]: policy is Delete
I0907 20:41:43.706147       1 pv_controller.go:1753] scheduleOperation[delete-pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc[6dee7e7f-3bab-4609-95fb-732744a1327f]]
I0907 20:41:43.706169       1 pv_controller.go:1764] operation "delete-pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc[6dee7e7f-3bab-4609-95fb-732744a1327f]" is already running, skipping
I0907 20:41:43.711246       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-2ecf8ed0-5b70-448d-82de-74cf0e01d7cc
... skipping 46 lines ...
I0907 20:41:47.681038       1 pv_controller.go:1446] provisionClaim[azuredisk-7051/pvc-cz9mk]: started
I0907 20:41:47.681171       1 pv_controller.go:1753] scheduleOperation[provision-azuredisk-7051/pvc-cz9mk[bcd6299f-8994-49eb-8dea-eec839dbc110]]
I0907 20:41:47.681291       1 pv_controller.go:1764] operation "provision-azuredisk-7051/pvc-cz9mk[bcd6299f-8994-49eb-8dea-eec839dbc110]" is already running, skipping
I0907 20:41:47.683491       1 azure_managedDiskController.go:86] azureDisk - creating new managed Name:capz-kddm5f-dynamic-pvc-bcd6299f-8994-49eb-8dea-eec839dbc110 StorageAccountType:Standard_LRS Size:10
I0907 20:41:49.941379       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-8582
I0907 20:41:49.960685       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-8582, name default-token-25s2p, uid f3ebd363-e6c2-4c4e-9c19-cc6d3501a840, event type delete
E0907 20:41:49.973603       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8582/default: secrets "default-token-ws46x" is forbidden: unable to create new content in namespace azuredisk-8582 because it is being terminated
I0907 20:41:50.014942       1 tokens_controller.go:252] syncServiceAccount(azuredisk-8582/default), service account deleted, removing tokens
I0907 20:41:50.015229       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8582" (4.4µs)
I0907 20:41:50.015194       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-8582, name default, uid 4bbbfb94-bae1-42ab-aa1a-09329c77c92d, event type delete
I0907 20:41:50.046833       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-8582, name kube-root-ca.crt, uid fb3bbcce-6176-4ee5-9691-4d731eb33b23, event type delete
I0907 20:41:50.049577       1 publisher.go:181] Finished syncing namespace "azuredisk-8582" (2.415116ms)
I0907 20:41:50.061559       1 azure_managedDiskController.go:208] azureDisk - created new MD Name:capz-kddm5f-dynamic-pvc-bcd6299f-8994-49eb-8dea-eec839dbc110 StorageAccountType:Standard_LRS Size:10
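The provisioning lines above ("creating new managed Name:capz-kddm5f-dynamic-pvc-... StorageAccountType:Standard_LRS Size:10" followed by "created new MD ...") show the disk name being derived from the cluster name plus the claim UID, and the PVC request being expressed in whole GiB. A hedged sketch of just that derivation, under the assumption that a 10Gi request maps to Size:10; the helper and type names are illustrative, not the in-tree provisioner's API:

```go
package provisionsketch

import (
	"fmt"
	"math"
)

// managedDiskSpec captures the fields visible in the "creating new managed
// Name:... StorageAccountType:... Size:..." log line.
type managedDiskSpec struct {
	Name    string
	SKU     string
	SizeGiB int32
}

// specForClaim derives the "<cluster>-dynamic-pvc-<claim UID>" disk name seen in
// the log and rounds the requested bytes up to whole GiB.
func specForClaim(clusterName, claimUID, sku string, requestBytes int64) managedDiskSpec {
	const giB = 1 << 30
	return managedDiskSpec{
		Name:    fmt.Sprintf("%s-dynamic-pvc-%s", clusterName, claimUID),
		SKU:     sku,
		SizeGiB: int32(math.Ceil(float64(requestBytes) / float64(giB))),
	}
}
```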
... skipping 71 lines ...
I0907 20:41:50.177718       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-8582" (244.351585ms)
I0907 20:41:50.535638       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7726
I0907 20:41:50.580445       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-7726, name kube-root-ca.crt, uid 18dd840c-a456-4afd-8955-0d52c8041e8f, event type delete
I0907 20:41:50.583265       1 publisher.go:181] Finished syncing namespace "azuredisk-7726" (2.760118ms)
I0907 20:41:50.627587       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-7726, name default-token-bmtfr, uid 6e9d5e02-cedb-4c44-8499-b637378a0eac, event type delete
I0907 20:41:50.641827       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 9 items received
E0907 20:41:50.655200       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7726/default: secrets "default-token-7dh9r" is forbidden: unable to create new content in namespace azuredisk-7726 because it is being terminated
I0907 20:41:50.691206       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-7726, name default, uid 5bf59251-5fb9-4821-a656-108f8adc3bad, event type delete
I0907 20:41:50.691353       1 tokens_controller.go:252] syncServiceAccount(azuredisk-7726/default), service account deleted, removing tokens
I0907 20:41:50.691456       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7726" (4.1µs)
I0907 20:41:50.703395       1 disruption.go:427] updatePod called on pod "azuredisk-volume-tester-jh5v5"
I0907 20:41:50.703536       1 disruption.go:490] No PodDisruptionBudgets found for pod azuredisk-volume-tester-jh5v5, PodDisruptionBudget controller will avoid syncing.
I0907 20:41:50.703547       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-jh5v5"
... skipping 22 lines ...
I0907 20:41:51.273078       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-3086, estimate: 0, errors: <nil>
I0907 20:41:51.284494       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-3086" (198.997191ms)
I0907 20:41:51.614446       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-1387
I0907 20:41:51.691643       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-1387, name kube-root-ca.crt, uid 13530b49-90c3-4a57-82b9-590756e2dbd0, event type delete
I0907 20:41:51.693772       1 publisher.go:181] Finished syncing namespace "azuredisk-1387" (1.868112ms)
I0907 20:41:51.707953       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1387, name default-token-q6ntg, uid e34fab40-66ee-41ad-8818-95034aa9129f, event type delete
E0907 20:41:51.720640       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1387/default: secrets "default-token-2dqq4" is forbidden: unable to create new content in namespace azuredisk-1387 because it is being terminated
I0907 20:41:51.749329       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1387/default), service account deleted, removing tokens
I0907 20:41:51.749444       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1387, name default, uid 669209d0-df3e-4720-b6b1-c1b203cc6872, event type delete
I0907 20:41:51.749517       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1387" (45.701µs)
I0907 20:41:51.764586       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1387" (2.701µs)
I0907 20:41:51.764987       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-1387, estimate: 0, errors: <nil>
I0907 20:41:51.775757       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-1387" (165.090371ms)
... skipping 283 lines ...
I0907 20:42:57.636240       1 pv_controller.go:1108] reclaimVolume[pvc-bcd6299f-8994-49eb-8dea-eec839dbc110]: policy is Delete
I0907 20:42:57.636313       1 pv_controller.go:1232] deleteVolumeOperation [pvc-bcd6299f-8994-49eb-8dea-eec839dbc110] started
I0907 20:42:57.636375       1 pv_controller.go:1753] scheduleOperation[delete-pvc-bcd6299f-8994-49eb-8dea-eec839dbc110[1f8b9bde-7697-4a9f-a4c4-e4de0905c49b]]
I0907 20:42:57.636622       1 pv_controller.go:1764] operation "delete-pvc-bcd6299f-8994-49eb-8dea-eec839dbc110[1f8b9bde-7697-4a9f-a4c4-e4de0905c49b]" is already running, skipping
I0907 20:42:57.638953       1 pv_controller.go:1341] isVolumeReleased[pvc-bcd6299f-8994-49eb-8dea-eec839dbc110]: volume is released
I0907 20:42:57.638972       1 pv_controller.go:1405] doDeleteVolume [pvc-bcd6299f-8994-49eb-8dea-eec839dbc110]
I0907 20:42:57.639043       1 pv_controller.go:1260] deletion of volume "pvc-bcd6299f-8994-49eb-8dea-eec839dbc110" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-bcd6299f-8994-49eb-8dea-eec839dbc110) since it's in attaching or detaching state
I0907 20:42:57.639057       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-bcd6299f-8994-49eb-8dea-eec839dbc110]: set phase Failed
I0907 20:42:57.639066       1 pv_controller.go:858] updating PersistentVolume[pvc-bcd6299f-8994-49eb-8dea-eec839dbc110]: set phase Failed
I0907 20:42:57.642489       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-bcd6299f-8994-49eb-8dea-eec839dbc110" with version 3998
I0907 20:42:57.642520       1 pv_controller.go:879] volume "pvc-bcd6299f-8994-49eb-8dea-eec839dbc110" entered phase "Failed"
I0907 20:42:57.642667       1 pv_protection_controller.go:205] Got event on PV pvc-bcd6299f-8994-49eb-8dea-eec839dbc110
I0907 20:42:57.642531       1 pv_controller.go:901] volume "pvc-bcd6299f-8994-49eb-8dea-eec839dbc110" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-bcd6299f-8994-49eb-8dea-eec839dbc110) since it's in attaching or detaching state
I0907 20:42:57.642735       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-bcd6299f-8994-49eb-8dea-eec839dbc110" with version 3998
I0907 20:42:57.643174       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-bcd6299f-8994-49eb-8dea-eec839dbc110]: phase: Failed, bound to: "azuredisk-7051/pvc-cz9mk (uid: bcd6299f-8994-49eb-8dea-eec839dbc110)", boundByController: true
I0907 20:42:57.643335       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-bcd6299f-8994-49eb-8dea-eec839dbc110]: volume is bound to claim azuredisk-7051/pvc-cz9mk
I0907 20:42:57.643457       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-bcd6299f-8994-49eb-8dea-eec839dbc110]: claim azuredisk-7051/pvc-cz9mk not found
I0907 20:42:57.643473       1 pv_controller.go:1108] reclaimVolume[pvc-bcd6299f-8994-49eb-8dea-eec839dbc110]: policy is Delete
I0907 20:42:57.643518       1 pv_controller.go:1753] scheduleOperation[delete-pvc-bcd6299f-8994-49eb-8dea-eec839dbc110[1f8b9bde-7697-4a9f-a4c4-e4de0905c49b]]
E0907 20:42:57.643564       1 goroutinemap.go:150] Operation for "delete-pvc-bcd6299f-8994-49eb-8dea-eec839dbc110[1f8b9bde-7697-4a9f-a4c4-e4de0905c49b]" failed. No retries permitted until 2022-09-07 20:42:58.142990862 +0000 UTC m=+1578.431371106 (durationBeforeRetry 500ms). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-bcd6299f-8994-49eb-8dea-eec839dbc110) since it's in attaching or detaching state"
I0907 20:42:57.643892       1 event.go:291] "Event occurred" object="pvc-bcd6299f-8994-49eb-8dea-eec839dbc110" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-bcd6299f-8994-49eb-8dea-eec839dbc110) since it's in attaching or detaching state"
I0907 20:42:57.643928       1 pv_controller.go:1766] operation "delete-pvc-bcd6299f-8994-49eb-8dea-eec839dbc110[1f8b9bde-7697-4a9f-a4c4-e4de0905c49b]" postponed due to exponential backoff
I0907 20:42:57.699829       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ServiceAccount total 60 items received
I0907 20:42:57.830746       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-kddm5f-control-plane-pds8m"
I0907 20:42:58.645821       1 node_lifecycle_controller.go:1047] Node capz-kddm5f-control-plane-pds8m ReadyCondition updated. Updating timestamp.
I0907 20:43:00.185699       1 azure_controller_vmss.go:187] azureDisk - update(capz-kddm5f): vm(capz-kddm5f-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-bcd6299f-8994-49eb-8dea-eec839dbc110) returned with <nil>
I0907 20:43:00.185767       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-bcd6299f-8994-49eb-8dea-eec839dbc110) succeeded
I0907 20:43:00.185779       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-bcd6299f-8994-49eb-8dea-eec839dbc110 was detached from node:capz-kddm5f-mp-0000000
I0907 20:43:00.186030       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-bcd6299f-8994-49eb-8dea-eec839dbc110" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-bcd6299f-8994-49eb-8dea-eec839dbc110") on node "capz-kddm5f-mp-0000000" 
I0907 20:43:01.168980       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="93.401µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:44172" resp=200
I0907 20:43:04.414430       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRole total 6 items received
I0907 20:43:08.248495       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:43:08.483496       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:43:08.483693       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-bcd6299f-8994-49eb-8dea-eec839dbc110" with version 3998
I0907 20:43:08.483771       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-bcd6299f-8994-49eb-8dea-eec839dbc110]: phase: Failed, bound to: "azuredisk-7051/pvc-cz9mk (uid: bcd6299f-8994-49eb-8dea-eec839dbc110)", boundByController: true
I0907 20:43:08.483853       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-bcd6299f-8994-49eb-8dea-eec839dbc110]: volume is bound to claim azuredisk-7051/pvc-cz9mk
I0907 20:43:08.483893       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-bcd6299f-8994-49eb-8dea-eec839dbc110]: claim azuredisk-7051/pvc-cz9mk not found
I0907 20:43:08.483913       1 pv_controller.go:1108] reclaimVolume[pvc-bcd6299f-8994-49eb-8dea-eec839dbc110]: policy is Delete
I0907 20:43:08.483936       1 pv_controller.go:1753] scheduleOperation[delete-pvc-bcd6299f-8994-49eb-8dea-eec839dbc110[1f8b9bde-7697-4a9f-a4c4-e4de0905c49b]]
I0907 20:43:08.484019       1 pv_controller.go:1232] deleteVolumeOperation [pvc-bcd6299f-8994-49eb-8dea-eec839dbc110] started
I0907 20:43:08.493001       1 pv_controller.go:1341] isVolumeReleased[pvc-bcd6299f-8994-49eb-8dea-eec839dbc110]: volume is released
... skipping 4 lines ...
I0907 20:43:13.754339       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-bcd6299f-8994-49eb-8dea-eec839dbc110
I0907 20:43:13.754385       1 pv_controller.go:1436] volume "pvc-bcd6299f-8994-49eb-8dea-eec839dbc110" deleted
I0907 20:43:13.754399       1 pv_controller.go:1284] deleteVolumeOperation [pvc-bcd6299f-8994-49eb-8dea-eec839dbc110]: success
I0907 20:43:13.763634       1 pv_protection_controller.go:205] Got event on PV pvc-bcd6299f-8994-49eb-8dea-eec839dbc110
I0907 20:43:13.763681       1 pv_protection_controller.go:125] Processing PV pvc-bcd6299f-8994-49eb-8dea-eec839dbc110
I0907 20:43:13.764052       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-bcd6299f-8994-49eb-8dea-eec839dbc110" with version 4024
I0907 20:43:13.764099       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-bcd6299f-8994-49eb-8dea-eec839dbc110]: phase: Failed, bound to: "azuredisk-7051/pvc-cz9mk (uid: bcd6299f-8994-49eb-8dea-eec839dbc110)", boundByController: true
I0907 20:43:13.764128       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-bcd6299f-8994-49eb-8dea-eec839dbc110]: volume is bound to claim azuredisk-7051/pvc-cz9mk
I0907 20:43:13.764151       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-bcd6299f-8994-49eb-8dea-eec839dbc110]: claim azuredisk-7051/pvc-cz9mk not found
I0907 20:43:13.764159       1 pv_controller.go:1108] reclaimVolume[pvc-bcd6299f-8994-49eb-8dea-eec839dbc110]: policy is Delete
I0907 20:43:13.764177       1 pv_controller.go:1753] scheduleOperation[delete-pvc-bcd6299f-8994-49eb-8dea-eec839dbc110[1f8b9bde-7697-4a9f-a4c4-e4de0905c49b]]
I0907 20:43:13.764202       1 pv_controller.go:1232] deleteVolumeOperation [pvc-bcd6299f-8994-49eb-8dea-eec839dbc110] started
I0907 20:43:13.769230       1 pv_controller.go:1244] Volume "pvc-bcd6299f-8994-49eb-8dea-eec839dbc110" is already being deleted
... skipping 147 lines ...
I0907 20:43:21.500958       1 azure_vmss.go:186] Couldn't find VMSS VM with nodeName capz-kddm5f-mp-0000000, refreshing the cache
I0907 20:43:21.563726       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-kddm5f-dynamic-pvc-5bb1ce5b-4b14-42a1-90f7-a398cb9eb574. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-5bb1ce5b-4b14-42a1-90f7-a398cb9eb574" to node "capz-kddm5f-mp-0000000".
I0907 20:43:21.616941       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-5bb1ce5b-4b14-42a1-90f7-a398cb9eb574" lun 0 to node "capz-kddm5f-mp-0000000".
I0907 20:43:21.617005       1 azure_controller_vmss.go:101] azureDisk - update(capz-kddm5f): vm(capz-kddm5f-mp-0000000) - attach disk(capz-kddm5f-dynamic-pvc-5bb1ce5b-4b14-42a1-90f7-a398cb9eb574, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-kddm5f/providers/Microsoft.Compute/disks/capz-kddm5f-dynamic-pvc-5bb1ce5b-4b14-42a1-90f7-a398cb9eb574) with DiskEncryptionSetID()
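"GetDiskLun returned: cannot find Lun for disk ..." followed by "Trying to attach volume ... lun 0 to node ..." above shows the attach path choosing a free LUN slot on the VMSS instance before issuing the attach; a newly created disk with no other data disks attached therefore lands on LUN 0. A minimal sketch of that slot-picking step, with hypothetical names and an illustrative disk limit (the real limit depends on the VM size):

```go
package lunsketch

import "fmt"

// maxDataDisks is an illustrative per-VM data-disk limit.
const maxDataDisks = 8

// nextFreeLUN returns the lowest LUN not already used by an attached data disk.
func nextFreeLUN(usedLUNs []int32) (int32, error) {
	used := make(map[int32]bool, len(usedLUNs))
	for _, l := range usedLUNs {
		used[l] = true
	}
	for lun := int32(0); lun < maxDataDisks; lun++ {
		if !used[lun] {
			return lun, nil
		}
	}
	return -1, fmt.Errorf("no free LUN: all %d data-disk slots are in use", maxDataDisks)
}
```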
I0907 20:43:22.875667       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7051
I0907 20:43:22.947622       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-7051, name default-token-hz5cg, uid 8f5f3977-0c67-48bc-98cb-087f931c915a, event type delete
E0907 20:43:22.964737       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7051/default: secrets "default-token-zrvgp" is forbidden: unable to create new content in namespace azuredisk-7051 because it is being terminated
I0907 20:43:22.964811       1 tokens_controller.go:252] syncServiceAccount(azuredisk-7051/default), service account deleted, removing tokens
I0907 20:43:22.964850       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-7051, name default, uid 11fa5956-b1f4-4365-982b-a9221a9c8300, event type delete
I0907 20:43:22.965135       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7051" (4µs)
I0907 20:43:22.978670       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-7051, name kube-root-ca.crt, uid 954b7c6e-3bf2-4fd6-a9ae-0b90721c6320, event type delete
I0907 20:43:22.981223       1 publisher.go:181] Finished syncing namespace "azuredisk-7051" (3.026618ms)
I0907 20:43:23.012607       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7051, name azuredisk-volume-tester-jh5v5.1712aeb65c23e734, uid 89e5820c-4095-41b0-a1c1-9dd876ef68e6, event type delete
... skipping 467 lines ...
I0907 20:44:50.532718       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-9183
2022/09/07 20:44:50 ===================================================

JUnit report was created: /logs/artifacts/junit_01.xml

Ran 12 of 59 Specs in 1367.907 seconds
SUCCESS! -- 12 Passed | 0 Failed | 0 Pending | 47 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 37 lines ...
INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-858df9cd95-bgt8x, container manager
STEP: Dumping workload cluster default/capz-kddm5f logs
Sep  7 20:46:33.520: INFO: Collecting logs for Linux node capz-kddm5f-control-plane-pds8m in cluster capz-kddm5f in namespace default

Sep  7 20:47:33.522: INFO: Collecting boot logs for AzureMachine capz-kddm5f-control-plane-pds8m

Failed to get logs for machine capz-kddm5f-control-plane-5sz9p, cluster default/capz-kddm5f: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  7 20:47:34.408: INFO: Collecting logs for Linux node capz-kddm5f-mp-0000000 in cluster capz-kddm5f in namespace default

Sep  7 20:48:34.410: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-kddm5f-mp-0

Sep  7 20:48:34.776: INFO: Collecting logs for Linux node capz-kddm5f-mp-0000001 in cluster capz-kddm5f in namespace default

Sep  7 20:49:34.778: INFO: Collecting boot logs for VMSS instance 1 of scale set capz-kddm5f-mp-0

Failed to get logs for machine pool capz-kddm5f-mp-0, cluster default/capz-kddm5f: open /etc/azure-ssh/azure-ssh: no such file or directory
STEP: Dumping workload cluster default/capz-kddm5f kube-system pod logs
STEP: Collecting events for Pod kube-system/calico-kube-controllers-969cf87c4-9lk6l
STEP: Collecting events for Pod kube-system/kube-apiserver-capz-kddm5f-control-plane-pds8m
STEP: Creating log watcher for controller kube-system/kube-proxy-qcgwz, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-x6tw2, container kube-proxy
STEP: Collecting events for Pod kube-system/calico-node-b9g7f
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-kddm5f-control-plane-pds8m, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-kddm5f-control-plane-pds8m
STEP: Creating log watcher for controller kube-system/kube-proxy-pwcmn, container kube-proxy
STEP: Collecting events for Pod kube-system/etcd-capz-kddm5f-control-plane-pds8m
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-kddm5f-control-plane-pds8m, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-b9g7f, container calico-node
STEP: failed to find events of Pod "etcd-capz-kddm5f-control-plane-pds8m"
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-g47mq, container coredns
STEP: Collecting events for Pod kube-system/kube-proxy-pwcmn
STEP: Collecting events for Pod kube-system/kube-proxy-qcgwz
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-r2nvz, container coredns
STEP: Collecting events for Pod kube-system/coredns-558bd4d5db-g47mq
STEP: Collecting events for Pod kube-system/coredns-558bd4d5db-r2nvz
STEP: Fetching kube-system pod logs took 436.315239ms
STEP: failed to find events of Pod "kube-apiserver-capz-kddm5f-control-plane-pds8m"
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-kddm5f-control-plane-pds8m, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-scheduler-capz-kddm5f-control-plane-pds8m
STEP: Collecting events for Pod kube-system/kube-proxy-x6tw2
STEP: Creating log watcher for controller kube-system/calico-node-6glfv, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-6glfv
STEP: Creating log watcher for controller kube-system/calico-node-92px8, container calico-node
... skipping 22 lines ...